BUTknot at SemEval-2016 Task 5: Supervised Machine Learning with Term Substitution Approach in Aspect Category Detection
This paper describes an approach used to solve Aspect Category Detection (Subtask 1, Slot 1) of SemEval 2016 Task 5. The core of the presented system is based on supervised machine learning using a bigram bag-of-words model. The performance is enhanced by several pre-processing methods, most importantly by a term substitution technique. The system achieved very good performance in comparison with other submitted systems.
Introduction
As the Internet increasingly becomes a means of expressing opinions about various subjects, the need to process those opinions (subjective information) effectively is becoming ever more important. Many companies around the world are now interested in gathering public opinion and performing strategic moves accordingly. Thus, Sentiment Analysis (also known as Opinion Mining) has become an important area of interest.
Existing systems performing sentiment analysis usually only predict polarity of a given sentiment. While this can be sufficient in a lot of cases, sometimes we wish to analyze opinions about different aspects of the same entity. This task is known as Aspect-based Sentiment Analysis.
SemEval 2016 Task 5 (Pontiki et al., 2016) consists of three subtasks. The first subtask is about sentence-level sentiment analysis and is divided into three slots: Aspect Category Detection, Opinion Target Expression and Sentiment Polarity Detection.
There are multiple distinct domains in which participants were given the opportunity to test their systems. Each domain was available for one of the following languages: Arabic, Chinese, Dutch, English, French, Russian, Spanish and Turkish.
For all domains, there is a limited list of entities and aspects, known in advance, that the system should recognize. Each entity can be associated with only certain aspect(s). The set of all possible entity-aspect pairs is also limited and known in advance.
Each submitted system could run in two modes: Constrained (using no external data sources like lexicons or additional training sets) and Unconstrained (no data source restriction).
The system I propose is focused on Aspect Category Detection only, i.e. it only predicts which aspects of a given entity a given sentiment has opinions about. I decided to participate in the English language, for which two domains were available: restaurant and laptop reviews. There were 12 entity-aspect pairs for the restaurants domain and over 80 for the laptops domain.
Approach
My approach was inspired by the NLANGP system (Toh and Su, 2015) which achieved excellent results in SemEval 2015 (Pontiki et al., 2015) with a straightforward solution.
I model the task as a multi-label classification with binary relevance transformation, where labels correspond to the entity-aspect pairs. All train and test sentences are pre-processed (see Section 2.2). Words from each sentence are used as individual binary features of that sentence. For each entity-aspect pair, all training sentences are used as positive or negative examples of that entity-aspect pair.
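A minimal sketch of this binary-relevance setup is given below. It uses scikit-learn's LogisticRegression as an illustrative stand-in for the Vowpal Wabbit classifiers described in the next paragraph; the data-handling names are assumptions, not the author's actual code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def train_binary_relevance(sentences, labels_per_sentence, all_pairs):
    """Train one binary classifier per entity-aspect pair (binary relevance)."""
    # Binary (presence/absence) unigram + bigram bag-of-words features.
    vectorizer = CountVectorizer(ngram_range=(1, 2), binary=True)
    X = vectorizer.fit_transform(sentences)
    classifiers = {}
    for pair in all_pairs:
        # A sentence is a positive example iff it is annotated with `pair`
        # (assumes each pair has at least one positive training sentence).
        y = [1 if pair in labels else 0 for labels in labels_per_sentence]
        classifiers[pair] = LogisticRegression(max_iter=1000).fit(X, y)
    return vectorizer, classifiers
```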
Vowpal Wabbit (VW), a supervised machine learning tool, is used to train the resulting binary classifiers. More precisely, a variant of the online gradient descent algorithm is used to perform logistic regression with a squared cost function.
Model properties
It has been observed that the model offers higher accuracy if bigrams are allowed in VW. However, raising the n of VW's n-gram feature further is counterproductive, and allowing skips inside bigrams also does not help. For the RESTAURANT#MISCELLANEOUS aspect category, however, setting n = 5 seemed to be a better choice; the system generally has an unsatisfying score when predicting aspect categories associated with the MISCELLANEOUS attribute. Setting n > 2 did not improve the accuracy for any other aspect category.
I have also tuned VW's learning rate and a threshold T used for comparing the probabilities returned by VW's classifiers. When the system predicts that a sentiment has opinion(s) about an aspect category C with probability P, it drops the prediction iff P < T. Table 1 shows the tuned properties.
Table 1: Tuned properties per domain.

Domain                   Restaurants   Laptops
Learning rate            0.41          0.38
Prediction threshold T   0.40          0.34
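A hedged sketch of how the tuned threshold from Table 1 can be applied at prediction time, continuing the illustrative scikit-learn stand-in from the previous sketch:

```python
THRESHOLD = {"restaurants": 0.40, "laptops": 0.34}  # values from Table 1

def predict_categories(sentence, domain, vectorizer, classifiers):
    """Return all entity-aspect pairs whose predicted probability reaches T."""
    x = vectorizer.transform([sentence])
    t = THRESHOLD[domain]
    # The system drops a prediction iff P < T, i.e. it keeps it when P >= T.
    return [pair for pair, clf in classifiers.items()
            if clf.predict_proba(x)[0, 1] >= t]
```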
Text pre-processing
The initial text pre-processing step consists of removing the punctuation from each sentence and converting all characters to lower case. Stanford CoreNLP is used to tokenize each sentence, lemmatize all words, and also extract their part of speech (POS).
Filtering words
The POS tags are useful to estimate how important given words are in terms of aspect category detection. The system does not introduce the tags to the machine learning algorithm, but it uses each tag to consider removing the corresponding word from its sentence. Also, some words are removed regardless of their POS. An automated experiment over the official training set has been implemented to produce a POS filter and a list of stop words for each domain separately. Both features are included in the constrained mode.
The experiment showed that for the restaurants domain, it was not demonstrably beneficial to remove any word based on its POS. On the other hand, it generated a list of tags for the laptops domain.
The lists of stop words that the experiment produced were surprisingly small. It seemed there were just a few high-frequency words which were irrelevant in the learning process.
Term groups
As part of the pre-processing phase, a simple substitution system has been implemented to support machine learning. When multiple n-grams have roughly the same meaning, or they are related in a certain way, it is often beneficial if classifiers do not distinguish between them. A set of n-gram (term) lists in the fashion depicted in Table 2 has been manually compiled. The presence of the listed terms is then checked in each sentence and, if found, each occurrence is replaced by its representative; terms are always compared by their lemmas (a minimal sketch of this substitution step is given after the list below). The lists have been compiled by applying the following:
1. For each entity and attribute independently, only those sentences from the training dataset that contain the entity or attribute were selected. Then all unigrams from these sentences were sorted by number of occurrences. The most frequent words were manually checked, one by one, and the ones closely related to a particular entity or attribute were added to the respective lists.
2. In the case of the restaurants domain, opinion targets from the training datasets were also extracted. This resulted in much shorter lists of high-precision terms, so all terms could be individually checked in a reasonable time. Again, terms closely related to an entity or attribute were added to the respective lists. Some n-grams were split into multiple pieces, e.g. lava cake dessert → {lava cake, dessert}.
3. While performing the preceding two methods, I also noticed that some terms played a certain role in aspect category detection even if they were not associated with just a single entity or attribute but rather a set of them. This, for example, included opinion words (indicating the attributes GENERAL and QUALITY) and words describing problems (e.g. fail, problematic, blue screen, indicating the attributes OPERATION PERFORMANCE, QUALITY, and possibly others).
The following methods for extending the lists were also used, but they are not applied in the constrained mode:
4. For some specific words (e.g. adjectives expressing food taste), an online dictionary has been used to search for synonyms, and the term lists have been manually appended with suitable words.
Several lists of words publicly available on the Internet have also been included.
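A minimal sketch of the substitution step described above, assuming lemmatized token sequences as input; the example groups and representatives are illustrative, not the author's actual lists.

```python
TERM_GROUPS = {
    # representative -> lemma n-grams that are replaced by it (illustrative)
    "dessert": [("lava", "cake"), ("ice", "cream"), ("tiramisu",)],
    "problem": [("blue", "screen"), ("fail",), ("problematic",)],
}

def substitute_terms(lemmas):
    """Replace every occurrence of a listed n-gram by its representative."""
    lookup = {ngram: rep for rep, ngrams in TERM_GROUPS.items() for ngram in ngrams}
    max_n = max(len(ngram) for ngram in lookup)
    out, i = [], 0
    while i < len(lemmas):
        for n in range(max_n, 0, -1):          # prefer the longest match
            if tuple(lemmas[i:i + n]) in lookup:
                out.append(lookup[tuple(lemmas[i:i + n])])
                i += n
                break
        else:                                   # no listed term starts here
            out.append(lemmas[i])
            i += 1
    return out
```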
Other pre-processing steps
The system also tries to improve its accuracy by replacing all numbers with the word number. Numbers preceded by a currency symbol ($, €, £) are replaced with the word price (which then indicates the PRICES attribute).
Words containing both alpha and numeric characters are replaced with the word model as it was observed that in most cases, such words represent particular model names (e.g. i7, G73JH-x3, d620). This seemed to be helpful notably in the laptops domain.
The system removes all words shorter than two characters. It is important to note that this step comes after removing all non-alphanumeric characters. This means words like w/ are eventually removed.
The system also collapses runs of repeated consecutive letters, which effectively replaces words like waay or waaaaaay with the word way.
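The replacements described in this subsection can be sketched with simple regular expressions; the exact rules, symbol set, and the stage at which they are applied in the system's pipeline may differ.

```python
import re

CURRENCY_NUM = re.compile(r"[$€£]\s*\d+(?:[.,]\d+)?")       # $20, £9.99
NUMBER = re.compile(r"\b\d+(?:[.,]\d+)?\b")                  # bare numbers
MODEL = re.compile(r"\b(?=\w*\d)(?=\w*[a-zA-Z])[\w-]*\w\b")  # i7, G73JH-x3, d620
REPEATS = re.compile(r"(\w)\1+")                             # waaaaay -> way

def normalize(sentence):
    s = CURRENCY_NUM.sub(" price ", sentence)  # currency amounts -> "price"
    s = NUMBER.sub(" number ", s)              # remaining numbers -> "number"
    s = MODEL.sub(" model ", s)                # alphanumeric model names -> "model"
    s = REPEATS.sub(r"\1", s)                  # collapse repeated letters (as described,
                                               # this also collapses legitimate doubles)
    # Drop words shorter than two characters (after punctuation removal).
    return " ".join(w for w in s.split() if len(w) >= 2)
```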
Prediction post-processing
The presence of previously detected opinion words indicates the QUALITY and GENERAL attributes. The corresponding aspect categories are correctly predicted by VW only in cases in which the opinion words are directly preceded or followed by words indicating entities (i.e. they make up bigrams). When these words are separated by one or more other words, no aspect category is predicted. For this reason, the system always looks for the entity-related word which is closest to the opinion word and still not farther than four skips away. If such a word is found, the corresponding aspect category is additionally predicted. The same process is repeated for the PRICES attribute, with the difference that when no suitable entity is found, RESTAURANT#PRICES is predicted. The system does no post-processing in the laptops domain.
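A simplified sketch of this proximity rule, assuming token lists plus previously compiled sets of opinion words and entity-related words; all names are illustrative.

```python
def proximity_predictions(tokens, opinion_words, entity_to_category, max_skips=4):
    """Add a category when an opinion word has an entity word within `max_skips` tokens."""
    extra = set()
    for i, tok in enumerate(tokens):
        if tok not in opinion_words:
            continue
        # Scan outward from the opinion word, closest positions first.
        for dist in range(1, max_skips + 1):
            nearby = [tokens[j] for j in (i - dist, i + dist)
                      if 0 <= j < len(tokens) and tokens[j] in entity_to_category]
            if nearby:
                extra.add(entity_to_category[nearby[0]])
                break
    return extra
```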
Results
All techniques and features described in Section 2 have been tuned separately for each domain using 4-fold cross-validation. Table 3 displays the accuracy achieved.
The tuned system has been trained using the official training sets containing 2000 and 2500 sentences in the restaurants and laptops domains respectively. The test sets consisted of 676 sentences in the restaurants domain and 808 sentences in the laptops domain.
The system achieved very good results, especially in the restaurants domain, where it ranked third in the unconstrained mode, falling behind the winner by only 0.635%, and first in the constrained mode. Table 4 shows the F-score of the top five systems in each domain and mode. The best accuracy in the unconstrained mode was reached in both domains by the NLANGP team, which also ranked first in SemEval 2015.
Conclusion
This paper described my approach to aspect category detection. The presented system has ranked as one of the most accurate in this task. As I have never contributed to the field of aspect-based sentiment analysis (and SA generally) before, I find my results more than satisfactory.
An advantage of the system is its relatively high adaptability to previously unseen domains. All techniques except the term groups and the prediction post-processing can be tuned automatically with no additional manual help needed. Multilinguality is limited by the lemmatizer(s) used, but since lemmatization offers only a mild increase in accuracy, it can be omitted when working with unsupported languages.
The system does not take the review context of input sentences into consideration. In my future work I would like to remove this flaw. The term groups described in Section 2.2.2 will be significantly improved by extending them and creating new groups for other important terms.
Fibroblasts derived from human embryonic stem cells direct development and repair of 3D human skin equivalents
Introduction Pluripotent, human stem cells hold tremendous promise as a source of progenitor and terminally differentiated cells for application in future regenerative therapies. However, such therapies will be dependent upon the development of novel approaches that can best assess tissue outcomes of pluripotent stem cell-derived cells and will be essential to better predict their safety and stability following in vivo transplantation. Methods In this study we used engineered human skin equivalents (HSEs) as a platform to characterize fibroblasts that have been derived from human embryonic stem (hES) cells. We characterized the phenotype and the secretion profile of two distinct hES-derived cell lines with properties of mesenchymal cells (EDK and H9-MSC) and compared their biological potential upon induction of differentiation to bone and fat and following their incorporation into the stromal compartment of engineered HSEs. Results While both EDK and H9-MSC cell lines exhibited similar morphology and mesenchymal cell marker expression, they demonstrated distinct functional properties when incorporated into the stromal compartment of HSEs. EDK cells displayed characteristics of dermal fibroblasts that could support epithelial tissue development and enable re-epithelialization of wounds generated using a 3D tissue model of cutaneous wound healing, which was linked to elevated production of hepatocyte growth factor (HGF). Lentiviral shRNA-mediated knockdown of HGF resulted in a dramatic decrease of HGF secretion from EDK cells that led to a marked reduction in their ability to promote keratinocyte proliferation and re-epithelialization of cutaneous wounds. In contrast, H9-MSCs demonstrated features of mesenchymal stem cells (MSC) but not those of dermal fibroblasts, as they underwent multilineage differentiation in monolayer culture but were unable to support epithelial tissue development and repair and produced significantly lower levels of HGF. Conclusions Our findings demonstrate that hES-derived cells could be directed to specified and alternative mesenchymal cell fates whose function could be distinguished in engineered HSEs. Characterization of hES-derived mesenchymal cells in 3D, engineered HSEs demonstrates the utility of this tissue platform to predict the functional properties of hES-derived fibroblasts before their therapeutic transplantation.
Introduction
The use of pluripotent, human stem cells, including human embryonic stem (hES) cells and human induced pluripotent stem (hiPS) cells, for future therapies provides advantages over more traditional sources of progenitor cells, such as adult stem cells, due to their ability to give rise to a variety of differentiated cell types and to their unlimited expansion potential [1,2]. However, such therapies will be dependent upon the development of novel approaches that can best assess tissue outcomes of hES- and hiPS-derived cells and will be essential to better predict their safety and stability following in vivo transplantation. One possible approach would be to use three-dimensional (3D), engineered tissues to monitor the functional outcomes of hES- and hiPS-derived cells. By providing an in vivo-like microenvironment that enables progenitor cells to manifest their in vivo characteristics in a 3D tissue context, tissue engineering can play an important role in determining the function, stability, and safety of hES- and hiPS-derived cells before their future application.
Stromal fibroblasts play a critical role in regulating tissue homeostasis and wound repair through the synthesis of extracellular matrix proteins and by secreting paracrine-acting growth factors and cytokines that have a direct effect on the proliferation and differentiation of adjacent epithelial tissues [3][4][5][6]. Despite the critical impact of this reciprocal cross-talk between stromal fibroblasts and epithelial cells on tissue homeostasis, little is known about the identity and maturational development of the precursor cells that give rise to these fibroblasts. This incomplete understanding of fibroblast lineage development is in large part due to the lack of definitive markers and to their cellular heterogeneity in vivo that has complicated their isolation, characterization, and potential therapeutic applications [7][8][9].
In light of this, human pluripotent stem cells may serve as an alternative to adult tissues as a source of more uniform fibroblasts that may provide more predictable tissue outcomes upon their therapeutic use. Several previous studies have demonstrated the derivation of mesenchymal stem cell (MSC)-like cells from hES cells that can differentiate to bone, fat, and cartilage [10][11][12][13], and fibroblast-like cells that have been used as autogenic feeders to support the culture of undifferentiated hES cells [14][15][16][17]. In our previous work, we have demonstrated that hES cells give rise to fibroblast-like cells [18]; however, we have not determined if hES-derived cells can manifest the functional properties of dermal fibroblasts that can support the organization and development of 3D skin-like tissues, also known as human skin equivalents (HSEs), through epithelial-mesenchymal cross-talk. As the morphogenesis, homeostasis, and repair of many tissues depend on interactions between epithelial cells and their adjacent stromal fibroblasts [3][4][5][6], the functional analysis of hES-derived fibroblasts could best be accomplished in such engineered HSEs that demonstrate many features of their in vivo counterparts.
In this study, we have characterized two cell lines with features of MSC lineages (EDK and H9-MSC) that differ from each other in their production of hepatocyte growth factor (HGF), a growth factor known to be secreted by dermal fibroblasts that supports epithelial development and repair. In monolayer cultures, we found that EDK and H9-MSCs exhibited considerable overlap as seen by their mesenchymal morphology and expression of surface markers characteristic of both MSCs and dermal fibroblasts. However, EDK cells could not undergo differentiation to bone and fat and demonstrated properties similar to stromal fibroblasts that could support epithelial tissue development and enable re-epithelialization of HSEs, which was linked to the elevated expression and secretion of HGF. In contrast, H9-MSCs displayed multipotent differentiation capacity typical of an MSC phenotype [19], but did not support epithelial tissue development or repair, possibly due to a low level of HGF production. When HGF secretion from EDK cells was suppressed by shRNA, epithelial repair was significantly decreased, suggesting that the regenerative phenotype of EDK cells is mediated, at least in part, by HGF secretion. HSEs used in our studies provided a complex tissue microenvironment that enabled characterization of the functional properties of hES-derived fibroblasts and provide an important platform to further establish their stability, safety, and efficacy for future therapeutic transplantation.
Materials and methods
Monolayer cell culture of hES cells and directed differentiation to mesenchymal and fibroblast fates
The H9 hES cell line was maintained in culture as described [20], on irradiated feeder layers of mouse embryonic fibroblasts (MEF). EDK cells were prepared using our previously described protocol [18]. Briefly, we derived multiple, independent EDK cell lines by first growing H9 hES cells on MEFs fixed in 4% formaldehyde in differentiation medium supplemented with 0.5 nM human BMP-4 (R&D Systems, Minneapolis, MN, USA) from day four to seven of differentiation. These EDK cell lines were then propagated first on tissue culture plastic and then expanded on type I collagen-coated plates (BD Biosciences, San Jose, CA, USA). EDK cell lines were then screened by ELISA to determine their secretion of HGF and KGF, and the lines with the highest production of these growth factors were chosen for further study. H9-MSCs were prepared as previously described [21]. Briefly, H9 hES cells were grown on irradiated feeder layers of MEFs in differentiation medium for 10 days and then switched to MSC medium (Lonza, Basel, Switzerland) for four additional days to enrich for MSC-like cells. Differentiating cells were then fluorescence-activated cell sorting (FACS)-sorted for CD73-positive cells, expanded in MSC medium first on tissue culture plastic, and then passaged on type I collagen-coated plates. Both protocols for cell derivation from H9 hES cells are summarized in Figure 1a. Control human dermal fibroblasts (HFF) were derived from newborn foreskin and grown in DMEM (Invitrogen, Carlsbad, CA, USA) supplemented with 10% FBS (Hyclone, Logan, UT, USA).
Construction of human skin equivalents and 3D tissue model of cutaneous wound healing
Engineered 3D HSEs were constructed as previously described [22] by adding HFF, EDK, or H9-MSCs, or no cells, into type I collagen gels (Organogenesis, Canton, MA, USA) to a final concentration of 2.5 × 10⁴ cells/ml. Human keratinocytes (NHK) at a concentration of 5 × 10⁵ were seeded onto the collagen matrix, and tissues were then maintained submerged in low-calcium epidermal growth media for two days, for an additional two days in normal-calcium media, and then raised to the air-liquid interface (Organogenesis, Canton, MA, USA). For the preparation of 3D tissue models of cutaneous wound healing, HSEs constructed using HFF were wounded with a 4 mm punch and placed on the surface of a contracted collagen gel populated with either HFF, EDK, or H9-MSCs, or without cells, as previously described [22,23]. Wounded cultures were maintained at the air-liquid interface for 72 or 96 hours at 37°C in 7.5% CO₂ to monitor re-epithelialization. For quantitative analysis of wound re-epithelialization, tissue samples were fixed in 4% neutral-buffered formalin, embedded in paraffin, and serially sectioned at 8 μm. Histological sections were stained with H&E, images were captured using a Nikon Eclipse 80i microscope (Nikon Instruments Inc., Melville, NY, USA) equipped with a SPOT RT camera (Diagnostic Instruments, Sterling Heights, MI, USA), and analysed using Spot Advanced software (Diagnostic Instruments, Sterling Heights, MI, USA).
Real-time RT-PCR
Total RNA was extracted from confluent cultures of EDK, H9-MSC, or HFF cells using the RNeasy kit (Qiagen, Valencia, CA, USA) and cDNA was transcribed from 1 μg RNA using Quantiscript Reverse Transcriptase (Qiagen, Valencia, CA, USA). For each real-time RT-PCR reaction, we used 20 ng of cDNA, 200 nM of each primer, and 2X SYBRgreen (Applied Biosystems, Foster City, CA, USA) in a total sample volume of 12.5 μL, and samples were run in triplicate on the Bio-Rad CFX96 Real-Time PCR Detection System according to the manufacturer's instructions. The relative level of gene expression was assessed using the 2^(-ΔΔCt) method and results are presented as an average of two experiments and three technical replicates. The following oligonucleotide primer sequences were used: GAPDH-F1: 5'-tcgacagtcagccgcatcttcttt-3', GAPDH-R1: 5'-accaaatccgttgacctt-3', HGF-F1: 5'-aggggcactgtcaataccatt-3', and HGF-R1: 5'-cgtgaggatactgagaatcccag-3'.
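For reference, the relative expression in the 2^(-ΔΔCt) (Livak) method is obtained from the threshold cycles (Ct) as follows, using GAPDH as the housekeeping control and a reference sample (e.g. HFF) as the calibrator; the notation here is ours, not the authors':

ΔCt = Ct(HGF) − Ct(GAPDH),  ΔΔCt = ΔCt(sample) − ΔCt(reference),  relative expression = 2^(−ΔΔCt)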
RNAi
The pLKO.1-puro non-target shRNA control vector and the pLKO.1-puro vector containing shRNA against HGF (TRCN clone 3310) were purchased from Sigma (Sigma MISSION shRNA, St. Louis, MO, USA). Lentiviral particles were generated in 293FT cells using the ViraPower™ Lentiviral Expression System (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. Knockdown viruses were titered using an end-point dilution assay and cell lines were infected with lentiviruses at a multiplicity of infection of one. Stable cell lines were selected with puromycin (2 μg/ml).
ELISA
Tissue culture supernatants were harvested and processed using the commercial DuoSet HGF ELISA kit (R&D Systems, Minneapolis, MN, USA). Media were assayed in triplicate from at least three independent samples. For monolayer cultures, the values were normalized to the cell numbers counted in the respective cultures at the time of supernatant harvesting and expressed in pg/ml per 10⁴ cells.
Antibody-based cytokine array
HFF, EDK, or H9-MSC cells (10⁶) were seeded onto 100 mm tissue culture plates and grown in 10 ml of tissue culture medium. Forty-eight hours before the analysis, cultures were maintained in 5 ml of tissue culture medium. Tissue culture supernatants were harvested, and supernatants from plates containing equal cell numbers were processed using the commercial Proteome Profiler Human Angiogenesis Antibody Array (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. Histogram profiles for select analytes were generated by quantifying the mean spot pixel densities from the array membrane using ImageJ software (U.S. National Institutes of Health, Bethesda, Maryland, USA).
Morphologic, immunofluorescent, and immunohistochemical analysis of human skin equivalents
HFF, EDK, or H9-MSCs were grown to confluence on glass or type I collagen-coated coverslips, fixed in 4% paraformaldehyde, and permeabilized using 0.1% Triton X-100. Cells were stained using primary monoclonal antibodies.
Osteogenic and adipogenic differentiation assays
To induce osteogenesis, HFF, EDK, or H9-MSCs were grown to confluence in the presence of 100 nM dexamethasone, 0.05 mM ascorbic acid, and 10 mM β-glycerophosphate (all from Sigma, St. Louis, MO, USA) in DMEM supplemented with 1% non-essential amino acids (Invitrogen, Carlsbad, CA, USA) and 10% FBS. After 28 days, cultures were fixed in 1% formaldehyde and stained for 10 minutes with alizarin red solution (Sigma, St. Louis, MO, USA) to determine calcium deposition, and staining was quantified spectrophotometrically (A₅₅₀) after solubilization with 10% CPC (Sigma, St. Louis, MO, USA). To induce adipogenesis, HFF, EDK, or H9-MSCs were grown to confluence in 24-well plates in the presence of 1 μM dexamethasone, 50 μM indomethacin, 5 μg/ml insulin, and 0.5 mM 3-isobutyl-1-methyl-xanthine (all from Sigma, St. Louis, MO, USA) in DMEM supplemented with 1% non-essential amino acids and 10% FBS. After 28 days, cultures were fixed in 4% formalin and stained with oil red O solution (Sigma, St. Louis, MO, USA) for 30 minutes to visualize oil droplets, and staining was quantified spectrophotometrically (A₅₀₀) after solubilization with isopropanol. All results are presented as mean ± standard deviation (SD) of three experiments and three technical replicates.
Statistical analysis
Data are expressed as mean ± SD of at least three independent samples. Statistical comparisons between groups were performed with a two-tailed Student's t-test; *P ≤ 0.05 and **P ≤ 0.01 were considered significant.
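A minimal sketch of this comparison, assuming SciPy; the replicate values below are illustrative placeholders, not the study's data.

```python
# Two-tailed Student's t-test between two experimental groups.
from scipy import stats

group_a = [86.0, 92.0, 80.0]   # e.g. three replicate measurements, condition A
group_b = [39.0, 44.0, 34.0]   # e.g. three replicate measurements, condition B

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # two-tailed by default
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value <= 0.05}")
```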
EDK and H9-MSC generated from hES cells demonstrate morphologic and phenotypic characteristics of mesenchymal cells
We generated two populations of mesenchymal cells (EDK and H9-MSC) from hES cells using previously established protocols, as shown schematically in Figure 1a. EDK cells were derived using a differentiation protocol that we had previously established to generate fibroblast-like cells from H9 hES cells [18]. To ensure the fibroblast-like phenotype of EDK cells, several cell lines were derived from hES cells using the same differentiation conditions and were screened for the secretion of paracrine factors known to be secreted at high levels by dermal fibroblasts (keratinocyte growth factor (KGF) and hepatocyte growth factor (HGF); data not shown) [5,6]. Only EDK cell lines secreting elevated levels of KGF and HGF were selected for further study. H9-MSCs were derived using an alternative differentiation protocol that has previously been shown to generate MSC-like cells from H9 hES cells based on FACS-isolation of CD73-positive cell subpopulations [21]. Both EDK and H9-MSC populations showed similar fibroblast morphology and expression of Thy-1 and vimentin by immunohistochemical staining, similar to that seen in dermal HFF, while α-SMA-positive cells were observed more frequently in EDK cultures than in either H9-MSC or HFF cultures (Figures 1b and 1c). Surface marker profiling of EDK and H9-MSCs was performed by flow cytometric analysis and compared with the profile generated for HFFs. The EDK and H9-MSC lines presented a similar profile of surface antigen expression characteristic of mesenchymal cells. Both hES-derived cell lines expressed CD73, CD105, CD106, CD44, CD166, CD13, and CD146 and lacked expression of the hematopoietic and endothelial lineage markers CD45, CD34, and CD31 (Figure 1d). The intensity of fluorescent labelling and the distribution of labelled cells were similar for CD73, CD105, CD106, CD44, CD166, and CD13 when EDK and H9-MSC were compared. However, expression of CD10 was significantly higher, while CD146 was lower, in the EDK cell line when compared with H9-MSCs. This elevated expression of CD10 and reduced expression of CD146 was also seen in HFFs, suggesting a fibroblast-like phenotype for EDK cells. These findings indicated that while both hES-derived cell populations exhibited characteristic mesenchymal surface markers, differences in their CD profile suggested differences in their biological potential.
H9-MSCs undergo differentiation to osteogenic and adipogenic fates, while EDK cells do not show multilineage differentiation potential
To assess the biological potential of the EDK and H9-MSC populations, we performed osteogenic and adipogenic differentiation assays. After four weeks under differentiation conditions, H9-MSC cultures contained calcium deposits, as detected by alizarin red (Figures 2a and 2b), and oil red O-positive lipid droplets (Figures 2c and 2d). In contrast, EDK cells did not show this differentiation potential, as they did not form bone or fat (Figure 2). These findings demonstrated that the EDK and H9-MSC cell lines had divergent cell fates, as seen by their potential to form osteoblasts and adipocytes. Although H9-MSCs manifested properties of MSC-like progenitors, the absence of multilineage differentiation potential in EDK cells suggested that these cells may have undergone developmental restriction to a fibroblast lineage fate [10,24].
EDK cells demonstrate fibroblast function characterized by their support of the normal development of 3D human skin equivalents
As the paracrine cross-talk between fibroblasts and epithelial cells is required for the development of stratified epithelial tissue, we next incorporated EDK or H9-MSC cells into HSEs as a functional assay of their capacity to support normal skin development. EDK or H9-MSC cells were embedded into collagen gels and allowed to contract the gel for seven days. NHK were then seeded onto the surface of the collagen gels, and tissues were grown at an air-liquid interface for an additional seven days to compare their capacity to form a multi-layered epithelium. HSEs constructed with HFFs or with no cells embedded into the collagen gels served as positive and negative controls, respectively. Morphological analysis revealed that EDK cells promoted epithelial tissue development in a manner similar to HFFs, while tissues grown in the presence of H9-MSC did not support normalization of the overlying epithelium (Figure 3a). HSEs constructed with EDK cells developed a fully-stratified, multi-layered epithelium that was well-differentiated and demonstrated tissue architecture similar to epithelial tissues generated with control HFF cells. In contrast, tissues populated with H9-MSCs generated a thin and poorly-developed epithelium that was similar to tissues generated in the absence of any fibroblast support. The characteristic supra-basal distribution of keratin 1/10, a marker of differentiated keratinocytes, in both EDK- and HFF-containing tissues confirmed the capacity of EDK cells to promote normal epithelial morphogenesis and differentiation (Figure 3b).
As efficient development of HSEs is dependent on paracrine support from connective tissue fibroblasts, as has been previously shown for mature fibroblasts [3][4][5][6], we used a panel of growth factors and cytokines known to function as mediators of epithelial cell growth to compare the secretory profile of EDK, H9-MSC, and HFF cells. Supernatants from EDK, H9-MSC, and control HFF cultures containing equal cell numbers were harvested and assayed using an antibody-based cytokine array (Figures 3c and 3d). This analysis revealed that EDK cells produced elevated levels of epidermal growth factor (EGF), granulocyte-macrophage colony-stimulating factor (GM-CSF), platelet-derived growth factor-AA (PDGF-AA), KGF, and HGF when compared with both H9-MSC and HFF cells. In contrast, H9-MSC produced higher levels of IL-8, monocyte chemotactic protein-1 (MCP-1), and PDGF-AB/BB than EDK or HFF cells. Notably, the level of HGF, known to be essential for skin development and repair [5,6] was elevated in both EDK and HFF cells but not in H9-MSCs. Using real-time PCR and ELISA, we detected a 10-fold elevation of HGF expression and secretion in EDK cells compared with HFF and a 100-fold elevation when compared with H9-MSC (Figures 3e and 3f). These findings indicated that the biological potential of hES-derived mesenchymal cells was not uniform. Although H9-MSCs could differentiate to osteogenic and adipogenic lineages, this cell line did not provide the paracrine support needed for epithelial development or maturation. In contrast, EDK cells demonstrated properties similar to dermal fibroblasts (HFF) based on their capacity to direct epithelial morphogenesis in an in vivo-like tissue environment.
EDK cells accelerate the rate of wound healing in 3D human skin equivalents linked to their HGF production
We used our previously-developed 3D tissue model of cutaneous wound healing [22,23] to determine whether EDK and H9-MSCs could also modulate the re-epithelialization of wounded epithelia. To test this, HSEs constructed using HFF were wounded with a 4 mm punch and placed on the surface of a contracted collagen gel populated with either HFF, EDK, or H9-MSCs, or without cells (see schematic in Figure 4a). HFFs have previously been shown to enable complete re-epithelialization [23,25] and served as a positive control, while collagen gels generated without any cells were used as a negative control for re-epithelialization. Ninety-six hours after wounding, the wound bed was partially or completely covered by a migrating epithelial tongue towards the central area of the wound bed and by a more stratified epithelium toward the wound margins (Figure 4b). The degree of re-epithelialization following wounding was measured by calculating the distance separating the two epithelial tongues, normalized to the distance between the initial wound margins, and expressed graphically as the percentage of wound closure (Figure 4c). In general, EDK cells showed nearly complete or complete wound closure that was similar to tissues in which HFF were incorporated (HFF = 86 ± 12% and EDK = 99 ± 1.6%). In contrast, H9-MSC-harboring tissues and the tissues constructed without any fibroblast support showed a significantly lower degree of re-epithelialization compared with HFF (H9-MSC = 39 ± 5.5% and no cells = 17 ± 5%, t-test: P = 0.003 and P = 0.0007, respectively; Figure 4c). To test whether the degree of wound re-epithelialization was linked to HGF levels, HGF concentrations in supernatants from wounded HSEs were measured by ELISA. Quantification of HGF production revealed that both HFF- and EDK-containing tissues produced significantly higher levels of HGF (5-fold and 50-fold higher, respectively) than H9-MSC-containing tissues or tissues without cells in their stroma (Figure 4d). These results imply that HGF-mediated paracrine signalling was responsible, at least in part, for the wound re-epithelialization mediated by EDK cells.
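One way to express the wound-closure measure described above (our notation, not the authors'): with d₀ the distance between the initial wound margins and d_t the remaining gap between the two epithelial tongues at the time of analysis,

wound closure (%) = (1 − d_t / d₀) × 100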
Suppression of HGF production impairs the ability of EDK cells to promote wound healing in 3D human skin equivalents
To confirm that HGF secretion is necessary to optimally direct the re-epithelialization of cutaneous wounds mediated by EDK cells, levels of HGF secreted from EDK cells were significantly reduced by introducing a lentiviral shRNA construct that targeted HGF mRNA (shHGF). Efficiency of shHGF in EDK cells was measured relative to a scrambled shRNA control (shScram) by ELISA and showed a 95% reduction in secretion of HGF from EDK cells (Figure 5a, top panel). This shHGF lentiviral construct did not alter secretion of other growth factors, such as KGF (Figure 5a, lower panel). The effect of decreased HGF secretion on the capacity of EDK cells to promote re-epithelialization of wounded HSEs was examined by incorporating EDK-shHGF and EDK-shScram into the 3D tissue model of cutaneous wound repair. Tissue morphology and the degree of re-epithelialization following wounding were analysed 72 hours after wounding. Tissues constructed using EDK-shHGF cells showed a significantly lower degree of re-epithelialization compared with EDK-shScram (EDK-shScram = 33 ± 9% vs. EDK-shHGF = 11.6 ± 5.8%, t-test: P = 0.017; Figures 5b and 5c). HGF levels in supernatants from wounded HSEs were measured by ELISA and showed a roughly 85% reduction of HGF levels in cultures containing EDK-shHGF (Figure 5d). To further characterize the phenotype of wounded epithelial tissues in the presence of EDK-shHGF and control EDK-shScram, we assayed proliferation of keratinocytes in the basal layer of wound margins using a BrdU-incorporation assay (Figure 5e). Tissues harboring EDK-shHGF demonstrated a significantly lower percentage of proliferating basal keratinocytes compared with EDK-shScram (EDK-shScram = 10 ± 3.5% and EDK-shHGF = 4.4 ± 1.8%, t-test: P = 0.004; Figure 5f). These results support the known function of HGF as a mediator of epithelial cell proliferation in restoring epithelial integrity following wounding [6]. This suggested that hES-derived EDK cells stimulated wound re-epithelialization by providing paracrine signals, such as HGF, that are needed to direct the repair of a cutaneous wound.

Figure 4 EDK cells accelerate re-epithelialization of wounded HSEs. (a) Schematic of the 3D tissue model of wound re-epithelialization. In step 1, a full-thickness wound is generated by excising a human skin equivalent (HSE). In step 2, the wounded HSE is placed on a second, contracted collagen gel populated with EDK, H9-MSC, or human foreskin fibroblasts (HFF), or constructed without cells. In step 3, keratinocytes (NHK) undergo migration to close the wound gap and restore epithelial integrity. The far right panel is an image of a six-well insert containing an HSE 96 hours after wounding. (b) Representative morphology of wounded tissues constructed with HFF, EDK, H9-MSC, or without cells 96 hours after wounding (black arrows demarcate the initial wound edges). Bars, 200 μm. (c) EDK showed a rate of wound closure similar to tissues in which HFF were incorporated. The degree of re-epithelialization was significantly lower in tissues containing H9-MSC or no cells as compared with HFF (t-test: **P < 0.01). (d) Levels of hepatocyte growth factor (HGF) in supernatants of wounded tissues 96 hours after wounding as measured by ELISA. HFF- and EDK-containing tissues produced higher levels of HGF as compared with H9-MSC-containing tissues or tissues constructed without cells. All results are presented as the mean ± standard deviation of three independent experiments and three technical replicates.
Discussion
Our comparative characterization of two hES-derived cell lines (EDK and H9-MSC) that were differentiated under different conditions has revealed that both cell lines exhibited similar morphology and cell surface marker expression indicative of a mesenchymal phenotype [19]. However, although EDK cells showed some phenotypic overlap with H9-MSC and adult MSCs [19] in two-dimensional (2D) monolayer culture, their functional properties in engineered 3D tissues and the absence of the multilineage differentiation potential seen in H9-MSC provided evidence of their fibroblast lineage commitment. Thus, we have demonstrated that engineered 3D tissues can be used as a sensitive assay to characterize the biological potency of hES-derived fibroblasts. Incorporation of hES-derived cells into tissues that mimic their in vivo counterparts enables a more complete determination of their lineage commitment and phenotype than conventional monolayer cultures. We have previously generated a fibroblast-like cell line that partially supported a 3D tissue whose surface cells were hES-derived cells with ectodermal features [18]. We now extend those studies using an hES-derived cell line that produced elevated levels of HGF to provide paracrine support for tissue development and repair that mimics the functional features of dermal fibroblasts. We have shown that the suppression of HGF production in EDK cells is linked to a significant decrease in epithelial proliferation and repair, suggesting that these cells mediate the epithelial tissue phenotype through paracrine support. In contrast, an hES-derived cell line that manifests multilineage differentiation but very low HGF levels (H9-MSC) was unable to support epithelial tissue development and repair.
Despite the critical impact of fibroblasts on tissue morphogenesis, homeostasis, and repair, gaps in our understanding of fibroblast lineage development have made the identification and reproducible isolation of specific fibroblast populations particularly challenging [8,9]. In this light, we expect that our findings will help address a central challenge facing the clinical application of cells with properties of dermal fibroblasts for regenerative medicine by developing an efficient approach to reproducibly procure these cells from pluripotent stem cells, in a way that offers predictable and effective tissue outcomes upon their therapeutic use [1,2].
In previous studies, MSC-like cells expressing α-smooth muscle actin (α-SMA) have been derived from hES cells [26,27], and fibroblast-like cells isolated from hES cells have been used as autogenic feeders to support the culture of undifferentiated hES cells [14][15][16][17]. In addition, we have found that the surface marker profiles of the EDK and H9-MSC cell lines are similar to those expressed on MSCs previously derived from hES cells using differentiation conditions alternative to those described in our study [10][11][12][13]. However, MSC-like cells derived in these previous studies were not grown in 3D tissue microenvironments that could provide a more complete functional characterization of their identity as fibroblasts. As the use of cell surface markers to distinguish MSCs from cells restricted to a fibroblast lineage fate is limited, tissue-based, functional readouts are essential to predict the potency of hES-derived cells differentiated into fibroblast lineages [24,28]. As it is possible that EDK cells represent a heterogeneous mixture of fibroblasts and a smaller number of cells that retain MSC-like features, their future therapeutic use may require further characterization of fibroblast subpopulations. This approach has recently been demonstrated to improve the specificity of MSCs differentiated from hES cells in ways that can select for reproducible and clinically-applicable cells [11]. In addition, it remains unclear what factors may direct the differentiation of EDK cells to a fibroblast fate.
Transplantation of adult-derived MSCs has previously been shown to promote tissue regeneration and to improve wound repair in vivo [29][30][31][32]. Increasing evidence suggests that this repair potential is linked to the ability of MSCs to secrete paracrine-acting factors that can induce changes in their tissue microenvironment to guide healing [33,34]. We found that EDK cells secreted a broad repertoire of growth factors and cytokines linked to a variety of paracrine-mediated tissue responses, including stimulatory, paracrine factors known to accelerate the healing of epithelial wounds. Specifically, we found that this augmented re-epithelialization of wounded keratinocytes was linked to the production of HGF by EDK cells, as the diminished production of this growth factor upon shRNA infection significantly decreased tissue re-epithelialization. As HGF is known to be induced in stromal cells in response to injury [5,6], directed differentiation of hES-derived fibroblasts may allow these cells to be adapted for cell therapy and cutaneous regeneration. Beyond this, a decline in fibroblast numbers or function has been implicated in both skin aging and cancer development, because the age-related accumulation of senescent fibroblasts is thought to create a microenvironment that promotes squamous cell carcinoma progression [35][36][37]. As restoration of growth factor production in fibroblasts may decrease cancer risk and sustain tissue homeostasis, the development of future cell-based treatments using hES-derived fibroblasts can potentially augment repair and improve overall tissue health. We expect that our findings will enable future studies aimed at clarifying factors regulating fibroblast development and will help identify progenitor cells that give rise to these cells in human skin and other tissues.

Figure 5 Suppression of HGF production impairs EDK-mediated re-epithelialization of wounded HSEs. (a) Efficiency of shRNA knockdown of hepatocyte growth factor (shHGF) in EDK cells relative to a scrambled shRNA control (shScram) as measured by ELISA. Secretion of hepatocyte growth factor (HGF; top panel) was reduced by 95% relative to shScram (t-test: **P < 0.01); secretion of KGF (lower panel) was not affected by shHGF. Data are normalized to shScram and all results are expressed as the mean ± standard deviation (SD) of three experiments and three technical replicates per experiment. (b) Representative morphology of wounded tissues constructed with EDK-shHGF and EDK-shScram cells 72 hours after wounding (black arrows demarcate the initial wound edges; white arrows demarcate the tips of the epithelial tongues). Bars, 200 μm. (c) The degree of re-epithelialization was significantly lower in tissues containing EDK-shHGF cells as compared with EDK-shScram (t-test: **P < 0.01). (d) Relative levels of HGF in supernatants of wounded tissues 72 hours after wounding as measured by ELISA. Tissues containing EDK-shHGF cells produced significantly lower levels of HGF as compared with EDK-shScram (t-test: **P < 0.01). (e) Proliferation of basal keratinocytes in tissues containing EDK-shHGF and EDK-shScram cells was analyzed using immunoperoxidase staining with an anti-BrdU antibody 72 hours after wounding (black arrows demarcate the initial wound edges; white arrows demarcate BrdU-positive cells). (f) Quantification of BrdU-positive basal keratinocytes. Tissues containing EDK-shHGF demonstrated a significantly lower percentage of proliferating basal keratinocytes compared with EDK-shScram (t-test: **P < 0.01). Bars, 100 μm. All results are presented as the mean ± SD of three independent experiments and three technical replicates.
Conclusions
A central challenge that must be overcome before the therapeutic application of hES-derived cells is the development of reliable and sensitive methods to evaluate their safety and efficacy [1,2]. Although hES cells can undergo directed differentiation to numerous cell types, progress towards their clinical application has been limited by the lack of tools and platforms to evaluate their stability in an in vivo-like tissue context. It is therefore critical to fully characterize the properties of differentiated cells derived from hES cells by developing engineered, preclinical tissue models that will better predict their behavior following future therapeutic transplantation to humans. As we have shown that fibroblasts derived from hES cells support the normal development of HSEs, our study is an important step towards determining the biological potency of clinically-relevant, functional cell types derived from hES cells.
Ubiquitous learning model using interactive internet messenger group (IIMG) to improve engagement and behavior for smart campus
The recent popularity of internet messenger-based smartphone technologies has motivated some university lecturers to use them for educational activities. These technologies have enormous potential to enhance the teaching and ubiquitous learning experience for smart campus development. However, the design of a ubiquitous learning model using an interactive internet messenger group (IIMG), and empirical evidence that would favor a broad application of mobile and ubiquitous learning in smart campus settings to improve engagement and behavior, are still limited. In addition, the expectation that mobile learning could improve engagement and behavior on a smart campus cannot be confirmed, because the majority of the reviewed studies followed instruction paradigms. This article aims to present a ubiquitous learning model design and to show learners' experiences of improved engagement and behavior using IIMG for learner–learner and learner–lecturer interactions. The method applied in this paper includes a design process and quantitative analysis techniques, with the purpose of identifying scenarios of ubiquitous learning and capturing the impressions of learners and lecturers about the engagement and behavior aspects, and their contribution to learning.
Introduction
The idea of a smart campus is powerful today. The smart campus has been widely accepted and used in education reform strategies in several countries to improve the quality of the campus learning environment [1]. With the adoption of smartphone technology, smarter campuses are being built to benefit lecturers and learners, manage available resources, and enhance the user experience with proactive services anytime and anywhere [2].
Within the context of a smart campus, smartphone technology can be used to support teaching and learning in a ubiquitous environment through an interactive internet messenger group (IIMG). Some examples of IIMG educational activities are content generating, interacting, collaborating, and sharing [3][4]. Content generating occurs during affective interaction and in question-and-answer sessions. The application is designed to provide a platform for learners to share their knowledge, broadcast news, and ask for help [3][5][6]. Members are able to share or publish their work and ideas by message for others to view and download; for example, multimedia and document files can be shared as messages [3][7]. IIMG can be used by learners to collaboratively learn how to solve problems with members of a group, or to organize collaborative learning and study groups [3][4].
Many researchers have discussed and focused on the infrastructure required for constructing smart campuses and on applications related to learning in ubiquitous environments. The authors of [8] proposed a framework specification for u-learning in a smart campus model that integrates real-world learning resources with a campus social network; this model aims to embed learners within a smart campus environment that provides context-based personalized learning and interaction or feedback. The authors of [5] designed and implemented a prototype system based on the OnCampus scheme, developed around three functional modules (Group, Transaction, and Forum) that provide services related to learning; OnCampus can contribute significantly through the creation of a social environment based on interest mining, the provision of educational guidance based on emotion analysis, its role as an information-sharing platform, and the development of a secondary platform aimed at the optimal allocation of campus resources. The authors of [9] proposed an adaptive content delivery model for context-aware ubiquitous learning built around three service levels, which creates adaptive contents so that learners can access learning according to their interests and contexts. The evaluation of that ubiquitous learning system shows that learners may not only study media on a mobile device at any time and any place but also gain a better learning experience.
The majority of mobile and u-learning model studies showed positive effects. However, the design of a ubiquitous learning model using an interactive internet messenger group (IIMG), and empirical proof that would favor a broad application of mobile technology devices and ubiquitous learning in smart campus settings to improve engagement and behavior, are still limited. In addition, the expectation that mobile learning could improve engagement and behavior on a smart campus cannot be confirmed, because the majority of the reviewed studies followed instruction paradigms.
This paper aims to design and implement a ubiquitous learning model using an interactive internet messenger group (IIMG), and to present empirical evidence about its use to improve engagement and behavior in learner–learner and learner–lecturer interactions on a smart campus. The method applied in this paper includes a model design process and quantitative analysis techniques, with the purpose of identifying scenarios of the ubiquitous learning model using IIMG and capturing the impressions of learners and lecturers about the engagement and behavioral aspects, and their contribution to learning.
Ubiquitous Learning on Smart Campus
Education in a smart campus, supported by mobile and smart technologies and making use of smart tools and mobile devices, can be considered smart education in a ubiquitous environment. In this respect, we observe that novel technologies have been widely adopted on campuses to provide connectivity between learners and their surrounding environments. For learners, learning goals are identified to trigger didactic models which guide their instruction around real-world data, based on unique learning contexts and delivered with the right information, at the right time, in the right place, in the right way, to the right person [10]. Ubiquitous learning is not restricted to the classroom or to formal learning environments; on the contrary, it involves situating learners in both the real and the virtual world to extend their learning experiences [11]. Therefore, ubiquitous learning on a smart campus must provide learners with opportunities to learn in their own environment, in other words, selecting and using the proper equipment for their work and their living experiences, so that it can support ongoing, contextualized, and useful learning for the learner [12][13].
Using IIMG communication for ubiquitous learning
Internet Messenger (IM) can be used as an alternative approach to a smart campus-based learning environment. IM has a series of attractive features: interaction in IM provides a mobile group communication service among members of the campus community; if a group member sends a message to a group, the message goes to the IM server, and the server then forwards it to each of the group members. This interaction makes IM an essential component of the learning process. In the u-learning environment, interaction among participants is a crucial concern for lecturers and learners [14]. As a complex and multifaceted component, interaction provides a diverse range of functions in u-learning and is vital for effectively improving learning [14]. Some studies have indicated a positive effect of interaction on learners' academic achievement and performance as well as their satisfaction [15].
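A toy sketch of the group fan-out just described: a member posts to the IM server, which keeps a record and forwards the message to every other group member. Class and method names are illustrative, not an actual messenger API.

```python
class Member:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, group_name, sender, text):
        self.inbox.append((group_name, sender, text))


class MessengerGroup:
    """Server-side view of one IM group."""
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.history = []                    # every message is recorded

    def join(self, member):
        self.members.add(member)

    def post(self, sender, text):
        self.history.append((sender.name, text))
        for member in self.members - {sender}:   # fan-out to the other members
            member.receive(self.name, sender.name, text)
```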
IIMG to improve engagement and behavior
Some of the literature uses the term engagement to indicate readiness to devote considerable energy to studying, and a desire and compulsion to participate actively in everyday campus activities, involving things like attending classes, participating actively in learner organizations, and adhering to the lecturer's directives in class, such as submitting a progress report or assignment [16][17]. Learner engagement includes both academic and non-academic activities, and when learners become more engaged in the campus experience, this tends to generate high-quality learning [18].
The rapid development of IM applications like WhatsApp, Catfiz, and Facebook Messenger has sparked the creative incorporation of these media into current pedagogical applications and processes. These technologies are rapidly moving beyond their original purpose and have a significant impact on learners' collaboration and engagement [19], thus contributing to the improvement of learning efficiency and learner engagement when instruction is specifically aimed at encouraging learner-centered activities [20].
Approach and methodology
In describing the methods used to conduct this research, the design of the u-learning model using IIMG, the participants, and the data collection and analysis are discussed, following the guidelines of quantitative research techniques [21].
U-Learning model using IIMG
This modeling technique was adopted to propose scenarios and a conceptual design of the u-learning environment, as illustrated in Fig. 1. The scenario shows that a learner can learn in the ubiquitous learning environment; notifications and adaptive learning support messages are sent to learners by IM in the group, as illustrated in Fig. 2. The learner can use a smartphone or mobile device to perform learning tasks quickly. Moreover, the learning system can be recommended by lecturers so that learners can consult them about their learning problems via IM. In this model we designed a four-mode interactive mechanism for the platform. The first mode is to turn a class for a certain subject into a group: the lecturer, as admin, can assemble, organize, and coordinate these groups, and a learner can join a group after registering and being approved by the lecturer. The second mode is the scenario for posting and circulating information using the platform; for example, the lecturer transmits and distributes information such as the curriculum, syllabus, college contract, and learning material. During an online course, learners can control what is distributed by the lecturer, the discussion pace, and posting, and can demonstrate their own understanding or the results of a test or homework via text or visual media. The third mode is to track and record: when interaction occurs between lecturer and learner, the IM platform automatically records all feedback on every piece of work from all learners and the lecturer. The final mode is to conclude with brief remarks about the learners' knowledge acquisition and achievement via self-report based on the feedback.
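Continuing the toy sketch from the previous section, the four modes could be layered on the same group object roughly as follows; field and method names are assumptions, not part of any real IM platform.

```python
class CourseGroup(MessengerGroup):
    """Toy version of the four-mode mechanism described above."""
    def __init__(self, name, lecturer):
        super().__init__(name)
        self.lecturer = lecturer          # mode 1: the lecturer administers the group
        self.join(lecturer)
        self.pending = set()
        self.self_reports = {}            # mode 4: learner name -> closing remarks

    def request_join(self, learner):      # learners join only after approval
        self.pending.add(learner)

    def approve(self, learner):
        self.pending.discard(learner)
        self.join(learner)

    def distribute(self, material):       # mode 2: lecturer posts syllabus, material, ...
        self.post(self.lecturer, material)

    def feedback_log(self):               # mode 3: all interaction is already recorded
        return list(self.history)

    def record_self_report(self, learner, remarks):
        self.self_reports[learner.name] = remarks
```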
Participants
In the 2015/2016 academic year, the participants in this study were two lecturers from UIN Walisongo Semarang and 147 learners (80 male, 67 female) from five classes, enrolled in the Dakwah and Communication Faculty.
Data collection and analysis
After the learning model was introduced to students in mid-semester lectures in December 2015, an initial questionnaire was given to the whole cohort of 147 learners before the start of the experiment to find out whether they owned mobile devices and had WhatsApp. During the pilot program, the daily activities of the group were facilitated by the lecturer, who acted as group administrator, moderator, and driver of the discussion and communication within the group, while the researchers joined the group to monitor and determine the extent to which the IIMG was used for engagement and collaboration between learner communities, and between learners and their lecturer. Just before the end of the semester, an evaluation questionnaire was used to collect data from the students to measure the impact of the IIMG on their engagement and behavior. Each participant received a questionnaire consisting of 22 items. The survey contained questions regarding learners' self-perception of their engagement during IIMG use. The results of the attitude survey were converted into a numeric scale, allowing the calculation of mean scores for each question and for each learner. Learner attitudes were measured using an existing attitude survey modified by the researcher. Several questionnaires on learner attitudes toward technology exist; the Computer Attitude Questionnaire (CAQ), a 65-item Likert-scale instrument for measuring learners' attitudes, was chosen by the researcher. The items measuring the importance and enjoyment of computer use were modified into an attitude survey to measure learner perceptions of the impact of the instructional technology under investigation, that is, the IIMG, and their perceptions of its enjoyment and importance in group instruction. The CAQ is a free instrument available online at https://iittl.unt.edu/content/computer-attitude-questionnaire-caq, developed by the Technology Applications Center for Educator Development. It has 7 subscales measuring various components of learner attitude, but only Part 1 was used, modified by changing the word "computer" to "u-learning model using IIMG". This instrument has been tested and used extensively by researchers. The reliability and validity data of the CAQ are provided as part of the survey package, based on a 1995 study by Knezek and Christensen with a population of 588 seventh- and eighth-grade learners in Texas, conducted to determine the stability of measurement for the instrument. This attitude survey was modified by the researcher for a comparative analysis of actual learner behavior and learners' own perceptions of their attitudes toward the IIMG.
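The scoring step described above can be illustrated with a short sketch: Likert labels are mapped to numbers and then averaged per item and per learner. The coding scheme and the responses below are hypothetical, not the study's actual survey data.

```python
# Minimal sketch of Likert-scale scoring (hypothetical coding and data,
# not the authors' actual survey responses).
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# responses[learner] -> list of Likert labels, one per questionnaire item
responses = {
    "learner_01": ["agree", "strongly agree", "neutral"],
    "learner_02": ["disagree", "agree", "agree"],
}

def mean(values):
    return sum(values) / len(values)

# Mean score per learner (across items)
per_learner = {who: mean([LIKERT[r] for r in answers])
               for who, answers in responses.items()}

# Mean score per item (across learners)
n_items = len(next(iter(responses.values())))
per_item = [mean([LIKERT[answers[i]] for answers in responses.values()])
            for i in range(n_items)]

print(per_learner)   # e.g. {'learner_01': 4.0, 'learner_02': 3.33...}
print(per_item)      # e.g. [3.0, 4.5, 3.5]
```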
Results
This section presents the results obtained from the model defined and implemented in order to develop the u-learning model and analyze its impact in a real environment; the results are based on the data collected and are analyzed and presented in the sections below. Do you have a smartphone? This question was required to determine whether participants had a mobile phone and, if so, which type (for example, a smartphone or a regular phone). The result was that 100% of the participants had a mobile phone and 85% of those devices were smartphones capable of accessing the internet, sending instant or multimedia messages, and running other applications.
Do you have the WhatsApp application? For our research, it was important to find out the proportion of participants who had the WhatsApp application on their mobile phone. The data show that 95% of the participants who owned a smartphone also had the WhatsApp application, while 5% did not have it and did not use WhatsApp for personal reasons.
How often do you use your smartphone to chat with the WhatsApp application? This question was directed at finding out how often students used their mobiles to access WhatsApp for class-related matters, such as discussions with peers or consulting with their lecturer, before and after the learning model was introduced. It was important to determine the learners' level of contribution while accessing WhatsApp, to establish how many learners participated actively, participated only sometimes, or did not participate at all. The results are presented in Table 3.
Table 3 (fragment). With peer learners: 76% / 17% / 7%
Learner perceptions of the benefits of using WhatsApp
The learners were asked to specify their level of agreement or disagreement on a scale measuring subject attitude. Learners' self-perceptions of their attitudes toward the u-learning model using IIMG were collected using the modified CAQ. The results are presented in Table 4.
Regarding the information accessed on WhatsApp, as shown in Table 4, 74% either agreed or strongly agreed that they enjoy classroom instruction in the u-learning model using IIMG, while 11% were neutral and 15% either disagreed or strongly disagreed. Similarly, 79% of participants agreed or strongly agreed that the u-learning model using IIMG helps them concentrate better on the lesson, and 67% agreed or strongly agreed that if they know how to interact in the IIMG they will be able to get a good job. The opposite pattern was found for feeling tired of the u-learning model using IIMG: as shown in Table 4, 14% either agreed or strongly agreed, while 8% were neutral and 78% either disagreed or strongly disagreed.
On the usefulness of the u-learning model using IIMG for engagement (Table 4, items 1, 3 to 11, 13 to 17, 19, 21 and 22), a large proportion of the participants, 78%, believed that the u-learning model using IIMG helped them to engage and stay informed about the module in and outside the classroom; 11% took a neutral position and 11% disagreed.
The results of this observation and investigation showed that the use of the u-learning model with IIMG has a positive effect on the behavior of all learners and, thus, on their engagement with the ubiquitous learning environment. Overall, learners were aware of the positive impact of the u-learning model using IIMG on their engagement in ubiquitous classroom instruction. They regarded it positively, and this was evidenced by the improvement in their on-task behavior. The investigation data showed a general improvement in student behavior, which translates into improved student engagement.
Conclusion and Recommendations
The results of this research show that the u-learning model using IIMG has a clearly positive effect and can significantly improve learners' engagement and behavior. Our analysis indicates that learners and lecturers became highly engaged in the learning process in ways that transcended traditional classroom activities. The experimental evidence shows that IIMG can be used as an educational tool for u-learning on a smart campus, helping learners to engage and collaborate. Therefore, we conclude that the u-learning model using IIMG can be effective in helping students to engage as well as succeed in their campus activities.
For future work, opportunities exist for further studies in this area. Combining the u-learning model with mobile learning or e-learning could produce smarter models and help realize the smart-campus vision, so that learning activities between students and professors become increasingly dynamic and interwoven, with graduates expected to be highly competitive. | 3,841.6 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
TDP-43 accelerates age-dependent degeneration of interneurons
TDP-43 is an RNA-binding protein important for many aspects of RNA metabolism. Abnormal accumulation of TDP-43 in the cytoplasm of affected neurons is a pathological hallmark of the neurodegenerative diseases frontotemporal dementia (FTD) and amyotrophic lateral sclerosis (ALS). Several transgenic mouse models have been generated that recapitulate defects in TDP-43 accumulation, thus causing neurodegeneration and behavioural impairments. While aging is the key risk factor for neurodegenerative diseases, the specific effect of aging on phenotypes in TDP-43 transgenic mice has not been investigated. Here, we analyse age-dependent changes in TDP-43 transgenic mice that displayed impaired memory. We found the accumulation of abundant poly-ubiquitinated protein aggregates in the hippocampus of aged TDP-43 transgenic mice. Intriguingly, the aggregates contained some interneuron-specific proteins such as parvalbumin and calretinin, suggesting that GABAergic interneurons were degenerated in these mice. The abundance of aggregates significantly increased with age and with the overexpression of TDP-43. Gene array analyses in the hippocampus and other brain areas revealed dysregulation in genes linked to oxidative stress and neuronal function in TDP-43 transgenic mice. Our results indicate that the interneuron degeneration occurs upon aging, and TDP-43 accelerates age-dependent neuronal degeneration, which may be related to the impaired memory of TDP-43 transgenic mice.
Whether these mutations and the pathological aggregation of TDP-43 cause disease by a loss of function, a gain of function, or some combination of both remains unresolved 11.
In patients with sporadic ALS/FTD, the levels of TDP-43 mRNA and protein are elevated by about 1.5-fold 12 and 1.5-2.5-fold, respectively, in affected brain regions 13,14 . Several reports indicate that elevated levels of wild-type TDP-43 are sufficient to cause neurological and pathological phenotypes mimicking FTD/ALS in mice [15][16][17] . Therefore, transgenic (Tg) mice expressing elevated levels of wild-type TDP-43 are appropriate disease models to capture the pathology of sporadic ALS/FTD in mice.
In this study, to define the pathomechanisms of sporadic ALS/FTD and to investigate the contribution of aging to the formation of such phenotypes, we generated Tg mice expressing wild-type TDP-43 under the control of the mouse prion promoter and defined the pathology, behaviour, and genes affected by the dysregulation of TDP-43 during aging. Consistent with other TDP-43 models, the Tg mice developed learning and memory deficits as well as mild impairment of motor function 15,18 . Interestingly, we observed massive aggregates derived from GABAergic inhibitory interneurons in the hippocampus of TDP-43 Tg mice. Intriguingly, we also observed the aggregates in aged wild-type mice; the aggregates increased as the animals got older, indicating that the degeneration of GABAergic interneurons occurs during aging and is accelerated by the increased accumulation of TDP-43.
Results
Generation and characterization of TDP-43 transgenic mice. To recapitulate the pathology of sporadic ALS/FTD in mice, we generated transgenic (Tg) mice in which full-length wild-type human TDP-43 was expressed under the control of the mouse prion promoter. We also generated mice expressing a truncated form of TDP-43, containing the C-terminal region (amino acid residues 208-414: R208) (Fig. 1a). This C-terminal fragment of TDP-43 is abundantly found in the affected neurons in patients with ALS/FTD 19 . We confirmed expression of FLAG-tagged TDP-43 by immunohistochemistry using an anti-FLAG antibody (Fig. 1b-d). The FLAG-tagged TDP-43 was localized mainly in the nuclei of neurons in the brain and spinal cord. We did not observe cytoplasmic accumulation of TDP-43 in the brain or spinal cord of the TDP-43 Tg mice even at 18 months of age (data not shown).
To assess the expression levels of TDP-43 in the brain tissue, we performed western blot analysis. The mean level of FLAG-tagged human TDP-43 in 8-month-old heterozygous TDP-43 Tg mice (TDP) was 1.28-fold the endogenous mouse TDP-43 level in non-transgenic (NTg) mice (Fig. 1e,f). TDP-43 is known to bind to the 3' UTR of its own mRNA to induce its degradation via an auto-regulation mechanism 20. Consistent with this, quantitative analysis of the western blotting data revealed that the overexpression of human TDP-43-FLAG (Fig. 1e, filled arrowhead) reduced the level of endogenous mouse TDP-43 protein (Fig. 1e, open arrowhead) to 74.1% of its original level (Fig. 1e,f). Therefore, the mean total TDP-43 (endogenous plus exogenous) in the brains of 8-month-old TDP-43 Tg mice was 2.03 times the mean total endogenous TDP-43 in the NTg animals (Fig. 1e,f). These ratios were not significantly different in the brains of 18-month-old mice (Fig. 1e,f; FLAG-tagged human TDP-43, 1.35-fold; endogenous mTDP-43, 0.67-fold; total TDP-43, 2.02-fold). In the transgenic mice expressing the C-terminal TDP-43 fragment R208 (R208 Tg mice), the FLAG-tagged R208 level was 12.6% of the FLAG-tagged TDP-43 level in TDP-43 Tg mice (Supplementary Fig. S1a, n = 4). As previously reported, endogenous mouse TDP-43 expression was not affected by the presence of the C-terminal TDP-43 protein (Supplementary Fig. S1b), given that R208 lacks an RNA-binding motif and has no ability to autoregulate its mRNA levels 9.
[Fig. 1 caption fragment: (b-d) anti-FLAG staining of the brain and the anterior horn of the spinal cord of 8-month-old TDP-43 Tg mice; (e) immunoblots of whole-brain tissue from NTg and heterozygous TDP mice at 8 and 18 months probed with anti-TDP-43, anti-FLAG, or anti-GAPDH antibodies, with filled and open arrowheads denoting hTDP-43 (transgene) and mTDP-43 (endogenous); (f) quantification of TDP-43 relative to wild-type TDP-43 in NTg mice, band intensities quantified with MultiGauge, n = 2; (g) body weights of male TDP-43 Tg and NTg mice, unpaired t-test, *p < 0.05.]
The mean body weight of the TDP-43 Tg mice was slightly higher than that of the non-Tg mice at 7-9, 25, 26, and 30-35 weeks of age. While the heterozygous TDP-43 Tg mice showed a trend of a decrease in body weight with increasing age, they did not show obvious signs of paralysis for over 2 years.
To investigate the effect of further increases in the expression of TDP-43, we crossed male and female heterozygous TDP-43 Tg mice to generate homozygous TDP-43 Tg mice (TDP/TDP). All the homozygous progeny died between postnatal (P) days 25 and 27 with severe paralysis of the limbs ( Supplementary Fig. S2c, Supplementary Movie S1). This result is consistent with that observed by Wils et al., who used the Thy1 promoter to drive the TDP-43 expression 17 . At P20, the levels of exogenous hTDP-43-FLAG protein in the heterozygous and homozygous progeny were 74% and 112% of the endogenous mouse TDP-43 protein levels in NTg mice, respectively (Supplementary Fig. S2a,b). The endogenous expression of TDP-43 was only slightly affected ( Supplementary Fig. S2a,b; TDP/-, 86%; TDP/TDP, 84%). Therefore, the total amount of TDP-43 in the heterozygous and homozygous progeny was 1.6-fold and 2-fold of that in NTg mice, respectively. Despite this modest increase in the total TDP-43 levels in the homozygous progeny compared with the heterozygous mice, homozygous progeny (TDP/TDP) displayed microglial activation in the spinal cords in addition to muscle fibre atrophy and degeneration ( Supplementary Fig. S2d). These data indicated that there might be a threshold amount beyond which TDP-43 becomes extremely toxic to the central nervous system and muscles.
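For clarity, the total-TDP-43 estimates quoted above are simply the sum of the exogenous (transgene) level and the residual endogenous level, each expressed relative to endogenous TDP-43 in NTg mice. The short sketch below reproduces that arithmetic with the P20 values reported in the text.

```python
# Reproduce the reported total-TDP-43 estimates at P20 (values taken from the text above).
def total_fold(exogenous_rel, endogenous_rel):
    """Both arguments are fractions of the endogenous mouse TDP-43 level in NTg mice."""
    return exogenous_rel + endogenous_rel

het = total_fold(0.74, 0.86)   # heterozygous progeny: 0.74 + 0.86 = 1.60-fold
hom = total_fold(1.12, 0.84)   # homozygous progeny:  1.12 + 0.84 = 1.96, ~2-fold
print(het, hom)
```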
Memory deficits in TDP-43 transgenic mice. To investigate whether abnormal TDP-43 accumulation causes the cognitive phenotype related to FTD/ALS in mice, we performed serial behavioural tests including the Y-maze test, rotarod test, and contextual and cued fear conditioning tests. First, to assess the working memory and activity in heterozygous TDP-43 Tg mice, we performed the Y-maze test. There was no significant difference between the TDP-43 Tg (TDP) and non-Tg (NTg) mice at 8 months of age in either the alternation rate or the total number of entries (Fig. 2a, left and middle), indicating that the working memory of the TDP-43 Tg mice was normal. The total distance travelled by the TDP-43 Tg mice was also similar to that travelled by the non-Tg mice, indicating that the activity was also normal in the TDP-43 Tg mice (Fig. 2a, right). The same mice were tested on the Y-maze test when they were at 13 months and 18 months of age, and there was no significant difference between the TDP-43 Tg and non-Tg mice in the alternation rate, total number of entries, or the total distance travelled, suggesting that aging did not affect short-term memory and activity in TDP-43 Tg mice ( Supplementary Fig. S3a,b). These data showed that the short-term memory and activity are both normal in TDP-43 Tg mice.
Next, we performed a rotarod test to analyse motor function and learning. There were statistically significant differences between TDP-43 Tg and non-Tg mice (age, 8 months) in the latency to fall on day 2 ( Fig. 2b; Day 2, p = 0.0088). The test was conducted when the mice were 13 and 18 months old; 13-month-old mice showed a shorter latency to fall (Supplementary Fig. S3c; Day 1, p = 0.0047; Day 2, p = 0.0448; Day 3, p = 0.0203). These results indicate that motor function and learning were moderately impaired in the TDP-43 Tg mice.
Finally, to assess associative fear learning and memory in mice, we performed contextual and cued fear conditioning tests. In the contextual test, there was a significant difference between the TDP-43 Tg mice (TDP) and non-Tg mice (NTg) in the percentage of freezing time at all the ages tested, indicating that TDP-43 Tg mice displayed defective contextual fear memory (Fig. 2d). We further noticed that the distance travelled during the conditioning of mice at 13 months of age was significantly longer in TDP-43 Tg mice than in non-Tg mice (NTg-1) (Fig. 2f). Likewise, the distance travelled during the conditioning of mice at 18 months of age was significantly longer in TDP-43 Tg mice than in non-Tg mice (NTg-1) (Fig. 2g). Therefore, long-term memory might be impaired in TDP-43 Tg mice.
In heterozygous R208 Tg mice, there was no significant difference between NTg mice and R208 mice in any of the tests performed (Fig. 2f,g, Supplementary Fig. S4). Therefore, overexpression of the C-terminal fragment, the form of TDP-43 that accumulates in patients, was not sufficient to induce memory deficits in mice. Alternatively, the lack of a phenotype in R208 Tg mice may be due to the low expression level of this TDP-43 fragment, as shown in Supplementary Fig. S1.
Poly-ubiquitin- and p62-positive aggregates derived from GABAergic interneurons in the hippocampus of aged mice. We further investigated the histopathological changes in the brains of TDP-43 Tg mice. Intriguingly, we found numerous poly-ubiquitin-positive aggregates in the hippocampus of both TDP-43 Tg mice and aged non-Tg mice (Fig. 3a,b). The aggregates were also immunoreactive for p62 (Fig. 3c), a pattern indicating that selective autophagy was impaired. Normal mouse Ig was used to confirm the specificity of the mouse monoclonal antibodies (Supplementary Fig. S5b).
[Fig. 2 caption fragment: (a) Y-maze alternation rate, total entries, and distance travelled for NTg and TDP mice; (b) rotarod holding time at 8 months over three sequential trials; (c) experimental design of the contextual and cued fear conditioning tests; (d-g) freezing times in the contextual (d) and cued (e) tests and distances travelled during conditioning at 13 (f) and 18 (g) months; unpaired t-test, data as mean ± SEM, significance thresholds *p < 0.05, **p < 0.01, ***p < 0.001 (Fig. 3a,b,d-g).]
To analyse whether these aggregates were derived from glial cells, we stained brain sections with antibodies against GFAP, Iba1, or Mac2. None of these markers co-stained with p62 or poly-ubiquitin, indicating that the aggregates were not located in glial cells (Fig. 3d-f). In contrast, the aggregates stained positive for the GABAergic inhibitory interneuron markers parvalbumin (PV) and calretinin (CR) (Fig. 3g,h). Since the size of an aggregate was about 1-2 μm in diameter, these were not intact interneurons (Fig. 3i; arrowheads, intact cells; arrow, aggregates). Tau, which stabilizes microtubules and abnormally accumulates in Alzheimer's disease, was also a component of the aggregates (Supplementary Fig. S5f). Since we did not detect phosphorylated Tau, recognized by the AT8 or AT180 antibodies, in the aggregates (data not shown), the Tau present in the aggregates may be a non-phosphorylated form. MAP2, another microtubule-stabilizing protein localized in cell bodies and dendrites, was also not a component of the aggregates (Supplementary Fig. S5g). Moreover, the anti-TDP-43 antibody did not stain the aggregates strongly, suggesting that TDP-43 was not a major component of these aggregates (Fig. 3j). However, we observed very weak staining in the aggregates, at an intensity similar to that of cytoplasmic TDP-43 (Fig. 3j, high sensitivity), suggesting that a very small amount of TDP-43 was included in the aggregates.
We have also found that the Periodic-acid-Schiff (PAS) staining in the brains of aged mice showed a similar cluster pattern (Fig. 3k, Supplementary Fig. S5a). A PAS-positive granule was first described as a granular structure increased in the brains of senescence-accelerated mice (SAM) 21,22 . We immunostained the aggregates with anti-p62 antibody following PAS-staining, which confirmed that these were the same granules (Fig. 3k). Sometimes, we observed negative staining of p62 in the center of aggregates, but it was always stained with PAS (Fig. 3k, arrow).
Electron microscopy of the hippocampus of aged mice revealed that the aggregates were about 1-2 µm in diameter and composed of electron-dense crystalline-like fibrillary structures. Importantly, the aggregates were surrounded by a plasma membrane, suggesting their intracellular location (Fig. 4a).
Furthermore, correlative light and immunoelectron microscopy on Tokuyasu cryosections of the hippocampus of TDP-43 Tg and aged non-Tg mice confirmed that abundant gold particles marking p62 accumulated in the cytoplasmic aggregates corresponding to those with coarse granular fluorescence for p62 (Fig. 4b). The PAS-, ubiquitin-, and p62-positive cytoplasmic aggregates in the parvalbumin neurons resembled the polyglucosan bodies known as corpora amylacea 23 or PAS granules 24 observed in the brains of aged mice.
Increased number of aggregates derived from GABAergic interneurons in TDP-43 Tg mice.
Interestingly, the number of these poly-ubiquitin-positive aggregates was higher in TDP-43 Tg mice than in NTg mice at both 8 months (Fig. 5a, p < 0.05) and 20 months of age (Fig. 5b, p < 0.05), indicating an increased number of aggregates derived from GABAergic interneurons in TDP-43 Tg mice. To confirm the increase in multi-ubiquitinated proteins in aged mice, the hippocampi of aged mice were analysed by western blotting (Fig. 5c). The amount of multi-ubiquitinated proteins in the RIPA-insoluble fraction increased with aging.
To analyse whether the number of interneurons was lower in TDP-43 Tg mice due to accelerated neuronal degeneration, we counted GAD67-positive interneurons using sections adjacent to those used to count the p62-positive aggregates. Representative images of the sections used for counting are shown in Supplementary Fig. S5e. There was no significant difference between TDP-43 Tg mice and non-Tg mice in the number of GAD67-positive interneurons in the hippocampus of 20-month-old mice (Fig. 5d).
Differentially expressed or alternatively spliced genes in TDP-43 Tg mice. To define the molecules that might be involved in interneuron degeneration and in the pathogenesis of TDP-43 Tg mice, we analysed changes in gene expression and alternative splicing in TDP-43 Tg mice. We isolated the hippocampus, cerebral cortex, amygdala, and cerebellum from 8-month-old mice (n = 3), extracted RNA from these brain regions, hybridized the RNAs to an exon array, and analysed the data for changes in splicing and gene expression (Fig. 6a). We observed significant splicing differences (p < 0.05, splicing index > 0.5) in 804 genes in at least one of the tissues between TDP-43 Tg mice and NTg mice. We further analysed the microarray results for overlaps between brain regions and found that only a small number of genes overlapped (Fig. 6a, left). Significant gene expression changes (p < 0.05, 1.2-fold change) in at least one of the brain regions of TDP-43 Tg mice compared with NTg mice were observed for 122 genes (Supplementary Table S1). Several genes were upregulated in TDP-43 Tg mice, including genes involved in neuronal function or oxidative stress. We further analysed these genes for overlaps between brain regions and, again, only a small number of genes overlapped (Fig. 6a, right). There was no clear correlation between the changes we found in our mice and the changes reported previously in similar mouse models 25,26. However, two out of eleven genes whose expression was changed in the cortex of our TDP-43 Tg mice overlapped with genes whose splicing was reported to change in previous studies of TDP-43 Tg mice using the prion promoter 25. Kcnip2 was reported to be one of the genes in which exon skipping was enhanced in the cortex of mutant TDP-43 mice or TDP-43-depleted mice, and it was downregulated in the cortex of our TDP-43 Tg mice (Supplementary Fig. S6). The exon inclusion of Caly was enhanced in TDP-43-depleted mice 25, and the expression of Caly was decreased in the cortex of our TDP-43 Tg mice (Supplementary Table S1). Sort1, the only gene whose splicing was changed in wild-type TDP-43 Tg mice in the study by Arnold et al., was not changed in our TDP-43 Tg mice. Among the genes with changes in expression, riboflavin kinase (Rfk) showed the largest change in the hippocampus (Fig. 6b, Supplementary Table S1). To confirm the changes in RNA levels observed with the exon array, we performed quantitative RT-PCR for several genes; changes in the expression of all the genes tested, including Rfk and Kcnip2, were confirmed (Supplementary Fig. S6).
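The selection logic described above (p < 0.05 with a splicing index > 0.5 for splicing changes, or p < 0.05 with at least a 1.2-fold expression change, followed by an overlap count between brain regions) can be written compactly. The sketch below uses a small in-memory table with hypothetical records; the actual analysis was performed in GeneSpring GX on Mouse Exon 1.0 ST data.

```python
# Minimal sketch of the gene-filtering and overlap logic (hypothetical records).
records = [
    # (gene, region, p_value, fold_change, splicing_index)
    ("Rfk",    "hippocampus", 0.01, 1.6, 0.1),
    ("Kcnip2", "cortex",      0.03, 0.7, 0.2),
    ("Atg10",  "hippocampus", 0.04, 0.8, 0.1),
]

def expression_hits(rows, p_max=0.05, fc_min=1.2):
    """Genes with a significant >= fc_min up- or down-regulation."""
    return {(g, r) for g, r, p, fc, _ in rows
            if p < p_max and (fc >= fc_min or fc <= 1 / fc_min)}

def splicing_hits(rows, p_max=0.05, si_min=0.5):
    """Genes with a significant splicing-index change."""
    return {(g, r) for g, r, p, _, si in rows if p < p_max and abs(si) > si_min}

hits = expression_hits(records)
splice = splicing_hits(records)

# Overlap of expression-affected genes between two brain regions:
genes_by_region = {}
for g, r in hits:
    genes_by_region.setdefault(r, set()).add(g)
overlap = genes_by_region.get("hippocampus", set()) & genes_by_region.get("cortex", set())
print(hits, splice, overlap)
```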
Mild increase in the expression of Riboflavin kinase in the neurons of TDP-43 Tg mice. Among the many gene expression changes we observed in TDP-43 Tg mice, we further focused on riboflavin kinase (Rfk) as it was the one for which we observed the largest change in expression (Fig. 6b, Supplementary Table S1). Rfk is a kinase that phosphorylates riboflavin, and is known to be involved in the production of reactive oxygen species (ROS) as a co-receptor of the TNF receptor [27][28][29] . The increased expression of Rfk in the hippocampus of 8-month-old TDP-43 Tg mice was confirmed by quantitative RT-PCR using SYBR green ( Fig. 6c; p = 0.0437). To investigate which cell types express Rfk, we also performed in situ hybridization (Fig. 6d) and immunohistochemical analyses (Fig. 6e,f) with the sections of fresh frozen brains of 8-month-old mice. The expression pattern of Rfk was similar to that of NeuN, indicating that Rfk is expressed in all types of neurons (Fig. 6e). We also confirmed the expression of Rfk protein in parvalbumin interneurons (Fig. 6f).
Rfk can induce neuronal cell death. To investigate whether the increased expression of Rfk can potentially induce neuronal cell death, we overexpressed Rfk in primary hippocampal neurons. We observed increased cell death in neurons expressing Rfk (Fig. 6g), indicating that increased expression of Rfk is sufficient to induce neuronal cell death.
To define the pathway by which interneurons degenerate in TDP-43 Tg mice, we analysed whether apoptotic cell death was induced. We stained brain sections with anti-cleaved caspase 3 and anti-GAD67 antibodies to detect apoptotic neurons, and we did not detect apoptotic cell death of GABAergic interneurons in either NTg or TDP-43 Tg mice (data not shown). Next, we analysed necroptosis using an anti-RIPK1 antibody. The aggregates were slightly stained with the RIPK1 antibody, but obvious necroptosis was not detected in TDP-43 Tg mice (Supplementary Fig. S7). Therefore, while inhibitory neurons might have died via necroptosis, there was no significant increase in the cell death of inhibitory neurons in TDP-43 Tg mice.
Discussion
We found that mice expressing exogenous TDP-43 display memory deficits and that massive p62- and poly-ubiquitin-positive aggregates are present in the hippocampus of both aged non-Tg mice and TDP-43 Tg mice. The poly-ubiquitin- and p62-positive aggregates in the parvalbumin neurons had properties similar to those of polyglucosan bodies known as corpora amylacea 23 or PAS granules 24 observed in the brains of aged mice. The number of PAS-stained aggregates was higher in senescence-accelerated mice and increased upon aging 21, indicating that aging accelerates the formation of poly-ubiquitin-positive aggregates. Therefore, aging-related cellular changes might be accelerated in TDP-43 Tg mice. We also found that the aggregates consist of calretinin, parvalbumin, p62, poly-ubiquitin, and a small amount of tau protein. These data suggest that aging and the upregulation of TDP-43 cause GABAergic interneuron degeneration through an unknown pathway. Moreover, the poly-ubiquitin-positive granules resembled the polyglucosan bodies known as corpora amylacea, which have been associated with both normal aging and neurodegenerative diseases including Alzheimer's disease 30. Our study may thus provide a possible link between FTD/ALS and other neurodegenerative diseases.
Dysfunction of GABAergic interneurons could lead to hyperactivity of excitatory neurons in the hippocampus, causing impaired memory that is observed in TDP-43 Tg mice. In TDP-43 Tg mice, the number of cleaved caspase 3-positive neurons was slightly increased (Supplementary Fig. S7), but the total number of GABAergic neurons in the hippocampus remained unchanged (Fig. 5d). Therefore, while interneuron cell death may be induced in TDP-43 Tg mice, it did not influence the total number of interneurons. Electrophysiological studies would clarify whether the interneurons in the hippocampus of TDP-43 Tg mice are functionally normal.
Recently, Tg mice expressing mutant TDP-43 were shown to exhibit defects in the inhibitory circuitry due to the decreased activity of parvalbumin-positive GABAergic interneurons 31 . In mutant TDP-43 Tg mice, hyperactivity of somatostatin interneurons leads to the decreased activity of parvalbumin interneurons, resulting in the hyperactivity of pyramidal neurons in layer 5 of the cortex. If excess accumulation of wild-type TDP-43 has an effect similar to or even slightly weaker than that of mutant TDP-43, parvalbumin interneurons might be damaged in the TDP-43 Tg mice we generated as well. If so, we could have detected the defects in interneurons as an increase in the poly-ubiquitin-positive aggregates in the hippocampus of TDP-43 Tg mice.
Unlike in ALS/FTD patients, the cerebellum is affected in our TDP-43 Tg mice, and accumulation of poly-ubiquitinated aggregates is also detected in the cerebellum (data not shown). This may be due to the overexpression of TDP-43 in the cerebellum, which does not occur in ALS/FTD patients.
We identified several genes misregulated in TDP-43 Tg mice, including genes involved in neuronal function and oxidative stress. Among these, Rfk could induce cell death when upregulated in primary cultures of hippocampal neurons. However, neuronal death was not robustly induced in the hippocampus of TDP-43 Tg mice, so the mechanism underlying the increase in aggregates derived from interneurons remains unknown. Another possible pathway to explain the increase in the number of aggregates is impaired clearance of poly-ubiquitinated proteins in TDP-43 Tg mice. The poly-ubiquitinated proteins in the aggregates derived from interneurons are bound to p62, signalling that they should be degraded by autophagy. Autophagy is known to decline with aging, and deficits in autophagic activity are seen in several neurodegenerative diseases beyond FTD/ALS, including Alzheimer's disease and Huntington's disease 32,33. In our exon array analysis, we found that Atg10, which is involved in autophagosome formation, was downregulated (Supplementary Table S1). Further study is necessary to assess the importance of autophagy in the accumulation of aggregates derived from interneurons in TDP-43 Tg mice. The accumulation of poly-ubiquitin- and p62-positive aggregates that we found in the hippocampus of TDP-43 Tg mice could physically interfere with normal neuronal circuitry. Eliminating these aggregates by enhancing autophagy in GABAergic interneurons might mitigate the memory deficits observed in TDP-43 Tg mice.
The FTD/ALS model mice we developed show a unique, previously unreported proteinopathy: the accumulation of debris derived from interneurons. The degeneration of interneurons seen in our mouse model could represent very early, age-accelerated changes occurring in the disease. Moreover, it has been reported that inhibitory interneuron deficits link altered network activity and cognitive dysfunction in models of Alzheimer's disease [34][35][36]. Therefore, our TDP-43 Tg mice might be a useful model to study neurological diseases accelerated by aging. The investigation of human post-mortem samples will be important for testing whether interneuron degeneration occurs in the human brain.
Methods
Generation and maintenance of animals. All the experimental protocols with animals were approved by the Animal Care and Use Committee of the RIKEN Brain Science Institute and Nagoya City University, and the animal experiments were performed according to the guidelines of the Ministry of Education, Culture, Sports, Science, and Technology, Japan. FLAG-tagged wild-type human TDP-43 or C-terminal region of human TDP-43 (208-414) were cloned into the exon 2 of the prion protein promoter region in a PrP vector (a gift from Dr. David Borchelt), and injected into pronuclei of fertilized eggs derived from BDF1 mice. The sequences of primers used to genotype the mice are listed in Supplementary Table S2. All the mice were backcrossed with C57BL/6j mice. The mice were housed in a room with a 12 h light/dark cycle, with unrestricted access to food and water. Behavioural tests were performed between 10:00 am and 6:00 pm at the RIKEN Brain Science Institute. Before all the behavioural tests, the mice were housed in a testing room for at least 30 min to allow acclimatisation to the testing environment. After the test, the testing apparatus was cleaned with 70% ethanol to prevent a bias due to olfactory cues.
Y-maze test. The Y-maze test was performed using a 3-arm Y-maze apparatus.
[Figure caption fragment (apparently from the cell death assay quantification): data from three independent experiments are combined and plotted; more than 200 cells were counted for each column, and more than 600 cells for day 6; data are presented as mean ± SEM; unpaired t-test, *p < 0.05, **p < 0.01.]
Contextual and cued fear conditioning test. The fear conditioning test was performed using the fear conditioning test system CL-1020 (O'HARA & CO. LTD). Mice were placed in a chamber and provided with a conditioned stimulus (CS) of 65 dB noises and an unconditioned stimulus (US) in the form of a foot shock (0.15 mA) twice (conditioning). On the following day, the mice were returned to the conditioning chamber and allowed to explore the chamber without either the CS or the US (contextual fear test). Mice were then placed in another chamber with different properties from the conditioning chamber, and the CS was presented (cued fear test).
Electron microscopy. Conventional electron microscopy and correlative light and immunoelectron microscopy (CLEM) on Tokuyasu cryosections were performed as described previously 37,38 with some modifications. Briefly, C57BL/6 mice at 25 months of age were deeply anesthetized with pentobarbital (25 mg/kg i.p.) and fixed by cardiac perfusion either with 2% paraformaldehyde (PA) and 2% glutaraldehyde (GA) or with 4% PA buffered with 0.1 mol/L phosphate buffer (PB; pH 7.2), for conventional electron microscopy and immunoelectron microscopy, respectively. Brain tissues were further immersed in the same fixative overnight at 4 °C, and 1-mm-thick sagittal slices of hippocampal tissue were prepared. For conventional microscopy, samples were postfixed with 2% OsO4 in 0.1 mol/L PB (pH 7.2), block-stained in 1% aqueous uranyl acetate, dehydrated with a graded series of alcohol, and embedded in Epon 812 (TAAB, Reading, UK). Ultrathin sections were cut with a Leica UC6 ultramicrotome (Leica Microsystems, Vienna, Austria), stained with uranyl acetate and lead citrate, and observed with a Hitachi HT7700 electron microscope (Hitachi, Tokyo, Japan). For immunoelectron microscopy, samples were washed thoroughly with 7.5% sucrose in 0.1 M PB (pH 7.2), embedded in 12% gelatin in 0.1 M PB (pH 7.2), rotated in 2.3 M sucrose in 0.1 M PB (pH 7.2) overnight at 4 °C, placed on a specimen holder (Leica Microsystems, Vienna, Austria), and quickly plunged into liquid nitrogen until used. Approximately 400-nm-thick sections were cut at -80 °C with a Leica UC7/FC7. Thereafter, sections were incubated for 1 hour at room temperature (RT) each in rabbit anti-p62 primary antibody (1:20, Wako Pure Chemical Industries, Osaka, Japan) and Alexa 488-conjugated donkey anti-rabbit IgG (1:200, Invitrogen Life Technologies, Carlsbad, CA). Following counterstaining with DAPI, sections were coverslipped with 50% glycerol in distilled water and observed with a BZ-X700 microscope (Keyence, Osaka, Japan). After fluorescence microscopic identification of punctate p62-positive structures in the hippocampus, ultrathin cryosections (~70 nm) were cut with a Leica UC7/FC7 at about -120 °C, picked up with a 1:1 mixture of 2% methylcellulose and 2.3 M sucrose, and transferred to a nickel finder grid for subsequent electron microscopy observations. Sections were rinsed with PBS containing 0.02 M glycine, treated with 1% BSA in PBS, and incubated overnight at 4 °C with rabbit anti-p62 (1:20), followed by sequential incubations for 1 hour at RT with Alexa 488-conjugated donkey anti-rabbit IgG (1:200) and protein A gold with 10-nm colloidal gold particles (1:50, Cell Microscopy Center, University Medical Center Utrecht, Utrecht, the Netherlands), respectively. The sections were then fixed with 1% GA in PBS. Grids were independently coverslipped with 50% glycerol in distilled water and observed with the BZ-X700 microscope. The coverslips were unmounted from the object slide in a 10-cm dish with distilled water. The grids were embedded in a thin layer of 2% methylcellulose with 0.4% uranyl acetate (pH 4.0) and air-dried, and the ultrastructures corresponding to those with coarse granular fluorescence were then observed with a Hitachi HT7700 electron microscope. For the control experiments, ultrathin sections were reacted only with the gold particle- and/or Alexa 488-conjugated secondary antibody.
Exon array and qRT-PCR. The total RNA was extracted using the mirVANA miRNA isolation kit (Ambion) according to the manufacturer's instructions, and its integrity was assessed using a Bioanalyzer (Agilent). The samples were prepared using the WT Expression Kit (Ambion) and the GeneChip Terminal Labelling Kit (Affymetrix) according to the manufacturer's instructions, and were hybridised to the Mouse Exon 1.0 ST Array (Affymetrix). The data were analysed using GeneSpring GX (Agilent). The detailed method used for quantitative RT-PCR has been described previously 39. Briefly, it was performed using the SYBR green mix (ABI) and gene-specific primers on the Thermal Cycler Dice Real Time System II (Takara Bio). More than three biological replicates per group and three technical replicates were used. The relative mRNA expression was calculated using the standard curve method, normalizing the absolute expression level to that of β-actin and expressing it relative to the control samples. The sequences of the primers, designed using the Primer3 software, are available in Supplementary Table S2.
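The standard-curve quantification mentioned above amounts to converting each Ct value into a relative quantity via the standard curve, normalizing to β-actin, and expressing the result relative to the control group. The snippet below is a generic sketch with hypothetical Ct values and curve parameters, not the authors' data or their instrument software output.

```python
# Hypothetical standard-curve parameters (slope and intercept of Ct vs. log10(quantity))
# and Ct values, used only to illustrate the standard-curve method.
def quantity_from_ct(ct, slope, intercept):
    """Invert the standard curve Ct = slope * log10(q) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def relative_expression(ct_target, ct_actin, curve_target, curve_actin):
    """Target quantity normalized to the beta-actin quantity for the same sample."""
    q_target = quantity_from_ct(ct_target, *curve_target)
    q_actin = quantity_from_ct(ct_actin, *curve_actin)
    return q_target / q_actin

curve = (-3.32, 38.0)            # ~100% efficiency slope, hypothetical intercept
tg  = relative_expression(24.0, 18.0, curve, curve)   # Tg sample (hypothetical Ct values)
ntg = relative_expression(24.8, 18.1, curve, curve)   # NTg control sample
print(tg / ntg)                  # expression in Tg relative to the NTg control
```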
In situ hybridisation. In situ hybridisation (ISH) was carried out as previously described 40 . Briefly, mice were perfused with 4% PFA in phosphate buffer using DEPC-treated water, and the brain tissue was dissected and post-fixed in the same buffer overnight. The tissue was frozen on the next day, and brain sections (20 μm) were prepared within a week. The sections were treated with 0.1 N HCl and Protease K, fixed, and acetylated. After pre-hybridisation in 50% formamide, 2x SSC, 1x Denhardt's, 10 mM EDTA, 50 mg/ml tRNA, and 0.01% Tween20 at 55 °C for 1-2 h, the sections were hybridised with digoxigenin (DIG)-labelled RNA probes prepared using the DIG RNA labelling kit (Roche, #11175025910) in pre-hybridisation buffer supplemented with 5% dextran sulphate at 55 °C overnight. After RNaseH treatment to digest the unhybridised DIG-RNA, the brain sections were intensively washed with a low ionic buffer, and incubated with AP-conjugated anti-DIG antibody (Roche). The signal was visualised with NBT-BCIP. A fragment (1083-2036 nt) of the mouse Rfk gene (NM_019437) was cloned into pBluescript II and used as the template for probe synthesis.
Cell death assay. The hippocampus was dissected from E18 ICR mice and dissociated with trypsin-EDTA.
The cells were cultured on poly-L-lysine-coated cover glass in Neurobasal medium supplemented with 2% B27, 5% FBS, and Penicillin-Streptomycin. The primary cultured hippocampal neurons were transfected with GFP or GFP-Rfk at DIV (days in vitro) 3, and the cells were fixed and co-stained with anti-GFP and anti-cleaved caspase-3 (CC3) antibodies to assess cell death at DIV 6, 8, and 10. The number of CC3-positive cells divided by the number of GFP-positive cells was calculated.
Statistical analysis. The unpaired t-test, the unpaired t-test with Welch's correction, and the Mann-Whitney U test were used. The data, except those in Fig. 5d, are presented as mean ± SEM; the data in Fig. 5d are presented as box-and-whisker plots (min to max). Significance level thresholds of *p < 0.05, **p < 0.01, and ***p < 0.001 were used. The statistical analysis was performed using Excel and GraphPad Prism 6 (GraphPad Software Inc.). Confocal images were prepared with Adobe Photoshop CS4.
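A minimal sketch of the statistical comparisons listed above, using SciPy equivalents of the tests run in Prism; the group values are hypothetical, not the study's measurements.

```python
from scipy import stats

# Hypothetical freezing-time percentages for two genotypes (not the authors' data).
ntg = [45.0, 52.3, 48.7, 50.1, 47.9, 49.2]
tdp = [31.2, 35.8, 29.9, 33.4, 36.1, 30.7]

t_eq    = stats.ttest_ind(ntg, tdp)                     # unpaired t-test (equal variances)
t_welch = stats.ttest_ind(ntg, tdp, equal_var=False)    # unpaired t-test with Welch's correction
u_test  = stats.mannwhitneyu(ntg, tdp, alternative="two-sided")  # Mann-Whitney U test

for name, res in [("t-test", t_eq), ("Welch", t_welch), ("Mann-Whitney", u_test)]:
    print(name, res.pvalue)
```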
"Medicine",
"Biology"
] |
Tribology and corrosion behavior of gray cast iron brake discs coated with Inconel 718 by direct energy deposition
Gray cast iron (GCI) is a conventional material used in industrial applications and is well known for its use in brake disc fabrication. For vehicles that serve passenger transport, one solution to prolong the lifetime of brake discs is to laser clad their surface with harder and more corrosion-resistant materials. This research aims to improve the mechanical properties of brake discs by laser cladding their surfaces with Inconel 718 (IN718) metallic powder using a direct energy deposition (DED) method. By using suitable process parameters, the interface between the substrate and the deposited material was uniform and adherent, forming a compact bond between the two materials without pores or cracks. The average hardness of the deposited IN718 material was found to be twice that of the GCI substrate. Comparing the friction coefficient (CoF) of the IN718 coating with that of the brake disc showed a difference of 0.12 in favor of IN718. The depth of the wear track was analyzed for both materials, showing that the pin trace on the GCI surface (−150 µm) was three times deeper than that produced on the surface coated with IN718 (−50 µm). The corrosion resistance of the IN718 coating was superior by four orders of magnitude to that of the GCI substrate in 3.5 wt.% NaCl aqueous solution.
Coarse grains and the inferior surface quality of GCI are the main disadvantages that limit the widening of its possible applications and result in a relatively reduced service life. An important application of GCI is the brake disc, which is an essential component of the braking system of automobiles, aircraft, and rail vehicles. In the last decade, the environmental pollution caused by the emission of particles released during the braking process has attracted increased attention and concern from environmental agencies, following evidence from numerous studies and research efforts from all over the world [4][5][6][7]. Because GCI has mechanical properties compatible with this type of application, and components made from it can be cast and machined at low cost, it will be difficult to identify a similar material to replace it. To overcome the above disadvantages, surface modification technology can be introduced to improve the performance of GCI. This may be the most suitable solution, as brake discs made of GCI could be coated with materials that reduce wear and corrosion while maintaining or improving their functional performance [8].
The Fraunhofer Institute for Laser Technology ILT and RWTH Aachen University have developed an improved coating method using laser cladding as the basic technique, called "Extreme High-speed Laser Material Deposition (EHLA)." This technique seems to be extremely efficient in the process of coating brake discs with metallic layers with superior properties both from an economic and technical point of view, due to the use of high processing speeds (up to 8 m/s, compared to 0.008-0.03 m/s) [9,10].
DED is one of numerous methods of coating metal surfaces with other metallic materials (alongside air plasma spraying (APS), high-velocity oxygen fuel (HVOF), thermal spray, gas tungsten arc welding (GTAW), laser welding (LW), and thermal barrier coatings (TBC)) that can achieve excellent performance against wear and corrosion. DED is an additive manufacturing technology which allows, in a single step, the deposition of a coating layer with a thickness between 0.3 and 2 mm. Such a layer, made of metallic or composite materials, possesses remarkable characteristics, such as a reduced heat-affected zone, minimal dilution, and improved wear [11][12][13][14] and corrosion resistance [15][16][17][18][19]. DED is also referred to in the literature as laser cladding (LC) or laser melting deposition (LMD) and has huge potential for refurbishing and transforming the surfaces of crucial components and extending their service life [20][21][22]. However, literature studies have shown that DED technology applied to GCI materials has certain restrictions [23,24]; the deposition of Co [25] and NiCrBSi [26] on GCI substrates has been studied using high-power continuous-emission laser sources, but cracks were observed in the structure of the deposited material. These defects were caused by the residual stresses that develop during the build-up of the deposited structure.
IN718 is a high-strength precipitation-hardening nickel-base alloy with exceptional characteristics, including excellent mechanical properties under severe conditions, low creep at high temperatures, high oxidation resistance and hot corrosion resistance up to 650 °C, as well as good welding behavior [16,27,28]. Therefore, it is extensively used in various industries, such as the petrochemical, aerospace, and marine industries.
Stanciu et al. [29] deposited double layers of NiCrBSi and IN718 by laser cladding on a steel substrate. They obtained an increase in corrosion resistance, a reduction in dilution with the substrate, and a hardness gradient with increased values in the deposited NiCrBSi layer. Another study reported a NiCo coating successfully deposited on an IN718 substrate by pulsed laser cladding; in this case, the hardness of the deposited layer was 21% higher than that of the IN718 substrate, and the coating's surface residual stress was compressive, which prevents crack generation and improves fatigue strength [12]. Using a pulsed laser source, the heat-affected area was smaller [30], cracks were reduced due to the low heat input [31], and the hardness of the deposited layer increased with the use of a high pulse rate [32]. To eliminate the voids that occur during the laser cladding process on GCI, a study on remelting of the deposited material was carried out; the results revealed that laser remelting was an effective way to eliminate the voids generated in the interface zone, and the microhardness of the hardened region was ~600 HV [33]. Another study investigated the effects of process parameters on the geometric characteristics, microstructure, and corrosion resistance of a Co-based coating on 42CrMo pipeline steel; the authors obtained excellent corrosion resistance through adjustment of the process parameters [15]. The microstructure and tribological properties of the deposited layer were also studied as a function of process parameters during laser cladding of a Ni-based coating on QT500-7 ductile cast iron. The hardness, corrosion resistance, and tribological properties were significantly improved by optimizing the process: when the WC content was in the range of 5-35%, the microhardness increased by a factor of 3; when the WC content was 20%, the corrosion current was three times lower than that of the substrate, which means that the corrosion resistance of the coating was higher; and the wear rate of the substrate was almost seven times higher than that of the cladding layers with different WC mass fractions [34].
The effects of the laser remelting (LR) speed on IN718 alloy layers obtained by LMD have been studied by Xin et al. [35]. Using a high LR speed (3 times higher than the speed used for the LMD processing), the mean primary dendrite spacing of the remelted area decreased gradually from 6.35 to 3.28 μm and the hardness increased by 12 HV. Mazzucatto et al. [36] conducted a study on the influence of the LMD process parameters on the mechanical properties of IN718 depositions at room temperature and at several strain rates (i.e., 0.001, 200, and 800/s) using a split Hopkinson drawbar. The results established an important influence of the LMD process parameters on the mechanical properties of the as-built metal compared to the as-cast material, together with good repeatability of the process. The influence of process parameters in the laser processing of cast iron, in terms of dilution, layer width, and heat-affected zone, was studied by Fan et al. [37], who developed a control scheme for the laser plating process to achieve a low level of dilution and to reduce the width of the heat-affected zone. Liu et al. [38] studied the effect of a combined heat treatment (solid solution at 1050 °C + double aging) on IN718 laser-coated layers, showing that the hardness increased from 350 to 500 HV and the residual stress turned into compressive stress after the heat treatment.
When it comes to laser coating of GCI, there are many problems to be solved: obtaining good adhesion between the two materials despite their different chemical compositions and mechanical properties, reducing the dilution at the interface of the deposited layers, and limiting the formation of Laves phases in order to reduce cracking and stress [39].
Conventional coating techniques such as electroplating or thermal spraying can produce a poorer metallurgical bond between GCI and the protective layer because of porosity. In addition, there are thickness-uniformity problems that can be solved only by using expensive plasma equipment. Using DED, it is possible to create a strong metallurgical bond, and the remaining powder can be recycled [40]. Additionally, the coating produced by a single track is usually between 0.5 and 1 mm thick, which is more than enough to cover a surface with protective material in a single pass. These advantages make the method relevant for industrial application, and it can become industrially viable in the near future. Therefore, the 3R principles, i.e., "reduce, reuse, recycle" [41], and the circular-economy concept are fully aligned with DED technology, combining economic growth and environmental protection and promoting the extension of the useful life of products which have exhausted their physical and/or functional service life and would otherwise be discarded, thus maximizing their utilization capacity and maintaining their value for as long as possible [42].
This study aims to improve the wear and corrosion properties of GCI by adding a layer of IN718, in order to enable the fabrication of long-lasting brake discs for the automotive industry. In terms of sustainability, this method can be applied to refurbish used brake discs instead of replacing them.
One main novelty of this paper is the coating of the brake discs by laser cladding with IN718, a very hard and corrosion-resistant nickel-based alloy. The behavior of this material when deposited on a GCI substrate has not been explored before. Moreover, this is the first report in the literature of functional testing of coated brake discs: two of them were mounted on an automobile and tested in an authorized center for the periodic technical inspection of cars. The improved product performance was assessed in accordance with the public-road regulations imposed by the European Union.
Materials and methods
The metallic powder used for this study was the nickel-based superalloy IN718, purchased from Hoganas GmbH (Goslar, Germany), with spherical particles of 45-90 µm diameter. Before the experiments, the material was subjected to a thermal treatment at 60 °C for 4 h in a furnace in order to eliminate humidity absorbed from the atmosphere. The chemical composition of the powder material is listed in Table 1.
For this study, GCI samples obtained by cutting an automobile brake disc into 60 mm × 20 mm × 5 mm coupons were used as substrates. The typical microstructure of GCI is characterized by a dispersed carbon lamella formation surrounded by α-ferrite and pearlite phases. The chemical composition of the GCI substrate is listed in Table 2. The substrate was machined by turning to remove surface debris and impurities and then cleaned with acetone and ethanol in order to completely remove the debris.
Physical and thermal properties of the GCI and IN718 such as specific heat capacity, coefficient of thermal expansion, and thermal conductivity are presented in Table 3.
The values of the physical properties show that the IN718 powder is compatible with the GCI substrate. Other powders were also used with the aim of identifying the best material combination: in the case of WC-Co, the deposited layer was full of trapped pores, and for the other two the coatings started to crack during and after processing; moreover, the NiCo alloy showed a non-uniform and discontinuous deposition. The experimental set-up used for obtaining the IN718 deposition layers by the DED technique consists of a 3-kW Yb:YAG laser source (TruDisk 3001, Trumpf, Ditzingen, Germany) emitting in continuous mode at a wavelength of λ = 1030 nm, connected by optical fiber to the deposition optics. The optics was mounted on a robotic arm (TruLaser Robot 5020, Trumpf, Ditzingen, Germany) with 8 degrees of freedom using an electro-magnetic plate.
The deposition line is equipped with a three-beam nozzle which ensures a uniform powder distribution, independent of the process motion (Fig. 1). The laser beam is guided and focused on the workpiece through the focusing optics, while the powder stream is blown into the laser spot through 3 nozzles. The deposited layers were obtained using 600-W laser power, 0.01-m/s scanning speed, 4-g/min powder feed rate, and 14-slpm gas mix (He-Ar). These process parameters were established by trial and error on single traced lines. Below 600 W of laser power, the traced lines were not parallel and were discontinuous, with adherent unmelted particles; above this value, the temperature exceeded 1500 °C, which is undesirable because C released from the GCI substrate can be trapped in the deposited material and lead to the formation of pores. A lower scanning speed increased the process temperature, while a higher speed created discontinuities in the deposited material. A smaller powder feed rate was not enough to obtain continuous and parallel borders of the traced layer, while a larger amount of powder produced an unnecessary quantity of unmelted powder.
Fig. 1 Schematic representation of laser cladding and C atom dynamics in the deposited layer
The laser beam (Ø = 800 µm) was concentrated on the surface of the substrate via a 200-mm focal distance lens. The trajectory of the deposition layers was designed and programmed into the robot movement code generator, TruTops Cell® (Trumpf, Ditzingen, Germany). The scanning strategy chosen for laser cladding of the brake disc was a spiral-like trajectory, to keep the temperature low during the process and to reduce the risk of C evaporation from the substrate as much as possible. In order to obtain a bulk structure, the optimum interspace distance of the spiral was found to be 1 mm, while the distance between layers was 0.2 mm, which translates into a 40% overlap ratio. Each layer deposited using these process parameters had a height of 600 μm. In order to achieve a 2.4-mm thickness of the deposited material on the final product, 4 layers were successively deposited on top of each other. The optimal number of 4 superposed layers was determined from height measurements performed on cross sections of layers deposited with different numbers of tracks; 4 layers were selected because the coating reached the desired thickness with minimum time, energy, and material consumption. This thickness was established to allow post-processing of the surfaces for leveling, after which at least a 1-mm-thick deposited layer should remain over the GCI brake disc. To ensure repeatability of the process, 4 brake discs were fully coated and the height was measured at 4 points on each disc to identify the growth difference error, which can be seen in Fig. 2. The deposited layer height was 2.52 ± 0.06 mm. These experimental data differ from the theoretical height of 2.4 mm obtained in the manufacturing software because the layers are not simply deposited one on top of the other with a precise theoretical height: the underlying layer is remelted by the laser beam and the new layer sinks partly into the previous one.
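To make the spiral scanning strategy concrete, an Archimedean spiral with the 1-mm pitch mentioned above can be generated as a sequence of points. The sketch below is purely illustrative: the braking-band radii and the point spacing are assumed values, and it is not the TruTops Cell program actually used.

```python
import math

def spiral_path(r_inner, r_outer, pitch=1.0, step_mm=0.5):
    """Archimedean spiral from r_inner to r_outer with the given radial pitch (mm/turn).
    Returns (x, y) points spaced roughly step_mm apart along the path."""
    points = []
    r, theta = r_inner, 0.0
    while r <= r_outer:
        points.append((r * math.cos(theta), r * math.sin(theta)))
        # advance along the arc by ~step_mm; the radius grows by `pitch` per full turn
        dtheta = step_mm / max(r, 1e-6)
        theta += dtheta
        r += pitch * dtheta / (2 * math.pi)
    return points

# Hypothetical braking-band radii for a passenger-car disc (mm)
path = spiral_path(r_inner=80.0, r_outer=130.0, pitch=1.0)
print(len(path), path[0], path[-1])
```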
The temperature of the molten pool was recorded by a noncontact pyrometer, IGA 5 (Fort Collins, Colorado), and the substrate temperature via a chromel-alumel thermocouple wire.
The deposited samples produced by the DED technique on the GCI substrate were cut by disc cutting into small coupons of 10 mm × 10 mm × 10 mm, which were used to analyze the microstructure, the hardness, the wear, and the corrosion resistance. The surfaces were prepared to a mirror-like finish to reveal the microstructure by grinding with silicon carbide sandpaper (400 to 2500 grit), polishing with diamond abrasive paste (3-0.1 μm particle size), washing with running water, and drying in cold air, and were then chemically etched using Kalling 2 reagent (ATM, Mammelzen, Germany). The geometrical characteristics of the cross section of the clad areas were analyzed and measured using an Olympus GX51 (Tokyo, Japan) optical microscope equipped with AnalySis software for image processing. The microstructure of cross sections through the laser-clad samples was also analyzed by scanning electron microscopy (SEM) using an Inspect S microscope (FEI, Holland) equipped with an energy-dispersive X-ray spectroscopy (EDS) sensor (AMETEK Z2E).
The hardness was measured by scratch testing using a multi-function Tribometer MFT-2000 (Rtec-instruments, Yverdon-les-Bains, Switzerland) under a constant load of 25 N, a speed of 0.15 mm/s, and a diamond Rockwell stylus with a 200-µm tip radius, over a 3-mm scratch length, at an ambient temperature of 23 °C and 40% relative humidity. The scratch hardness number was calculated using Eq. (1) [43] by dividing the normal force applied on the stylus by the projected area of scratching contact, considering that the hemispherically tipped stylus of curvature radius r produces a groove whose projected contact area is a semi-circle with the final scratch width as its diameter:
HS_P = 8P / (π w²) (1)
where HS_P is the scratch hardness number [GPa], P is the normal force [N], and w is the scratch width [µm].
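As a plausibility check of Eq. (1) (the scratch width below is a hypothetical value, not a measurement from the paper): for P = 25 N and an assumed width w = 150 µm, HS_P = 8 · 25 / (π · (150 × 10⁻⁶ m)²) ≈ 2.8 GPa, which is the order of magnitude expected for a coating of roughly 430 HV (≈ 4.2 GPa when converted to a pressure).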
The same system was used for pin-on-disc tribological testing under the following conditions: 60 N applied load for 5 min, 150 rpm speed, and a 6-mm diameter pin with a spherical head made of Steel 440C. The testing machine is equipped with dedicated software for data analysis and interpretation. Prior to the hardness and wear resistance measurements on the IN718 deposited material and the GCI substrate, the surfaces were prepared by grinding and polishing. The electrochemical measurements (corrosion tests) were carried out using the linear polarization resistance (LPR) technique, a corrosion rate monitoring method that gives an indication of the corrosion resistance of materials in an aqueous environment. The corrosion tests were carried out in a three-electrode cell using a PARSTAT 4000 (Princeton Applied Research-AMETEK, USA) potentiostat/galvanostat connected to the VersaStudio software produced by the same company. The measurements were conducted according to the ASTM G5-94(2011)e1 standard. The working electrode was the GCI without and with the IN718 coating. A saturated calomel electrode (SCE) was used as the reference, and the counter electrode was a platinum mesh (99.8% Pt). Prior to each measurement, the surface of the working electrode was cleaned ultrasonically in pure water and dried at room temperature. The corrosive environment (electrolyte) was a 3.5% NaCl solution. The potentiodynamic polarization (PDP) curves were recorded by applying a potential in the range of −2000 to 2000 mV at a scanning rate of 1 mV/s. The PDP curves were used to determine the corrosion potential, the corrosion current density, and the slopes of the anodic and cathodic curves of the tested materials. All measurements were carried out at a temperature of 24 ± 0.2 °C and, to ensure repeatability, the tests were performed three times for each specimen.
Microstructure
Both longitudinal and transversal cross sections through the deposited layers were prepared, and the resulting surfaces were analysed. Figure 3a displays the optical microscopy image of a transversal, mirror-polished cross section of an Inconel 718 layer deposited on the GCI substrate. Four cuts were performed in arbitrary places of the sample in order to check for the presence of defects. Tests were performed to determine the optimal distance between two lines of deposited material, in order to obtain an esthetically pleasing but also defect-free, continuous deposition. A 40% overlap ratio between the meander lines during deposition was found to be the optimal value for achieving a homogeneous, crack- and pore-free surface in the alloyed region. This was consistent for all depositions on the four studied samples. A typical cross section is presented in Fig. 3b, showing a defect-free bonding interface between the coating and the substrate. The deposited material thickness varies from 700 µm up to 900 µm for a one-layer deposition. Detailed images of the interface between the IN718 deposition layers and the GCI support material are presented in Fig. 3c, d.
It can be noticed that the graphite lamellae were deformed and coalesced at the interface between the substrate and the deposited material. In addition, small islands of ledeburite (Le) were formed in the confluence zone between the deposited IN718 alloy and the GCI substrate. In the transversal cross section of the laser deposition (Fig. 4a), the interface line is wavy, indicating the formation of mixing zones with widths corresponding to each pass of the laser beam. In contrast, the interface with the GCI substrate in the longitudinal cross section is quite linear and has a narrow mixing zone (MZ). The structure in the laser deposition area shows a cellular dendritic growth (Fig. 4b), oriented in the direction of heat flow, which is specific to a deposit of molten material on a solid substrate. Laves phases formed in the mixing zone between the laser-deposited layers and the GCI substrate, in the form of small light-colored islands precipitated interdendritically (Fig. 4c). The recorded temperature of the melt pool during the DED process was 1450 ± 50 °C, which favours the formation of Laves phases; in IN718 these form at temperatures exceeding 1150 °C, as shown by Liu et al. [44]. The distribution of the chemical elements along a line crossing the interface between the laser-deposited layer of IN718 and the GCI substrate is shown in Fig. 4d. A mixing zone ~200 µm wide is formed at the interface, where an obvious decrease of the Ni and Cr concentrations can be observed.
Compositional analysis
In the elemental maps of Fig. 5 corresponding to Fe, Cr, and Ni, there is a clear mixing zone between the IN718 layer and the GCI substrate. This mixing zone is colored differently from the color associated with each element, because the proportions of Fe, Cr, and Ni are reduced there, compared to the bulk IN718 layer, by mixing with the GCI. This mixing zone is 40-50 µm thick and consists predominantly of Fe, Cr, and Ni, with traces of Mn, Ti, Mo, Nb, Si, and Al. Apparently, for Fe, Ni, and Cr, the penetration into the GCI substrate stops at a depth of about 50 µm, while other elements such as Mn, Ti, and Al can reach depths of more than 300 µm. At the interface between IN718 and GCI there is a narrow area of liquid metal in which the transfer of atoms between the two materials takes place, but the high concentration of Ni creates a buffer-layer effect, which greatly limits their inter-diffusion. The high carbon concentration in the GCI matrix also limits the diffusion pathways of the chemical elements from the laser deposition. The temperature of the molten metal areas, measured in real time with the thermal camera, was approximately 1450 °C, close to the melting temperature of the two materials. For the quantitative assessment of the dilution effects at the interface between the two materials, EDS spot analyses were performed in different areas: on layers 1 and 2 of the IN718 deposition, at the interface with the GCI substrate, in the cast iron approx. 25 μm below the interface, in the mixing zone at the intersection between layers 1 and 2, and on the carbon-rich areas situated at the interface with the GCI substrate (Fig. 6).
EDS measuring points (spots 9 and 10) are placed in the mixing zone between the GCI and the IN718 deposited layer, while spots 4, 6, 7, and 8 are situated on the interface between the clad and the GCI substrate. The chemical composition for each EDS measurement point is shown in Table 4. Small amounts of Cu (0.47 wt% to 0.58 wt%) from the GCI are present in the EDS analysis, but were not taken into account in Table 4.
According to the EDS chemical analysis data, the Ni concentration decreased from the standard value of 52.8 wt% (according to the spectral chemical analysis of the IN718 powder material, Table 1) to less than 31 wt% at the interface with the GCI substrate, while the iron content increases as a result of the mixing of the two materials. The variation of the Ni concentration in different points of the cross sections is due to the dilution effect. The maximum dilution occurs in the mixing zone of IN718 deposited on the gray cast iron substrate (about 31.33 wt% Ni). For the first layer, the dilution is lower and the Ni content reaches 45.86 wt% (spot 1), while in the next layer the Ni concentration reaches the nominal value specified in the powder quality certificate. Below the GCI substrate separation line, the Ni concentration is about 0.1 wt%, indicating the lack of diffusion of this element into the substrate.
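A common way to turn these Ni values into a dilution estimate (a standard composition-based cladding formula, not necessarily the exact procedure used by the authors) is to interpolate the measured concentration between powder and substrate: D ≈ (C_Ni,powder − C_Ni,measured) / (C_Ni,powder − C_Ni,substrate) = (52.8 − 31.33)/(52.8 − 0.1) ≈ 0.41, i.e. roughly 40% dilution in the mixing zone, compared with (52.8 − 45.86)/(52.8 − 0.1) ≈ 0.13, or about 13%, in the first deposited layer.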
A similar evolution can be observed in the case of Cr, whose concentration decreased from 19.2 wt% in the standard composition (Table 1) to 18.09 wt% in the second laser-clad layer and then to 15.68 wt% in the first laser-clad layer. Small quantities of Cr have been detected below the interface with the substrate (0.29 to 0.83 wt%).
Other chemical elements from the IN718 alloy, such as Nb, Mo, Ti, and Al, show a decrease in concentration as they approach the substrate interface. However, at the interface (spot 4, Table 4), an increase in the concentration of Nb (7.23 wt%) was measured, due to the formation of Laves phases or carbides [45]. No brittle compounds and no cracking effects were detected at the interface of the clad with the GCI substrate.
In the mixing zone of the overlap molten layers (spot 10, Fig. 6a), a decrease of Ni and Cr concentration was observed, together with an increase of Fe concentration (75.26 wt%). This evolution is due to mixing effects between the layers during DED. When the first laser layer is deposited, the IN718 powder and a surface layer of GCI are melted together. Some of the alloying elements from IN718, like Cr, Nb, and Ti, form carbides and Laves phase. The convection currents from the molten metal allow carbon from GCI to rapidly react with the high carbon affinity elements (Cr, Nb, Ti) from IN718, in the interface area. The carbon atoms cannot diffuse too much into the deposited metal, this phenomenon being limited by the low diffusion coefficient of C in the Ni-rich alloy and by the short interaction time with the laser beam.
In carbon-rich (2-6.67 wt%) Fe alloys such as GCI, carbon is found either in free form (nodular, lamellar, stellate, or vermicular graphite) or bound in cementite or ledeburite. The Marangoni convection model can be used to explain the mechanism of carbon migration from the graphite lamellae through the melt pool and its scattering in the deposited structure, close to the interface. Due to the rapid heating-cooling cycles, the C structures can coalesce, migrate from the GCI into the deposited structure, and be trapped inside the new layer during solidification. However, we demonstrated that, using the optimal process parameters, free carbon accumulation in the deposited layer can be almost completely avoided.
Hardness
Sets of ten measurements were performed in 6 different regions of each sample in order to obtain an accurate value of the deposited material's hardness. The average hardness of the GCI in a commercial brake disc was 290 ± 9 HV, while the hardness of the IN718 deposited material, measured on the polished surface of the deposited layer, was 430 ± 16 HV, which is higher by ~50% compared to the substrate. It must be mentioned that tests were performed in various locations of the coated samples and on multiple samples, indicating a homogeneous hardness of the coating. The values for commercially available cast IN718 alloys are ≤ 380 HV [46]. In the case of selective laser melting (SLM), lower microhardness values of around 281 ± 18 HV [47] or 330-390 HV [48,49] were reported. For DED, the ~10% hardness increase compared to the highest values reported for SLM or cast samples could be due to the high Nb content close to the surface. It is known that an increase of the Nb content favors the formation of hard phases in Inconel alloys [50,51]. Nb is soluble in the Ni-based austenite and strengthens it by producing both an elastic stress caused by the atomic size difference and an internal stress due to the difference in elasticity modulus between the austenite and the Nb atoms [52,53]. From Table 4, one can see that closer to the surface the Nb content is ~3 wt%, while in the vicinity of the interface with the GCI substrate it was not detected by EDS.
Tribological properties
The wear resistance of the GCI substrate and of the IN718 deposited layers was tested at room temperature under dry sliding conditions, and the evolution of the friction coefficient (CoF) is shown in Fig. 7. The IN718 surface presents a superior wear resistance compared to the substrate specimen under the same wear parameters. The CoF between the round head of the Steel 440C stylus and the GCI substrate was 0.536, with a tendency to decrease after 200 s of continuous testing; this is a sign that the surface was leveled by the stylus and became smoother after repeated passages over the same area. In contrast, the IN718 coatings displayed a CoF 0.12 higher against the 440C pin than the substrate specimens. In this case, the CoF was not affected by the repeated interaction between the pin and the coating over the 300-s testing interval, which is indicative of a higher hardness: the pin did not succeed in smoothening the surface after repeated passages over the same area within the testing interval.
The wear tests performed on the GCI substrate and on the IN718 superalloy deposited layers were concluded with an inspection of the worn areas via three-dimensional mapping with a confocal microscope (Fig. 8a, b, left). Figure 8a, b (right) represent cross sections of the worn surface at a specific point of the map. The circular reciprocating wear test showed significant tribological differences between the uncoated and the IN718-coated samples tested at room temperature. Figure 8a reveals a low wear track depth for the IN718 deposited layers (−50 µm), while the wear track of the uncoated GCI substrate (Fig. 8b) was 3 times deeper (−150 µm) than that of the samples coated with IN718, which demonstrates the improved wear resistance of the coated material.
Based on the wear depth and the mass loss (Fig. 9a), the wear rate can be calculated with Eq. (2) [34]:
w_r = V / (S · F) (2)
where w_r is the wear rate, V is the wear volume, S is the total friction (sliding) distance, and F is the load. The wear rate graphs of the IN718 deposited layer and of the GCI substrate are presented in Fig. 9b and show that the wear rate of the coated GCI surface is smaller by an order of magnitude than that of the uncoated GCI. This result confirms once again the excellent wear resistance of the IN718 deposited layers.
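A minimal sketch of the wear-rate calculation of Eq. (2) is given below (Python). The wear volume and track radius are hypothetical placeholders; only the load (60 N), speed (150 rpm) and duration (5 min) are taken from the text.

```python
import math

# Illustrative wear-rate calculation, w_r = V / (S * F)  (Eq. (2)).
# Wear volume and track radius below are placeholders, not measured values.
wear_volume_mm3 = 0.8            # V: wear volume (hypothetical)
track_radius_m = 0.003           # wear-track radius (hypothetical)
rpm, duration_s = 150, 300       # reported test conditions (150 rpm, 5 min)
load_N = 60.0                    # reported normal load

revolutions = rpm * duration_s / 60.0
sliding_distance_m = 2.0 * math.pi * track_radius_m * revolutions   # S
wear_rate = wear_volume_mm3 / (sliding_distance_m * load_N)         # mm^3/(N*m)

print(f"sliding distance S = {sliding_distance_m:.1f} m")
print(f"wear rate w_r = {wear_rate:.2e} mm^3/(N*m)")
```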
Corrosion properties
It is expected that the IN718 layer has corrosion properties superior to those of the GCI substrate, as IN718 is mainly a Ni-Cr alloy, both elements being known for their ability to form a natural protective oxide layer against corrosion. The following tests aim to show the magnitude of the corrosion protection offered by IN718 as compared to GCI. The corrosion tests were conducted in 3.5% NaCl aqueous solution. Figure 10a shows the variation of the open circuit potential, while Fig. 10b presents the PDP Tafel curves of the investigated samples. From the Tafel curves, the corrosion potential (Ecorr), the corrosion current density (icorr), and the slopes of the anodic (βa) and cathodic (βc) curves of the tested materials were determined. The values of the corrosion parameters of the tested materials are listed in Table 5. From Fig. 10b and Table 5, a significant difference in Ecorr can be observed, which indicates the improved corrosion resistance of the IN718-coated material compared with the bare GCI. The higher the corrosion potential of a material, the lower the corrosion current, resulting in a superior corrosion resistance. Also, a smaller value of the corrosion current density (icorr) indicates a better corrosion resistance and validates once again that the IN718-deposited layers protect the brake disc from corrosion. Where a lower Ecorr was recorded for the deposited layers, this can be attributed to the fact that the measurements are performed on the surface of the coatings; it is normal for the Ecorr of the surface oxide to be much lower than the Ecorr of the bare metals.
Based on the parameters presented in Table 5, we can calculate the polarization resistance according to the ASTM G59-97 (2014) standard, given by Eq. (3) [54]:
R_p = (β_a · β_c) / [2.303 · i_corr · (β_a + β_c)] (3)
where R_p is the polarization resistance, β_a is the slope of the anodic curve of the tested material, β_c is the slope of the cathodic curve of the tested material, and i_corr is the corrosion current density.
The polarization resistance is defined as the resistance of a sample to corrosion during the application of an external current.
Fig. 9 The mass loss (a) and wear rate (b) of the IN718-coated and the bare GCI
Fig. 10 The open circuit potential evolution (a) and the potentiodynamic polarization Tafel curves (b) of the IN718 deposited layers and the GCI substrate
Based on the corrosion current density icorr, the corrosion rate (CR) can be calculated using Faraday's law as Eq. (4) [55]:
CR = k_1 · i_corr · EW / ρ (4)
where k_1 is a constant that defines the units of the corrosion rate, with the value 3272 mm/(A cm year); EW is the equivalent weight of the metal in grams, which for the studied materials is 27.92; and ρ is the density, which is ~7 g/cm³ for GCI and 8.12 g/cm³ for IN718. The resulting values of the polarization resistance (R_p) and of the corrosion rate (CR) are presented in Table 6 for both investigated materials, IN718 deposited by DED and the GCI substrate.
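The sketch below (Python) ties Eqs. (3) and (4) together. The Tafel slopes and corrosion current density used here are hypothetical placeholders intended only to show the unit handling; the measured parameters appear in Tables 5 and 6 of the original paper.

```python
# Illustrative use of the Stern-Geary relation (Eq. (3)) and the Faraday-law
# corrosion rate (Eq. (4)). All input values below are hypothetical placeholders.

def polarization_resistance(beta_a, beta_c, i_corr):
    """R_p = (beta_a * beta_c) / (2.303 * i_corr * (beta_a + beta_c))  [ohm*cm^2]."""
    return (beta_a * beta_c) / (2.303 * i_corr * (beta_a + beta_c))

def corrosion_rate(i_corr, equivalent_weight, density, k1=3272.0):
    """CR [mm/year] = k1 * i_corr [A/cm^2] * EW [g] / density [g/cm^3]."""
    return k1 * i_corr * equivalent_weight / density

beta_a, beta_c = 0.12, 0.10   # V/decade, hypothetical Tafel slopes
i_corr = 5e-6                 # A/cm^2, hypothetical corrosion current density

rp = polarization_resistance(beta_a, beta_c, i_corr)
cr = corrosion_rate(i_corr, equivalent_weight=27.92, density=8.12)  # IN718 density

print(f"R_p ~ {rp:.0f} ohm*cm^2, CR ~ {cr:.3f} mm/year")
```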
It was found that coating the surface with IN718 by DED increased the polarization resistance of the electrode by a factor of four. This is confirmed by the corrosion rate of the IN718 deposited layers, which is about three times lower than the CR of the GCI in 3.5 wt% NaCl aqueous solution.
Functional tests
Once the IN718 deposited layers were demonstrated to have improved mechanical properties, the optimal parameters were used to fully coat two brake discs by DED (Fig. 11a). After deposition, a post-processing step by turning (Fig. 11b) was necessary in order to achieve the dimensions specified in the technical drawing.
Brake discs coated by DED with IN718 layers and post-machined by turning were tested by measuring the braking force on a roller brake tester at an authorized car service. The discs were mounted on a Renault Megane and tested in conformity with the European Union legislation for vehicle circulation (Fig. 11c). The European Parliamentary Research Service has developed a transport policy oriented towards safety and security through common standards and rules [56].
The friction force and the braking coefficient are the parameters of interest for these tests. In the case of brake discs, the friction force can be calculated using Eq. (5) [57]:
F_f = c_f · P_f (5)
where F_f is the friction force, c_f is the friction coefficient, and P_f is the pressing force of the brake pad. The test results showed a friction force of ∼2700 N.
The braking coefficient represents the ratio between the sum of the braking forces on the wheels on which the brake whose effectiveness is being verified acts and the weight of the vehicle, expressed as a percentage:
braking coefficient = 100 · Σ_{i=1..n} (F_is + F_id) / G
where F_is (daN) is the braking force on the left wheel of front axle i; F_id (daN) is the braking force on the right wheel of front axle i; n is the number of front axles; and G (daN) is the weight of the vehicle.
The braking coefficient value for the DED IN718-coated brake discs was 71. We mention that the critical value over which a braking system of a vehicle is considered reliable is 58, according to the legal regulations for circulation of vehicles on the public road [58].
Overall, the braking system consisting of the brake disc and brake pad fulfilled the testing standards for vehicles imposed by European regulations and a conformity certificate was obtained.
Conclusions
In this paper, the properties of an IN718 metallic layer deposited by a laser directed energy deposition method on GCI substrates were analyzed. The coating material, IN718 alloy, was initially in the form of spherical powder, which was guided by a three-beam nozzle and melted by the laser beam, producing an adherent, compact, and uniform layer on the surface of the GCI substrate after solidification.
The results reveal improved mechanical properties of GCI coated with IN718 by DED, and the following conclusions can be drawn:
• The interface between the substrate and the deposited material was uniform, forming a compact bond between the two materials. The 40% overlap ratio was selected as the optimal value in order to achieve a homogeneous, crack- and pore-free surface in the alloyed region. The diffusion of the main chemical elements (Ni, Cr, Nb, Ti) from the IN718 laser-deposited layers into the GCI substrate was negligible, and no brittle compound that could produce cracks was detected at the interface.
• The average hardness of the GCI is 290 ± 9 HV, while the hardness of the IN718 deposited material was 430 ± 16 HV, i.e. ~50% higher.
• The GCI substrate had a friction coefficient (CoF) of 0.5 that dropped to 0.4 during testing because of surface leveling following multiple passes of the pin over the same area, while the IN718 coatings presented a CoF higher by 0.12 compared to the substrate material and were not significantly affected by erosion during testing. The depth of the wear track for the layers deposited with IN718 was −50 µm, while the depth of the wear track of the exposed GCI substrate was 3 times greater (−150 µm). IN718 succeeded in protecting the substrate against wear, which translates into an extension of the brake disc lifetime.
• The corrosion resistance of the coating was superior to that of the GCI substrate in 3.5 wt% NaCl aqueous solution. It was found that the surface coated with IN718 by the DED method increased the polarization resistance of the electrode by a factor of four and displayed a 40% smaller corrosion current density (icorr), which indicates a better corrosion resistance. Therefore, this coating method has the potential to improve the corrosion behavior of brake discs, hence prolonging their lifetime.
• The pairing of the coated brake discs with the brake pads was tested by measuring the braking force and the friction coefficient in a real environment at an authorized car testing facility. The discs fulfilled the testing standards for vehicles imposed by European regulations and received a conformity certificate.
Coating the brake disc with a hard and highly corrosion-resistant alloy could be an interesting solution for the automotive wheel and brake engineering industry. The cost is kept low, as the main body of the disc is made of inexpensive GCI, while the deposited layer prolongs the lifetime of the part and reduces the pollution caused by metal particles expelled into the atmosphere during braking.
In terms of future work, some questions remain open for future scientific and functional testing in view of product development: (i) the high C content of the GCI can be affected close to the surface during laser irradiation, changing the microstructure and/or composition of the GCI, and how this affects the long-term performance of the substrate is not yet known; (ii) the brake discs are very difficult to machine after cladding with IN718, so a reliable and affordable method for surface finishing has to be found; (iii) the discs have not yet been tested on a track circuit for a long period of time under various driving conditions (weather, driving style, different brake pad materials).
"Materials Science",
"Engineering"
] |
Effects of Cooling Process of Al2O3- water Nanofluid on Convective Heat Transfer
Research has been conducted to investigate the convective heat transfer and pressure drop of an alumina-water nanofluid under the laminar flow regime. The test section was a 1.1-m-long tube with a 5-mm inner diameter in a double-pipe heat exchanger with constant wall temperature. The hot nanofluid flows inside the tube, while the cold water flows outside. The volume concentration of the nanoparticles was varied among 0.15%, 0.25% and 0.5%. The experiments show that the convective heat transfer increases remarkably with the nanoparticle concentration over various values of the Reynolds number. The Nusselt number increases by about 40.5% compared to pure water at 0.5% volume concentration. The pressure drop of the nanofluid increases slightly with increasing volume concentration. However, the difference from pure water is insignificant, so the use of the nanofluid carries little pressure-drop penalty.
INTRODUCTION
Various methods have been used to enhance heat transfer, such as modifying surface roughness as a turbulence promoter, flowing fluids through microchannels, and using nanofluids. In the past 20 years many researchers have studied the properties of nanofluids, which are expected to become the next generation of heat transfer technology due to their better thermal performance compared to traditional heat transfer fluids [1]. A nanofluid can be defined as a fluid in which solid particles with sizes under 100 nm are suspended and dispersed uniformly. The base fluids are the same as traditional heat transfer fluids, e.g., water, oil, and ethylene glycol.
Many researchers have observed the phenomenon of higher thermal conductivity of various nanofluids compared to that of the base fluids. However, differences between the results were observed: some showed that the increase of thermal conductivity of nanofluids is an anomaly that cannot be predicted by the existing conventional equations [2,4], while others showed that the increase is not an anomaly and can be predicted by using the existing conventional equations [5]. Regarding convective heat transfer, Li and Xuan [3] reported that, in the laminar and turbulent forced convection regimes, the heat transfer coefficient of Cu-water nanofluids flowing inside a uniformly heated tube increased remarkably. The heat transfer coefficient increased by around 60% for a 2 vol.% nanoparticle concentration compared to that of pure water. Furthermore, it was observed that increasing the nanoparticle concentration would also increase the heat transfer coefficient. Interestingly, the experimental results showed that there is no significant increase in pressure drop compared to that of water. Thus, there is no need to worry about the drawback of increased pumping power.
Experimentally, the enhancement of the laminar-flow convection coefficient of Al2O3-water nanofluids under constant wall temperature during heating is much higher than that predicted by single-phase heat transfer correlations used in conjunction with the nanofluid properties, as reported in [6]. It was also concluded that the heat transfer enhancement of nanofluids is not merely due to their increased thermal conductivity, which means other factors may contribute to this phenomenon.
Duangthongsuk and Wongwises [7] presented the heat transfer and flow characteristics of a nanofluid consisting of water and TiO2 nanoparticles at 0.2% volume concentration in a double-tube heat exchanger. The results showed that the convective heat transfer coefficient of the nanofluid was only slightly higher than that of the base fluid, by about 6-11%, with little increase in pressure drop.
Anoop et al [8] carried out convective heat transfer experiments with Al 2 O 3 -water nanofluids in the developing region of pipe flow with constant heat flux to evaluate the effect of particle size on convective heat transfer coefficient. In their work, two particle sizes (45 nm and 150 nm) were used and it was observed that the nanofluid with 45 nm particles showed higher heat transfer coefficient than that with 150 nm particles.
Hojjat et al. [13] conducted experiments to investigate the heat transfer of Al2O3, TiO2 and CuO nanoparticles dispersed in an aqueous solution of carboxymethyl cellulose (CMC) used as the base fluid, in the fully developed turbulent flow regime. The results showed that all nanofluids have a higher heat transfer coefficient than the base fluid. The enhancement of the heat transfer coefficient increases with both the Peclet number and the nanoparticle concentration, and is much higher than can be attributed to the increase in thermal conductivity alone.
The goal of the experiments by Asirvatham et al. [10] was to investigate the heat transfer of silver-water nanofluid, with volume concentrations varied from 0.3% to 0.9%, in a double-tube heat exchanger. The results showed heat transfer coefficient enhancements of 28.7% and 69.3% for 0.3% and 0.9% volume concentration, respectively. The authors of [11] investigated the heat transfer and pressure drop of TiO2-water nanofluid in a circular tube at 0.05% to 0.25% volume concentrations. The results indicated that with the addition of small amounts of nanoparticles to the base fluid, the heat transfer coefficient increased. At a Reynolds number of 5000 and 0.25% volume concentration, there was an increase of about 22% in the heat transfer coefficient. The pressure drop of the nanofluid increased with increasing nanoparticle volume concentration; the maximum pressure drop was about 25% greater than that of pure water, occurring at 0.25% volume concentration and a Reynolds number of 5000.
In this study, the enhancement of convective heat transfer and the pressure drop were investigated using an Al2O3-water nanofluid flowing in a concentric double-pipe heat exchanger with counter flow. The effects of the rate of the cooling process of the nanofluid, the concentration of Al2O3 nanoparticles, the thermal resistance, and the flow rates on convective heat transfer and pressure drop were examined.
Nanofluid preparation
γ-Al2O3 nanoparticles with sizes of 20-50 nm, produced by Zhejiang Ultrafine Powder & Chemical Co., Ltd, China, were used. The nanofluid was prepared by dispersing Al2O3 nanoparticles at different volume concentrations in distilled water as the base fluid. A mechanical mixer (magnetic stirring) was used for dispersing the nanoparticles. The weight of nanoparticles equivalent to each volume concentration was measured and gradually added to distilled water while being agitated in a flask. No sedimentation was observed after 5 hours at the low concentrations used in this work.
Thermo-physical properties of nanofluid
The thermo-physical properties of the nanofluid, such as density, viscosity, specific heat and thermal conductivity, are calculated using the correlations for density [15], viscosity [12], specific heat [16] and thermal conductivity [17], where φ is the particle volume concentration and the subscripts b, p and nf denote the base fluid, the nanoparticle and the nanofluid, respectively.
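The correlations themselves did not survive extraction of the source. For orientation, the sketch below (Python) implements commonly used forms for these four properties: a volume-weighted mixture density, the Brinkman viscosity model, a heat-capacity-weighted specific heat, and the Maxwell thermal conductivity model. These forms are assumptions on our part and may differ in detail from the expressions of the cited references [12, 15-17]; the base-fluid and particle property values used below are likewise approximate.

```python
# Commonly used nanofluid property relations (illustrative; the correlations
# actually cited as [12, 15-17] in the paper may differ in detail).

def nanofluid_properties(phi, base, particle):
    """phi: particle volume fraction; base/particle: dicts of rho, cp, k (mu for base only)."""
    rho_nf = phi * particle["rho"] + (1.0 - phi) * base["rho"]               # mixture density
    mu_nf = base["mu"] / (1.0 - phi) ** 2.5                                   # Brinkman viscosity
    cp_nf = (phi * particle["rho"] * particle["cp"]
             + (1.0 - phi) * base["rho"] * base["cp"]) / rho_nf               # heat-capacity mixture
    kp, kb = particle["k"], base["k"]
    k_nf = kb * (kp + 2*kb + 2*phi*(kp - kb)) / (kp + 2*kb - phi*(kp - kb))   # Maxwell model
    return rho_nf, mu_nf, cp_nf, k_nf

water_40C = {"rho": 992.2, "cp": 4178.0, "k": 0.631, "mu": 6.53e-4}   # approximate water at 40 C
alumina = {"rho": 3970.0, "cp": 765.0, "k": 40.0}                      # typical gamma-Al2O3 values

for phi in (0.00125, 0.0025, 0.005):        # the 0.125%, 0.25% and 0.5% cases
    rho, mu, cp, k = nanofluid_properties(phi, water_40C, alumina)
    print(f"phi = {phi:.4%}: rho = {rho:.1f} kg/m3, mu = {mu:.3e} Pa.s, "
          f"cp = {cp:.0f} J/(kg.K), k = {k:.4f} W/(m.K)")
```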
In this study, Al2O3-water nanofluids with particle volume concentrations of 0.125%, 0.25%, and 0.5% are used to evaluate the heat transfer performance of nanofluids. Table 1 lists the thermophysical properties of these nanofluids at the characteristic temperature of 40 °C.
Experimental apparatus
The experimental system used for this study is shown schematically in Figure 1; it comprises two flow circuits, for cold water and hot nanofluid. It consists of a flow loop, a heating unit, a cooling unit, a flow measuring unit and a pressure drop measuring unit. The flow loop contains a reservoir, a pump, a valve for controlling the flow rate, and the test section. The test section, made from smooth brass tube, is 1.1 m long with a 5-mm inner diameter and a 6.24-mm outer diameter, while the outer tube is a stainless steel tube with a 38.5-mm outer diameter and a 3-mm wall thickness. The experiments were performed in counter-flow mode in a horizontal double-pipe heat exchanger, with the hot nanofluid flowing inside the inner tube while the cold water flows through the annulus. The temperature of the nanofluid at the inlet was maintained at 40 °C. The stainless steel outer tube was thermally insulated using an Aeroflex tube of 35-mm diameter and 10-mm thickness. Four K-type thermocouples were mounted at different longitudinal positions on the tube wall surface, two K-type thermocouples were inserted into the flow at the entrance and exit of the test section to measure the bulk temperatures of the nanofluid, and two K-type thermocouples measured the cold water temperatures at the entrance and exit of the annulus. A differential pressure transducer (Omega PX 409-005DWU5V) was used for measuring the pressure drop along the test section. An electric heater with a PID controller was installed to keep the temperature of the nanofluid constant.
A cooler tank with a thermostat is used to keep the nanofluid temperature constant. The nanofluid and cold water flow rates were controlled by adjusting the bypass flow valves and measured by Dakota rotameters. To create a constant wall temperature boundary condition, the cold water was circulated at a high flow rate. During all experiments, the inlet and outlet temperatures of the nanofluid and the wall temperatures at the various positions were measured and recorded using a thermocouple data acquisition module (Omega TC-08). The average of all data was used in the present study.
Data Analysis
The heat transfer performance of the nanofluid flowing through the tube was expressed in terms of the convective heat transfer coefficient.
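The defining relations did not survive extraction; a typical formulation for this kind of constant-wall-temperature test section (an assumption on our part, not necessarily the exact data reduction used here) is q = ṁ c_p (T_in − T_out), h = q / [A_s (T_w − T_b)] and Nu = h D / k, where ṁ is the nanofluid mass flow rate, A_s is the inner surface area of the tube, T_w is the average wall temperature, T_b is the mean bulk temperature of the nanofluid, D is the tube inner diameter and k is the fluid thermal conductivity.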
Uncertainty analysis
The uncertainty of the experimental results was determined from the measurement deviations of the parameters, including weight, temperature, flow rate and pressure drop. The weight (W) of nanoparticles was measured by a precision electronic balance with an accuracy of ±0.001 g, the precision of the temperature data acquisition (T) is ±0.1 °C, the flow rate (V) was measured by a rotameter with a full-scale accuracy of ±5%, and the pressure drop (P) was measured by a pressure transducer with an accuracy of 2%. The uncertainty of the experimental results can be expressed as in [20]; the resulting overall uncertainty of the experiment was less than ±6.0%.
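The propagation formula referred to above is missing from the extracted text; the usual root-sum-of-squares form, presumably what reference [20] prescribes, reads U_R/R = [(ΔW/W)² + (ΔT/T)² + (ΔV/V)² + (ΔP/P)²]^(1/2), which, with the instrument accuracies listed above (the ±5% flow-rate and 2% pressure-drop terms dominating), indeed gives a value below ±6.0%.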
Convective heat transfer
To provide a baseline for the convective heat transfer results obtained with the nanofluid, a similar experiment was first performed using pure water as the working fluid. Figure 2 compares the experimental results for pure water in the laminar flow regime with the prediction of Sieder and Tate [9], which is defined as
Nu = 1.86 (Re · Pr · D/L)^(1/3) (μ/μ_w)^0.14 (10)
Very good agreement between the experimental data and the Sieder-Tate equation was obtained, which confirms the accuracy and reliability of the experiments. It should be noted that in equation (10) Re and Pr are the Reynolds number and the Prandtl number, respectively, which are defined as
Re = ρ U D / μ, Pr = μ C_p / k
where Nu is the Nusselt number; μ_w, μ, ρ, C_p, U and k are the viscosity of the fluid at the wall, the viscosity of the water, the density, the specific heat, the average velocity and the thermal conductivity, respectively; and D and L are the inner diameter and the length of the tube.
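A minimal sketch of the Sieder-Tate comparison is given below (Python); the Prandtl number and the viscosity-ratio correction are illustrative placeholders, while the tube diameter and length are the reported test-section dimensions.

```python
# Sieder-Tate laminar correlation, Eq. (10): Nu = 1.86 * (Re*Pr*D/L)**(1/3) * (mu/mu_w)**0.14
# Placeholder property inputs; D = 5 mm and L = 1.1 m are the reported test-section dimensions.

def nu_sieder_tate(re, pr, d, length, mu_over_mu_wall=1.0):
    return 1.86 * (re * pr * d / length) ** (1.0 / 3.0) * mu_over_mu_wall ** 0.14

D, L = 0.005, 1.1                 # m
for re in (500, 1000, 2000):
    print(re, round(nu_sieder_tate(re, pr=4.3, d=D, length=L), 2))  # Pr ~ 4.3 for water at 40 C
```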
As shown in Figure 3, the Nusselt number increases with the Reynolds number as well as with the particle volume concentration. It can be clearly seen that the Nusselt number of the nanofluid is higher than that of the base fluid (water) at a given Reynolds number. For example, at a Reynolds number of 2000 the Nusselt number increases from 6.12 to 8.6 (40.5%) when going from 0% volume concentration (pure water) to 0.5%. The thermal conductivity enhancement according to equation (4) is about 1.9% for the same volume concentration. It can be concluded from this figure that mechanisms other than the thermal conductivity increase may be responsible for the heat transfer enhancement. The enhanced convection heat transfer caused by the presence of nanoparticles is tentatively attributed to the interactions of the nanoparticles with the wall as well as with the surrounding fluid. It is considered in [18] that the interactions between nanoparticles and the solid wall play an important role in the convective heat transfer of nanofluids. The nanoparticles, serving as 'heat carriers', frequently collide with the tube wall. With an increase in the nanoparticle concentration, the interactions and collisions between nanoparticles and the wall become more frequent, and thus cause a much higher heat transfer and Nusselt number.
The experimental results obtained in this study with the Al2O3-water nanofluid in laminar flow were then compared with two other correlations for forced convective heat transfer under laminar flow: an empirical correlation by Li and Xuan [3] and a numerical correlation for tube flow by Maiga et al. [14], given in equations (13) and (14). As can be seen in Figure 4, at a 0.5% volume concentration of Al2O3-water, the experimental data lie between the predictions of the Maiga and the Li & Xuan correlations. Figure 5 shows the variation of the thermal resistance with the Reynolds number for the nanofluid at different particle volume concentrations. It is found that for both pure water and the Al2O3-water nanofluids, the thermal resistance decreases with increasing Reynolds number. The rate of decrease of the thermal resistance with increasing Reynolds number is relatively fast at lower Reynolds numbers, but slows down as the Reynolds number increases. At the same Reynolds number (Re = 2000), the thermal resistance decreases with increasing particle concentration: going from 0% volume concentration (pure water) to 0.5%, the thermal resistance decreases from 0.105 °C W⁻¹ to 0.083 °C W⁻¹ (20.9%).
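The thermal resistance plotted in Fig. 5 is not defined in the extracted text; it is presumably the overall convective resistance of the test section, R_th = (T_w − T_b)/q = 1/(h A_s), which is consistent with the quoted units of °C W⁻¹ (this definition is our assumption, not a statement from the original paper).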
Pressure drop of nanofluid
To check the accuracy of the flow measurements, the measured pressure drop for laminar flow was compared to the Hagen-Poiseuille correlation:
ΔP = 32 μ L U / D²
Figure 6 shows the pressure drop of the hot water inside the tube as a function of the Reynolds number at a temperature of 40 °C. Good agreement exists between the experimental data and the theoretical model for laminar flow. Figure 7 shows the variation of the pressure drop with the Reynolds number for the nanofluid at different particle volume concentrations. It can be clearly seen that, at these low volume concentrations, the pressure drop of the Al2O3-water nanofluid differs only insignificantly from that of the hot water. This is because the small particle loading has only a slight effect on the viscosity of the nanofluid, which leads to only a tiny change in the measured pressure drop.
The pressure drop depends on the viscosity of the working fluid inside the tube. With increasing particle concentration the viscosity of the nanofluid increases, but the radial temperature gradient in the working fluid also produces a viscosity distribution in the nanofluid, which is an important parameter for the pressure loss. In particular, the viscosity at the wall significantly affects the pressure loss. Since the viscosity of the nanofluid is a strong function of temperature, the pressure drop of the nanofluid depends on the temperature of the working fluid.
CONCLUSIONS
The convective heat transfer and pressure drop characteristics of Al2O3-water nanofluids at three volume concentrations flowing in a circular tube in the laminar flow regime were experimentally investigated, focusing on the effects of the Reynolds number and of the nanofluid volume concentration. The following conclusions have been obtained. (1) The convective heat transfer of the Al2O3-water nanofluids is significantly higher than that of pure water. (2) The convective heat transfer increases with both the Reynolds number and the volume concentration. (3) The pressure drop of the Al2O3-water nanofluids is approximately the same as that of pure water under the given conditions. This implies that the nanofluid incurs no pumping power penalty and may be suitable for practical application.
Re - Reynolds number
"Physics",
"Engineering"
] |
Real-World Laboratories Initiated by Practitioner Stakeholders for Sustainable Land Management—Characteristics and Challenges Using the Example of Energieavantgarde Anhalt
Real-world laboratories have gained substantially in importance as a format in sustainability and transformation research in recent years in Germany. This increase in significance is associated with the expectation of fostering and experimentally investigating transformations towards sustainability under real-world conditions in a bid to gain knowledge of their dynamics, to identify characteristics of successful transformation processes, and to be able to transfer this knowledge to other cases. Real-world laboratories are usually managed by a scientific partner, enabling use to be made of established procedures and methods in areas such as knowledge integration. Where responsibility for coordinating real-world laboratories lies with practitioner stakeholders, there is promising potential for their deployment. However, it also gives rise to situations, processes and challenges that are new to all parties involved and that have yet to be explored. In principle, experimental approaches that are characteristic of real-world laboratories are not new in the field of sustainable land management and spatial development. However, they are not traditionally alluded to as the real-world laboratory format. The two desiderata above provide the starting point for the present article. The aim of this article is to classify and reflect on the possibilities generated by real-world laboratories that have been initiated by practitioner stakeholders. A prime example of such real-world laboratories are those developed by Energieavantgarde Anhalt. This registered association wishes to contribute to sustainable land management in the context of the energy transition in rural areas, featuring small and medium-sized towns. A comparative analysis of these real-world laboratories is conducted using core characteristics from the scientific debate on real-world laboratories. As a result, the insight gained from this analysis can be used for future development and research.
for transformation research has only just begun. 2 Real-world laboratories are "in their infancy" (Beecroft and Parodi 2016: 4), and a detailed methodological and theoretical concept has yet to be developed (Grunwald 2016: 204f). Although it can be assumed that real-world laboratories initiated by practitioner stakeholders have high deployment potential on account of the practical approach they take, they have attracted little attention in the scientific debate to date (Engels and Rogge 2018; Menny et al. 2018).
Real-world experiments are generally considered to be the core of the real-world laboratory approach (Schäpke et al. 2017: 3). This idea is not new in the field of sustainable land management. In fact, the approach of experimentally investigating social change processes at the urban level goes back to the sociological Chicago School of the 1920s (Gross et al. 2005: 65ff; Schneidewind 2014: 3). In land management, for example, there are many projects and model projects of an experimental nature that are not, however, termed real-world laboratories. Examples include the International Building Exhibitions (IBA), state and national flower shows, the regional structural aid measures in North Rhine-Westphalia called the REGIONALE, and various regional development processes in the context of European regional assistance (see De Flander et al. 2014: 285; Hohn et al. 2014). As yet, the experience gained in these measures is largely detached from the real-world laboratory debate, meaning that a great deal of research is needed to bring together these aspects (De Flander et al. 2014: 285).
These two desiderata provide the starting point for the present article. The objective is to integrate into the scientific debate real-world laboratories that have been initiated and coordinated by practitioner stakeholders for the purpose of sustainable land management, and to reflect on the possibilities and limitations of those approaches. In the process, questions that have not been addressed in the scientific discussion to date are of importance: What challenges arise when real-world laboratories are initiated by practitioner stakeholders? Are those challenges similar to those arising in real-world laboratories designed by scientists? How do they differ? What opportunities do they offer, and what added value can be expected from the real-world laboratory format? To answer these questions, Sect. 11.2 provides an account of the real-world laboratories initiated by the Energieavantgarde Anhalt (EAA) association, which include the urban laboratories undertaken within the joint research project "The re-productive town" 3 funded by the Federal Ministry of Education and Research (BMBF). The core characteristics of real-world laboratories are then identified from the scientific literature. These characteristics provide the theoretical basis for discussing the special features of real-world laboratories that have been initiated by practitioner stakeholders (Sect. 11.3). This discussion is based not only on the experience gained from the BMBF-funded project "The re-productive town", but also on insights from three workshops held with EAA members and other interested participants (business representatives, especially from the utilities sector; representatives from local government and politics and from science) in 2017. The article concludes with a critical analysis and an outlook for future developments (Sect. 11.4).
2 Concerning classification in transdisciplinary research and transformation research, see, e.g. Wittmayer and Hölscher (2016), Rogga et al. (2018).
3 The project entitled "The re-productive town. Changing towns for achieving the energy and sustainability transition" [original in German: Die re-produktive Stadt. Die Stadt verändern, um die Energie- und Nachhaltigkeitswende zu schaffen] receives BMBF funding under the FONA/Social-Ecological Research: "Sustainable Transformation of Urban Areas" funding line from August 2016 to July 2019; see https://re-produktive-stadt.energieavantgarde.de.
Real-World Laboratories Initiated by the Practitioner
Stakeholder Energieavantgarde Anhalt e.V.
Energieavantgarde Anhalt (EAA) is an association that acts as a network of stakeholders comprising civil activists, municipalities and rural districts, companies and other institutions in the Anhalt-Bitterfeld-Wittenberg region. This network is committed to accelerating the energy transition in the region in cooperation with national and European partners. The approach was developed in the context of the profound, multiple socio-economic transformation processes that pose huge challenges to towns and regions, such as the closure of businesses and the loss of livelihoods, demographic change, and the energy transition. The direct impact of these processes is very much apparent in the Anhalt-Bitterfeld-Wittenberg region: high cost pressure relating to infrastructure, a sharp fall in property prices, the demolition of entire neighbourhoods. A wide range of technical, economic and socio-cultural innovations are needed to meet the challenges associated with these developments in this region dominated by lignite mining and the chemical industry. These innovations radically change land uses, creating new decentralised, interconnected energy systems based on renewables, as well as new urban-rural relations. In the process, EAA places particular emphasis on the regionalisation of energy production and energy use and on sector coupling. To achieve this, developments in the area of prosumer models and demand-side management measures should encourage not only resource efficiency, but also system-supporting, flexible energy consumption behaviour, and enable as many citizens as possible to participate in the regional energy transition through regional added value and democratic processes. Since there is no ready guidance on how to meet these challenges and since a wide range of individual issues need to be resolved, the association calls this large-scale regional experiment the "Anhalt Real-World Laboratory" (www.energieavantgarde.de). In this laboratory, partners engaged as practitioners in the region and scientists join forces to design a variety of sub-laboratories and experimental set-ups. With this in mind, the association brings together within its framework not only local authorities, public utility companies and technology companies from the renewables sector, but also civil society interest groups. The projects in the region initiated by EAA are generally based on the experience and issues raised by association members within their everyday operations and on collaboration with research institutions in other projects. In their role as project initiators and project coordinators in the region, members of the association explicitly represent the interests of the association and of the regional stakeholders it represents. As a result, the focus is on searching for workable approaches for promoting sustainable development by using renewable energies and achieving high resource efficiency. Considering contributions from the current scientific real-world laboratory debate, this regional institution could also be characterised as a "real-world laboratory as a whole" [own translation] (Beecroft et al. 2018: 80) where various transdisciplinary sustainability projects are implemented.
The joint research project entitled "The re-productive town", which has received BMBF funding for three years, is one of the outcomes of EAA's activities in the Anhalt region. Initiated by EAA, the research alliance comprises EAA and Bitterfeld-Wolfen Town Council as its practice partners, and Brandenburg University of Technology (BTU) Cottbus-Senftenberg/Chair of Urban Technical Infrastructure, the Fraunhofer Institute for Microstructure of Materials and Systems (IMWS) and inter 3 Institute for Resource Management as its science partners. The project is accompanied scientifically by sustainify Institut für nachhaltige Forschung, Bildung, Innovation. In the research project, the town of Bitterfeld-Wolfen is taken as an example of the urban planning challenges associated with transformation. This town seems to be particularly suited to develop and test new approaches for social-ecological urban development. The starting point of the project is the energy sector, from which inroads are made into agriculture and forestry, architecture and building services, industry and finance, citizenry, the urban economy and the urban landscape. Possibilities are systematically sought to consider unexploited resources such as brownfield sites, sun, wind and green waste as well as secondary resources such as waste heat and refuse as a starting point for something new. These innovations are then reutilised for the benefit of the town and its inhabitants or the processes that generate them are directly changed. Conceptually, the approach refers back to the concept of (re)productivity proposed by Biesecker and Hofmeister (2006). According to this concept, (urban) production and consumption processes must be designed in such a way that the town maintains or even improves its material/energy and economic/social reproductive capability in order to remain sustainable or to ensure its long-term survival. The aim is to use the systematic improvement of the material/energy and economic/social reproductive capability of Bitterfeld-Wolfen Town to develop a blueprint for a possible transformation path for a new, yet very common type of town as a result of territorial reforms: an extensive, medium-sized, polycentric town that can be expected to offer new starting points for energy and sustainability transition and, as a result, new townscapes and urban landscapes.
Urban laboratories are a core format. Urban laboratories are site-specific participatory and communication platforms that map ongoing local transformation processes and enable broad participation. They provide the experimental basis for developing, negotiating and implementing, in urban design, solutions for the use of secondary resources in urban spaces in cooperation with the population, companies and the administration. This is undertaken in work phases typical of living labs or real-world laboratories, such as co-design, co-creation, co-exploration, co-experimentation/testing and co-evaluation steps. More specifically, four urban laboratories representing neighbourhoods typical of medium-sized towns were selected in consideration of characteristics such as resource potentials and stakeholder constellations. These neighbourhoods are
• A central, inner-city area, characterised by a combination of brownfield and industrial areas (neighbourhood type: inner-city brownfield in a central location; "Am Plan" urban laboratory)
• A detached housing estate, including listed buildings, that faces extensive changes in ownership structure (neighbourhood type: existing housing estate with a garden city character; "Gartenstadt" urban laboratory)
• A new housing estate with detached houses and multiple dwellings on an urban open space (neighbourhood type: new residential area; "Krondorfer Wiesen" urban laboratory)
• A multiple dwelling demolition area characterised by industrial housing construction as well as demographic and socio-economic challenges (neighbourhood type: industrial prefabricated large housing estate; "Wohnkomplex 4/4" urban laboratory).
A Comparison of Core Characteristics
As outlined in the introduction, there is as yet no uniform theoretical and detailed methodological concept of real-world laboratories (Grunwald 2016: 204f), and therefore no uniform definition either. However, several scientific institutions are currently performing further groundwork, especially also in the context of research in support of the real-world laboratories (BaWü Labs) funded in Baden-Württemberg. 4 In the international arena, there are also a multitude of other approaches that are similar to the real-world laboratory format or that were used as its basis. These include living labs, sustainable living labs and urban transition/living labs (for a comparative overview, see Schäpke et al. 2017: 28ff; Schäpke et al. 2018b). Furthermore, an almost inflationary (and simultaneously unspecific) use is currently being made of the term "lab" in other fields. According to a definition originally introduced by Schneidewind (2014), a real-world laboratory generally describes "… a societal context in which researchers carry out interventions in the sense of 'real-life experiments' in order to learn about social dynamics and processes" [own translation] (Schneidewind 2014: 3). Real-life experiments are considered to be the core of the real-world laboratory approach (Schäpke et al. 2017: 3 with reference to WBGU 2014; Schneidewind 2014; De Flander et al. 2014; MWK 2013; Wagner and Grunwald 2015). The idea is to transfer the term "laboratory", as used in the natural sciences context, to the analysis of social and political processes in concrete social contexts (Schäpke et al. 2017: 4). According to the definition coined by Gross et al. (2005), a hybrid form of the experiment is associated with this term, ranging between the production and application of knowledge and situation-specific and controlled boundary conditions (Schneidewind 2014: 2). Conceptually, real-world laboratories therefore build on the 'experimental turn' in the social, economic and sustainability sciences, and are similar to other transdisciplinary and participatory research approaches such as transdisciplinary case studies, participatory action research, fieldwork, intervention research or transition research (Schäpke et al. 2017: 4, referring in each case to prominent representatives of the approaches named).
In the scientific landscape, the concept of the real-world laboratory is therefore easily expandable and currently formative. In practice, however, the term is rejected by some because it evokes associations that experiments are being performed on the participants (Grießhammer and Brohmann 2015: 22). The term "urban laboratory" proved to be useful for work in the "The re-productive town" project. This is because the term is used in the field of urban development, albeit with diverse and different meanings, such as for educational institutions with an experimental laboratory character.
In order to shed light on what characterises real-world laboratories that have been initiated and largely shaped by practitioner stakeholders, we refer below to the core characteristics listed by Parodi et al. (2016): research orientation, normativity, transdisciplinarity, transformativity, civil society orientation/participation, long-term nature and laboratory character (see Table 11.1). These core characteristics largely correspond to or overlap with characterisations proposed by other authors such as WBGU (2016), Schäpke et al. (2017), Defila and Di Giulio (2018a). We add another core feature-continuous processes of reflection and learning with regard to one's own research practice and social effect; these characterise the research process (e.g. Schäpke et al. 2018b;Schneidewind and Singer-Brodowski 2015).
Based on these core characteristics, an outline is given below of how real-world laboratories initiated and coordinated by EAA can be characterised, whether and how they differ from those real-world laboratories that are initiated and coordinated by stakeholders from science, whether they face challenges and, if so, what those challenges are. The definition of the relevant core characteristics is given in Table 11.1.
Regarding Research Orientation
In the Anhalt Real-World Laboratory 'as a whole', the EAA association offers interested researchers the region's problems concerning energy design, energy policy and energy management, some of which have already been formulated and structured; its contacts with regional practitioner stakeholders; and its expertise in the development of a sustainable regional energy system for transdisciplinary research. In this sense, EAA serves as an institution for sustainability and transformation research; application-oriented research is explicitly mentioned in the association's statutory objectives. However, the previous discussion does not treat it as strictly necessary for such real-world laboratory institutions to have a scientific character. Instead, the association focuses primarily on practical orientation and on the search for practicable approaches towards sustainable development in its areas of interest; in the process, analytical work and the consideration of internal scientific interests are accepted as prerequisites for joint research and development activities. This principle of strong practical orientation can be adapted accordingly to current circumstances and needs at the specific project level. One example is the urban laboratories in the BMBF joint research project entitled "The re-productive town"; these feature an explicit research orientation. The research questions were defined jointly by the scientific partners and the practice partners (co-design); they are accessible for scientific analysis and for practical changes for transformation towards sustainability.
Regarding Normativity
The normative orientation towards sustainability is one of the association's implicit statutory objectives and, as such, an implicit objective of the Anhalt Real-World Laboratory. This orientation is specified in the association's statutes through objectives such as contributing to environmental protection and climate action, and conserving the natural basis of life.
In urban laboratories, the normative assumptions, principles and objectives regarding the reference to the concept of re-productivity by Biesecker and Hofmeister (2006) are made explicit. In this context, the following insight was gained from ongoing work: scientific partners proceed in accordance with elaborated sustainability concepts in real-world laboratories, and it would be helpful for companies, civil society organisations, other institutions and local authorities to use or develop concrete tools in the practical implementation of real-world laboratories. Examples that would make the integrative concept of sustainability tangible for practitioners in the process include environmental protection concepts, corporate social responsibility standards, a local climate action plan, the European Energy Award and other quality management systems. It may also be helpful in this context to adapt for practical use scientific-theoretical sustainability approaches such as the concept of re-productivity in an intermediate step, and to prepare such approaches for practical implementation in the real world (Yildiz et al. 2012).
Regarding Transformativity
In the Anhalt Real-World Laboratory, the EAA association focuses primarily on the shaping of society in terms of the energy transition, mainly by way of local experimental contributions. To do this, EAA draws on findings resulting from sustainability research. The work of EAA addresses two aspects of real-world laboratories: first, the Anhalt Real-World Laboratory sees itself as an element of various niche experiments embedded in structuring processes somewhere between the niche level and the regime level. Second, in line with its objective, the work performed by the association should help further develop transformative sciences by portraying and investigating the abstract format of transformation in the real-world laboratory as a physical environment.
Owing to their origins, urban laboratories are likewise primarily practice-oriented. Potential changes in the practices of resource utilisation by municipal stakeholders play a key role in the selection of neighbourhoods, and therefore also in the constitution of the problem, the institutions and people involved, and the methods and intensity of participation. This framework also yields a wide range of options for sustainability research and scientific evidence (e.g. the methodological operationalisation of the characteristics of re-productivity in criteria for assessing technical and socio-economic solutions) (Schön et al. 2013).
The strong practical orientation in the EAA real-world laboratories necessitates a careful reflection and evaluation of the approaches taken so as to be able to make statements on the effect of interventions and on the course of transformation processes in real-world laboratories. The small scale and reach of the measures that can be implemented concerning sustainable urban development generally make it difficult to formulate transferable results, which is currently being hotly discussed as a general phenomenon of real-world laboratories. 5 One cannot help but suspect that a specific contribution to resource efficiency or a viable use of renewable energies in a certain neighbourhood arises more by accident than by design due to a certain constellation of problems and stakeholders. It is then impossible to repeat such a success at other locations. The result is that strong practical orientation represents a restriction, especially for scientific partners. Then again, precisely these small-scale changes can occur and be documented in the Anhalt Real-World Laboratory. These are the small steps that represent the details of social transformation, which is ultimately of greater significance from a practical point of view.
Regarding Civil Society Orientation, Participation
This characteristic exhibits the biggest difference between the state of the scientific discussion and the approaches taken by the EAA real-world laboratories. The drive to initiate the Anhalt Real-World Laboratory came from civil society, which involved scientists in the project as strong partners. The initiative for the BMBF joint research project and its urban laboratories also came from EAA. As such, the direction of activity is opposite to the characteristic portrayed in the literature. The idea for a scientifically supported, experimental transformation of the regional energy system arose from the realisation that the special constellation of stakeholders seeking change, together with the decisive issue of a regional energy supply shaped by and economically involving the broader population, presented an opportunity for innovative action. Although science, lobby institutions and financial backers were involved in the establishment of the Anhalt Real-World Laboratory from the very beginning, the format can be described as a laboratory initiated by practitioner stakeholders, because:
• Practitioner stakeholders from the Anhalt region raised the issue of the regional energy transition, and had already addressed this issue with their own commitment using the resources available to them for more than three years,
• The establishment of the real-world laboratory was only conceivable and feasible due to the active work of key regional stakeholders seeking to change the existing system, and
• It was only possible to address additional practitioner stakeholders with success because of the trusting relationships among regional stakeholders that had been in existence for several years.
Against this backdrop, an important finding that is a compelling case for the establishment and long-term operation of real-world laboratories by practitioner stakeholders is the fact that it takes a long time to establish successful participatory constellations. This longer-term option is missing in urban laboratories (see the long-term nature criterion). To achieve effective cooperation in the transformation of society, all stakeholders must also act proactively so as to position their issues and other concerns. After all, a form of cooperation that always expects the drive and organisation to come from the same partner will soon show signs of fatigue. It is clear that the involvement of local stakeholders remains a challenge, even if the real-world laboratory is initiated by practice partners. Even if a region has activists who are interested in transformative research, this does not mean that all of the stakeholders needed to tackle a specific issue are willing to get involved. At best, the initiating practice partner will be powerful, influential and well networked, enabling it to organise the constructive participation of the necessary stakeholders.
Regarding the Long-Term Nature and Laboratory Character
The Anhalt Real-World Laboratory is designed for the long term. The association seeks to establish transdisciplinary infrastructure with adequate physical and personnel resources (laboratory character criterion) to be able to ensure the best and most stable possible conditions for experimental research and observation in complex real-world contexts (see Parodi et al. 2016). In contrast, urban laboratories are based on a three-year time frame and are project-related, even though their longer-term and autonomous establishment would ideally be desired. The availability of sufficient resources is a prerequisite for this. However, the non-profit association has very limited resources. One possibility would be to raise funds by providing services, but this would imply a change of role to that of a market economy stakeholder like an energy agency, planning office or consulting agency. Yet if EAA represents its own business interests, it runs the risk of losing credibility with regard to the handling of sensitive data. This is likely to affect the trust required to acquire regional cooperation partners for transformative research, and the quasi-public role of mediating between possibly competing partners could only be played to a very limited extent in the best case (Yildiz and Schön 2014).
As a result, in order to maintain transdisciplinary infrastructure in the long run, other ways of obtaining sustainable funding must be found by EAA in this specific case and by other practitioner stakeholders seeking to establish real-world laboratories. One option could be a system of mixed financing, comprising continuous funding from state and local resources, together with the acquisition of external funding for research to ensure the independence and impartiality of the real-world laboratory. In this way, the Anhalt Real-World Laboratory could be stabilised as an independent sponsor of transformative research and regional development, akin to an (economic) development agency. With regard to ensuring sustainable infrastructure, the challenge is principally to ensure continuous work processes. This is not possible in the case of project funding alone. After all, funding shortfalls will inevitably occur between one funding project and the next, even when the follow-up project ideally starts straight after the first. Unlike research institutions, which are equipped with basic funding, practitioner stakeholders are particularly affected by such shortfalls. Moreover, subsequent funding is uncertain, and the content can only tie in with previous funding to a limited extent, because support programmes are usually themed. Besides the (political) will to establish experimental spaces and to actively co-create them, the funding issue therefore becomes a key issue for the establishment of longer-term, viable infrastructures for real-world laboratories (see Kanning and Scurrell 2018).
Regarding Continuous Processes of Reflection and Learning
To assess the processes of reflection and learning, emphasis is placed below on the level of interdisciplinary and transdisciplinary cooperation (Singer-Brodowski et al. 2018) as well as the associated role of accompanying research (Defila and Di Giulio 2018b). When it established the Anhalt Real-World Laboratory, EAA already made provision for accompanying research. The discussions about the topic proved to be difficult. This was because partners with previous experience in transdisciplinary research expected to be closely involved in the real-world laboratory, while the majority of partners assumed that traditional observational research would be conducted. The accompanying research was thus established in the context of a Ph.D. project at the Berlin Social Science Center (WZB), financed by the real-world laboratory. In the light of the findings on relations between researchers, accompanying researchers and financial backers presented in the meantime by Defila and Di Giulio (2018b), it is now possible to make a more detailed assessment of this issue.
The existing accompanying research in the Anhalt Real-World Laboratory is indeed geared towards producing knowledge on the processes that take place in the real-world laboratory. According to Defila and Di Giulio (2018b), however, the relation to individual projects in the real-world laboratory can be described as a relation to the "object of research" that is characterised by dependence and an unequal distribution of power. This relation to the object is very much apparent in the real-world laboratory. The strong substantive involvement of the association's main financial backer gives it access to information about the individual projects. What is more, in addition to the geographical proximity of the accompanying research to the association's sponsors (both from outside the region), the financial backer's interests are close to those of the research institution, namely the effectiveness of energy policy and the national recognition of achievements. As such, the tensions described by Defila and Di Giulio (2018b) do not occur in the sponsor's relation to accompanying research, but there are tensions in both their relations to regional stakeholders. There is a realisation that the establishment of the Anhalt Real-World Laboratory, driven by practitioner stakeholders, could well have benefited from accompaniment experienced in transdisciplinary research in order to cope with integrating the different bases and forms of knowledge.
In the BMBF research project entitled "The re-productive town", within which urban laboratories are initiated and developed, this was achieved by contracting out support in the experience process, the knowledge process and the process of transferring results-although it was not possible to describe this that clearly at the time of the application. The knowledge of experienced transdisciplinary researchers is necessary to integrate knowledge bases from practice and science; to produce transferable knowledge; and, not least, to ensure the continuous self-reflection of different, sometimes changing roles in the transdisciplinary learning process. Ideally, such knowledge should be involved as early as at the stage of conceptually designing real-world laboratories. In this case, scientific accompaniment by a neutral moderator such as sustainify GmbH proved to be successful in the joint research project, ensuring the integration of different bases and forms of knowledge as well as the self-reflection of the practice and scientific partners involved. This insight is consistent with the recommendations already made by Parodi et al. to ensure "co-created accompaniment that supports real-world laboratories in a cooperative, advisory manner" [own translation] (2018: 179).
A Summary Critical Appraisal and Outlook
Real-world laboratories are a relatively young and yet highly diverse format that is interpreted and shaped in a strongly divergent manner by practitioners in some cases. The debate has only just begun and is still being shaped. In principle, many of the characteristics of real-world laboratories discussed are not new for sustainable land management, such as the development of common problem definitions and solutions (co-design, co-production), as is often the case especially in informal processes of sustainable urban and regional developments. What is more, knowledge on participation in planning processes also virtually serves as a role model for real-world laboratories (Eckart et al. 2018: 131ff;Kanning 2018). As such, real-world laboratory formats are compatible with sustainable land management, and also offer added value. Especially real-world laboratory formats that are initiated and coordinated by practitioner stakeholders offer specific implementation potential and, at the same time, are faced with particular challenges.
In our opinion, the direct and explicit integration of objectives for practice and research associated with the real-world laboratory format (Defila and Di Giulio 2018c: 40) represents particular added value over common participatory land management processes. In real-world laboratories, all of the stakeholders involved, whether practitioners or scientists, are considered to be "researchers" [own translation] (Eckart et al. 2018: 105f) who jointly define the solution to the problem and produce new knowledge (co-design, co-production), integrating different specialist disciplines as well as science and practice. In contrast to the planning and development approaches established in land management, an extended self-conception can be identified that could help bridge the oft-criticised gap between theory and practice (e.g. Lamker et al. 2017). In real-world laboratories, the transformation approach is oriented to radical innovations and change processes towards sustainability in a much more proactive manner than is often the case to date in sustainable urban and regional development processes (see Heyen et al. 2018: 26). Where the principle of sustainable development is reflected critically and understood integratively in correspondence with the state of scientific knowledge in real-world laboratories, this goes beyond the current prevailing understanding of sustainability in land management. The latter focuses primarily on the safeguarding or creation of ecological qualities, and pays little attention to the original core of the idea of sustainability, i.e. the transformation of social, economic action in a social-ecological direction (see Kanning 2005;Hofmeister 2014). In this connection, it would also be necessary to include in the discussion the generally inherent, unquestioned concept of material growth (see Fröhlich and Gerhard 2017: 28ff). As such, the real-world laboratory format-in line with the design currently featuring strongly in the scientific discourse-could help establish experimental spaces for radical innovations in which the various areas of expertise in transformation and planning (science) are brought together for sustainable land management and, ideally, linked to educational objectives (see Beecroft et al. 2018: 78).
Real-world laboratories that are initiated and coordinated by practitioner stakeholders also offer special deployment potential due to their practical approach, and at the same time face special challenges. On that point, a number of insights and hypotheses can be summarised from EAA's experiences and discussions for further scientific discourse and practical development: Practitioner stakeholders must satisfy various conditions and have certain skills to be able to initiate real-world laboratories. Among other things, they must be capable of organising a research alliance; making their results publicly accessible; and participating in scientific discourse. They must also either have their own financial resources for conducting research or at least have a strong position in the relevant stakeholder network, enabling them to generate the financial resources needed to operate a real-world laboratory.
Scientific and practitioner stakeholders often face the same challenges when establishing the research process, because the interests of many stakeholders must be accommodated when it comes to complex transformation processes. There is always a need to formulate issues in a practically relevant as well as scientifically interesting and challenging manner at the constituent stage of the project, irrespective of the real-world laboratory initiator's institutional background. It is only the weighting of the practical and scientific relevance that may vary to a certain extent. The good position of a research-affine practitioner stakeholder may make it easier to develop stable stakeholder networks for the purpose of achieving cooperation among the relevant practitioner stakeholders, but the various aspects of effective participation must be borne in mind nonetheless. On the other hand, practitioner stakeholders find it particularly challenging to find scientific partners from several disciplines who go along with a joint problem definition and who are not only interested in obtaining data or conducting purely scientific experiments. In addition, social or economic practitioners who have initiated real-world laboratories tend to be challenged more by the need for experience in methods of knowledge integration and modelling for the purpose of transferring results. One solution for this may be to seek support from experienced transdisciplinary researchers and to involve these experts in the conceptual design stage of the real-world laboratory.
Based on these findings, real-world laboratories led by practitioner stakeholders offer particularly favourable conditions when they are backed by local authorities or public bodies. Strong local governments have excellent links; they know the stakeholders' interests; they have experience in planning participatory processes, which can be largely transferred to real-world laboratories (Eckart et al. 2018: 131ff); they can perpetuate transdisciplinary research, ensuring continuity and, on this basis, learning processes. However, small and medium-sized towns, and towns undergoing socio-economic structural change that feature disproportionate demographics are often under financial supervision and rarely have the human resources capacity to be able to undertake the research that is urgently required for their strategic realignment. Such local authorities therefore tend to be unable to support real-world laboratories, which means that they are only rarely able to incorporate their particular problems in research projects. Consequently, support structures are required to make real-world laboratories accessible to all local authority types.
Against this backdrop, real-world laboratories should not only be financed by research funding in the future, but at least in equal parts by structural support from the relevant ministries (e.g. the Ministry of Energy and/or the Ministry for Economic Affairs). After all, besides producing effects in research and science, real-world laboratories (are supposed to) actively drive forward transformation towards a sustainable society (see Kanning and Scurrell 2018).
Several recent changes in science and structural policy will improve the conditions for real-world laboratories initiated by practitioner stakeholders in future. These include a greater shift towards citizen science, including its transformation towards more complex civic research beyond mere data collection. Citizens are more frequently involved in the formulation of research issues, and the definition of criteria for data collection and analysis. Various institutions besides universities and research institutes give citizens the possibility to participate in research. The "Green Paper Citizen Sciences Strategy 2020 for Germany", published in 2016, provides guidance on activating citizen science for the purpose of transformation that is appreciated, acknowledged and embraced by society and science alike (Bonn et al. 2016: 25). From the perspective of practitioner stakeholders, it is important in future to create stronger links between citizen science formats, ranging from data collection to active co-design and active co-production (ibid.), and the original real-world laboratory format developed by science, creating synergies. In addition, since the 2017 Bundestag elections at the latest, greater attention is being paid in structural policy to the development of rural regions and their small and medium-sized towns. If such attention can also be translated into supporting measures for the sustainable development of rural areas, there will be new financial leeway for real-world laboratories, which can be established and used as experimental spaces for sustainable land management.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Retrieving Tarnished Daguerreotype Content Using X-ray Fluorescence Imaging—Recent Observations on the Effect of Chemical and Electrochemical Cleaning Methods
Abstract: We report a study on the effect of chemical and electrochemical cleaning of tarnished daguerreotypes observed using X-ray fluorescence (XRF) microscopy with a micro-focussed X-ray beam from a synchrotron source. It has been found that, while both techniques result in some success depending on the condition of the plate and the experimental parameters (chemical concentration, voltage, current, etc.), the effect varies, and cleaning is often incomplete. The XRF images using Hg Lα,β at an excitation energy just above the L3 edge threshold produce fine images, regardless of the treatment. This finding confirms previous observations that if the bulk of the image particles remains intact, the surface tarnish has little effect on the quality of the original daguerreotype image retrievable from XRF.
The Daguerreotype
Daguerreotypes are the earliest form of photography, produced on a silver-coated copper plate, and were used predominantly in the mid to late 1800s [1,2]. The process of creating a daguerreotype was developed by Louis-Jacques-Mandé Daguerre and perfected in 1839. As creating daguerreotypes was a lengthy and costly practice, only people of high status could afford to have their photo taken this way [3,4]. Nevertheless, daguerreotypes offer snapshots, literally, for the very first time, into this era of human activities, and are of artistic, cultural, and historical significance. Efforts to preserve and restore these images have aroused considerable interest in the art preservation community [5].
The production of a daguerreotype image requires several steps. The result is a high-contrast, one-of-a-kind photograph [1][2][3][4]. The process begins with making a finely polished silver-coated copper plate. This is followed by the exposure of the surface to iodine, making the plate photosensitive upon the formation of silver iodide. Later variations utilized alternative halogens, chlorine, bromine, or a combination of these, in order to increase the sensitivity of the surface to light. The photosensitive plate is mounted in the lightproof interior of a camera. When the photo to be taken is appropriately framed, the lens cap is removed, exposing the plate to light. This step causes the formation of silver image particles that result from the photolysis of the silver halide on the silver surface, creating the image. Areas with dense distributions of image particles of a rather consistent shape and size correspond to a high light intensity, whereas areas exposed to a low light intensity display thin and nonuniform image particles. Bright regions are the result of the particles scattering light, and where there are few image particles, only light from specular reflection can be seen. This is why the light and dark areas can change if the daguerreotype is tilted [1]. After the image has formed on the surface, the plate is then exposed to mercury vapor, which fixes the image on the silver-coated copper plate. This is the crucial step in the entire process and, as we shall show below, the presence of Hg on the image particles allows us to retrieve the fine image details from a tarnished daguerreotype. Excess halides are removed with a salt solution, such as sodium thiosulfate. This makes the surface insensitive to light, and halts the creation of additional image particles, which could cloud the image [1,2]. Next, the silver-coated copper plate is washed with distilled water, and a gold chloride solution is poured on the daguerreotype to ensure the longevity and durability of the image [1]. Note, the addition of gold chloride, known as the gilding step, was later introduced into the final stages of the daguerreotype procedure. Finally, the plate is heated to dry the surface. This process produces a completed daguerreotype image of the subject.
Deterioration of the Daguerreotype
Daguerreotypes are subject to the formation of surface tarnish, which, in the extreme, can completely obscure the image. Surface corrosion will alter the shape and refractivity of the image particles, and hence the direction and intensity of the scattered light. The most frequent tarnishes are silver halides, silver oxides, and sulfur compounds originating from incomplete washing during the original preparation, exposure to atmospheric gases, and deterioration of the cover glass [5,6]. Studies have also focused in great detail on the effects of the daguerreotype storage and exposure conditions. Extreme temperatures and/or humidity can have a negative influence on the integrity of the surface. When a daguerreotype is stored, a glass cover is usually present on the image side of the surface; however, deterioration of the glass cover can be another factor contributing to the degradation of the plate [7]. Many original glass cover plates contained sodium and potassium, which have been shown, over time, to diffuse from the glass and leave deposits on the daguerreotype surface. This leads to highly localized spots of corrosion across the daguerreotype. In addition, the glue that was used to adhere the glass cover to the plate also contributed to corrosion at the perimeter of the daguerreotypes [8,9]. Compounds such as oxides and various sulfides may have been formed because of this practice [10].
Conservation and Preservation Methods
Daguerreotypes are fragile, and Daguerre himself recommended that the plate be protected with a cover glass. About 20 years after their invention, the commercial production of daguerreotypes ceased; there was little effort to preserve them and they became collectors' items. It was not until the twentieth century that archives and art institutions began to collect and preserve daguerreotypes [1]. Many preservation and restoration processes have been tried with varying success. As a result of the variation in the methods of preparation, as well as diverse storage conditions, many daguerreotypes have unique damage requiring tailored cleaning methods. As a result, there is not a single method guaranteed to restore these images. While progress has been made towards a more universal cleaning procedure, these procedures completely depend on the original quality of the surface [5]. There are two general cleaning techniques: chemical cleaning and electrocleaning [1,2]. There are also two common electrocleaning methods, sometimes referred to as the Wei method and the Barger method [11,12]. The Wei method simply applies a cathodic polarization to the daguerreotype plate in a cleaning solution [11]. The Barger method applies both oxidizing and reducing polarizations to induce anodic and cathodic currents on the daguerreotype, switching between the two throughout the process. It is hoped that by manipulating the surface chemistry, the tarnish will be removed, while the image particles (nanoparticles of Ag coated with Hg, forming an amalgam) will remain intact. Various studies have proven that both methods can help restore the daguerreotype image to some extent. However, as each daguerreotype is unique in terms of the elemental composition of the tarnish and how deteriorated it is, the methods are not always effective. In some cases, electrocleaning treatments have further damaged the daguerreotype. To remove the tarnish from the surface with electrocleaning methods, adjustments of the potential are made. This potential is applied to the daguerreotype surface and monitored against a reference electrode, as described below.
XRF Imaging Using a Micro Focused X-ray Beam
It was recently reported that using a micro-focused X-ray beam from a synchrotron light source, selecting the excitation energy to just above the L3 absorption edge of Hg, and tracking the intensity of the Hg Lα and Lβ lines in a two-dimensional scan across the daguerreotype plate could retrieve the original image from daguerreotypes tarnished beyond recognition [7]. These results show that it is the integrity of the silver image particles, formed upon the photochemical reaction of the photosensitizer (silver halide) when exposed to the object and the subsequent exposure to hot Hg vapor, that determines the quality of the daguerreotype [1,2]. Thus, Hg vapor stabilizes the image particles of silver clusters, forming an amalgam that defines the image, and the image particles are preserved by the presence of Hg. If surface reactions or adventitious contaminants, such as the organic molecules that tarnish the plate, only affect the surface and the near-surface region (on the order of nanometres), this would have very little effect on the image retrieved from Hg Lα fluorescence X-rays. This is also why chemical and electrocleaning would work if the bulk of the image particles remained largely undisturbed after cleaning. The objective of this work is to proceed with chemical cleaning and electrocleaning methods under various conditions on a single daguerreotype plate, and to then conduct Hg XRF imaging to further confirm this notion [3,7].
The Daguerreotype Plate and Cleaning Solutions
The daguerreotype plate under investigation, "Little Girl, Pretty Purse", was provided by the Canadian Conservation Institute, National Gallery of Canada (Ottawa, Canada). The reagents used for the chemical cleaning portion included reagent grade ammonium hydroxide (NH4OH, Caledon Laboratory Chemicals) and reagent grade sodium thiosulfate (Na2S2O3, 99% assay, Sigma Aldrich). Solutions containing specific reagent concentrations were prepared, including a sodium thiosulfate solution of 3% mass/volume (0.190 M) and 1% and 0.5% mass/volume ammonium hydroxide solutions (0.294 M and 0.147 M, respectively). Additionally, 0.01 M and 0.1 M reagent grade potassium chloride (KCl, BioShop) and 0.01 M reagent grade potassium sulfate (K2SO4, 99% assay, Caledon Laboratory Chemicals) solutions were created for the electrochemical cleaning process. All of the dilution schemes and rinsing processes were done with Type 1 water (18.2 MΩ·cm resistivity).
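For readers who wish to reproduce the percent-to-molarity conversion implied by these figures, the relation is simply grams per litre divided by molar mass. The short Python sketch below is illustrative only; the molar mass is the standard value for anhydrous Na2S2O3, and the calculation reproduces the 0.190 M value quoted above (the ammonium hydroxide molarities follow the same relation under the original preparers' assumptions).

```python
def molarity_from_percent_mv(percent_mv, molar_mass_g_per_mol):
    """Convert a % mass/volume concentration (g per 100 mL) to mol/L."""
    grams_per_litre = percent_mv * 10.0          # 1% m/v = 10 g/L
    return grams_per_litre / molar_mass_g_per_mol

# Sodium thiosulfate, anhydrous Na2S2O3, molar mass ~158.11 g/mol
print(round(molarity_from_percent_mv(3.0, 158.11), 3))   # ~0.19 M, matching the text
```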
Chemical and Electrochemical Solution Cell Fabrication
To clean the daguerreotype, two cells were designed using Autodesk Inventor Professional 2020, and were 3D printed from UV-cured resin (ANYCUBIC Photon UV 3D Printer, ANYCUBIC 405 nm Resin Green) for chemical and electrochemical cleaning. These cells were designed to be clamped against the face of the daguerreotype, compressing an O-Ring and creating a tight seal with the surface. Both designs exposed a small surface area of the sample surface to the solution, allowing for multiple reagents and techniques to be tested on localized regions of the same specimen. The chemical cleaning device was composed of three solution wells in proximity, allowing for the simultaneous testing of multiple solutions on a localized area. The electrochemical cell incorporated a much larger solution volume, accommodating a three-electrode electrochemical setup [13]. These arrangements are displayed in Figure S1.
Chemical Cleaning and Electrocleaning
The chemical cleaning cell was clamped onto the daguerreotype surface, as shown in Figure S1c. Each of the three wells had a different solution pipetted into them and were left for 1 h. After that, the solutions were removed, and each well was gently rinsed with Type 1 water. The daguerreotype was then rinsed and patted dry with Kimwipes. Each site was then inspected at Surface Science Western with a VHX-6000 optical microscope (Keyence), as well as VP-SEM (Hitachi SU3900) and EDX (Oxford Instruments Ultim Max 65).
Electrocleaning and measurements were performed with a Solartron 1287 potentiostat on the daguerreotype plate. A three-electrode setup was used, as shown in Figure S1a,b, where the daguerreotype was the working electrode, Ag/AgCl was the reference electrode, and platinum foil was the counter electrode. The solution cell was clamped onto the daguerreotype, then filled with one of the several electrolytes being studied, and a 5 min open circuit potential (OCP) measurement was performed. The OCP measurement determines the resting potential of the working electrode. This was followed by an electrolytic cleaning step using the Wei method [11] or the Barger method [12], described as follows.
In the Wei method, we applied a constant cathodic polarization (constant negative potential) for 90 s, as seen in the potential versus time graph (Figure 1, top-left). In the corresponding current versus time graph (Figure 1, top-right), a typical current response is shown; the current started at a rather negative value, while reducible species were abundant on the surface, and then gradually approached zero, suggesting that the oxidized surface species were becoming depleted as the cleaning procedure progressed.
In the Barger method, we applied a modified version in which the applied potential was alternated between anodic and cathodic polarizations in 2-s intervals for 80 s, followed by a 10-s cathodic cleaning step, as seen in the potential versus time graph (Figure 1, bottom-left). The corresponding current versus time graph (Figure 1, bottom-right) does not show the same approach to zero current seen in the Wei method, for several reasons. First, the electrode was not given enough time during any of the cathodic stages to achieve a steady state. Second, the anodic phase preceding each cathodic phase of the oscillation generated more oxidized species [7] for reduction during the subsequent cathodic phase. Finally, the current during the anodic phase should never be eliminated, because it could correspond to the oxidation of the Ag that makes up the bulk of the daguerreotype. Following the application of one of these cleaning profiles, the electrolyte solution was immediately removed, and the daguerreotype surface was gently flushed with Type 1 water. Each site was then analyzed optically, as well as by VP-SEM and EDX.
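To make the two polarization programmes concrete, the following Python sketch generates the applied-potential waveforms as described above: a constant cathodic hold at −0.9 V for 90 s (Wei-type step, using the potential reported later in this paper), and ±0.9 V alternating in 2-s intervals for 80 s followed by a 10-s cathodic hold (modified Barger-type step). It is a minimal illustration of the potential programmes only; it does not drive a potentiostat, and the 0.1 s time base is an arbitrary assumption.

```python
import numpy as np

def wei_profile(v_cathodic=-0.9, duration_s=90.0, dt=0.1):
    """Constant cathodic polarization (Wei-type step), as described in the text."""
    t = np.arange(0.0, duration_s, dt)
    return t, np.full_like(t, v_cathodic)

def barger_profile(v_amp=0.9, period_s=2.0, alternating_s=80.0,
                   final_cathodic_s=10.0, dt=0.1):
    """Alternating anodic/cathodic polarization followed by a cathodic hold
    (modified Barger-type step, as described in the text)."""
    t = np.arange(0.0, alternating_s + final_cathodic_s, dt)
    v = np.where(t < alternating_s,
                 # +v_amp for the first 2 s, -v_amp for the next 2 s, and so on
                 np.where((t // period_s) % 2 == 0, v_amp, -v_amp),
                 -v_amp)                      # final 10 s cathodic cleaning step
    return t, v

t_wei, v_wei = wei_profile()
t_barger, v_barger = barger_profile()
print(v_wei.min(), v_wei.max())     # -0.9 -0.9: constant reduction of the surface
print(v_barger[:20].mean())         # first 2 s: anodic (+0.9 V) phase
```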
SEM and EDX Characterization
The morphology and elemental distribution of the plate before and after the cleaning were examined with SEM and EDX, respectively [14]. A Hitachi SU3900 Large Chamber Variable Pressure SEM combined with an Oxford ULTIM MAX 65 SDD X-ray analyzer was used. High-resolution (up to 100,000× magnification) [15] FE-SEM imaging was performed using a Hitachi SU8230 Regulus Ultra High-Resolution Field Emission SEM. Selected areas on the daguerreotype were imaged using FE-SEM (image resolution of 0.6 nm at 15 kV acceleration or 0.8 nm at 1 kV acceleration).
XRF Imaging Using Synchrotron Radiation
XRF images of the plate were recorded at the microprobe station at CLS@APS at the ID beamline of sector 20 of the Advanced Photon Source at Argonne National Laboratory [16]. The ID line was equipped with a Si(111) double crystal monochromator and a KB mirror capable of routinely focussing the X-ray beam down to 5 micrometres. We used an excitation energy of 13 keV; this energy is just above the Hg L3 (12,284 eV) and Au L3 (11,919 eV) edges, producing the Hg Lα1,2 (9989 eV and 9898 eV) and Au Lα1,2 (9713 eV and 9628 eV) X-ray fluorescence lines, as well as other lines of interest, e.g., the Kα lines of first-row transition elements [17]. The incident beam was tuned to a spot size of ~50 µm to optimize the data acquisition efficiency. The experiment was conducted when APS was running in top-up mode, 24 bunches with a total current of 100 mA. This mode, together with the incident focussed beam (I0) being monitored with an ionization chamber, ensured beam stability and proper normalization, which is essential as it normally takes several hours to scan the entire plate. In this run, the photon flux was approximately 10^12 photons per second over a spot size of ~50 µm with a step size of 50 µm. The illumination time was 50 ms per pixel, the map comprised 881 × 1061 pixels, and the total scan time was 13 h 24 min and 23 s. It should be noted that synchrotron XRF imaging has been widely used, and the scope of its application can be found in a recent contribution [18].
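As a rough sanity check on these scan parameters, the per-pixel dwell time alone accounts for most, but not all, of the reported total scan time; the remainder is plausibly stage motion and detector readout overhead (an assumption, not stated in the text). A quick back-of-the-envelope calculation in Python:

```python
pixels = 881 * 1061                          # map size reported above
dwell_s = 0.050                              # 50 ms illumination per pixel
dwell_total = pixels * dwell_s
reported_total = 13 * 3600 + 24 * 60 + 23    # 13 h 24 min 23 s in seconds
print(dwell_total, reported_total, reported_total - dwell_total)
# ~46,737 s of dwell vs 48,263 s reported -> ~1,526 s (~25 min) of overhead
```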
The daguerreotype plate was mounted on a three-axis platform, and the scanning was done by moving the plate across the beam pixel by pixel. The XRF image was obtained by setting the energy window of interest, e.g., Hg Lα and Au Lα, in the X-ray fluorescence spectra collected by a four-element Vortex-Me4 solid-state detector (~250 eV energy resolution) and stored in a multichannel analyser (MCA). The experimental set-up is shown in Figure S2, and a snapshot of the MCA display is shown in Figure S3, which illustrates the fitting of the overlapping fluorescence X-ray lines, from which a more accurate elemental distribution and better contrast were obtained.
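The per-pixel mapping described here amounts to summing the counts inside an energy window of the MCA spectrum at each scan position and normalizing to the monitored incident flux I0. The following minimal Python sketch illustrates this bookkeeping; the array layout, the linear energy calibration, and all numerical values are illustrative assumptions, not the actual beamline data format.

```python
import numpy as np

def roi_map(spectra, i0, e_lo, e_hi, e_per_channel=10.0, e_offset=0.0):
    """Build an elemental map from a stack of per-pixel MCA spectra.

    spectra : array (ny, nx, n_channels) of counts per pixel
    i0      : array (ny, nx) of incident-flux monitor readings (ion chamber)
    e_lo, e_hi : energy window of interest in eV (e.g. around Hg La)
    """
    channels = np.arange(spectra.shape[-1])
    energies = e_offset + e_per_channel * channels     # simple linear calibration
    window = (energies >= e_lo) & (energies <= e_hi)
    counts = spectra[..., window].sum(axis=-1)         # summed counts in the ROI
    return counts / i0                                 # normalize to incident flux

# Illustrative use with random data standing in for the 881 x 1061 pixel scan
rng = np.random.default_rng(1)
spectra = rng.poisson(2.0, size=(50, 60, 2048)).astype(float)
i0 = np.full((50, 60), 1.0)
hg_la_map = roi_map(spectra, i0, e_lo=9850.0, e_hi=10100.0)
print(hg_la_map.shape)   # (50, 60)
```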
Results
The optical images of the "Little Girl, Pretty Purse" daguerreotype before cleaning and after cleaning, as well as the XRF image from collecting Hg Lα fluorescence X-rays, are shown in Figure 2. It is apparent from Figure 2 that the chemical and electrochemical cleaning methods are generally effective. We will inspect several selected areas and discuss the effects below.
Chemical Cleaning
Let us examine the three different sites cleaned using the three-well cell assembly (small oval marks) on the left of Figure 2b, each with its own solution, namely: 3% sodium thiosulfate, 1% ammonium hydroxide, and 0.5% ammonium hydroxide. An ammonium hydroxide solution was chosen as it removes halides from the surface by forming soluble ammonia silver complexes, and sodium thiosulfate was historically used in the production of daguerreotypes to remove silver-halides from the material surface. As the haze on a daguerreotype surface is normally attributed to halide formation, sodium thiosulfate would also be a good chemical cleaning method in addition to ammonium hydroxide [2]. The results of the chemical cleaning are closely examined in Figure 3.
From the left panel of Figure 3, we see that all cleaning solutions were successful in removing the foggy haze from the surface. The 3% Na2S2O3 solution produced a noticeable reduction in surface clouding (Figure 3A,B). A similar increase in sample clarity was observed with the application of 0.5% NH4OH (Figure 3C,D). The 1% NH4OH (Figure 3F) uncovered the masked floral print with fine details and good contrast. EDX maps (Figure S4) revealed the presence of Ag, Au, Hg, S, and Cl, as well as Hg-coated image particles slightly less than a micrometre in size. After cleaning, the image particles remained intact, while the Cl and S signals were reduced.
Electrochemical Cleaning with Cathodic Method
A preliminary testing with silver-coated copper wires was performed to ensure that the applied potentials would not damage the daguerreotype. Although previous studies employed much higher potentials in their electrochemical cleaning procedures, our initial testing indicated that a potential of −0.9 V would be sufficient to cathodically clean the surface (the Wei method) [11] without causing any noticeable surface damage.
To test the effectiveness of the cathodic electrochemical cleaning methods, we used different electrolytes that were applied to selected sites, as illustrated in Figure 4, where the optical images before and after cleaning are shown. The Wei method involved the application of a constant negative potential (Figure 1) to the daguerreotype plate, therefore constantly reducing the surface. The electrolytes were selected such that they had a negligible chemical cleaning effect: 0.01 M and 0.1 M KCl, and 0.01 M K2SO4 solutions were used. All of the electrochemically cleaned sites showed an optically improved image (Figure 4B,D,F, left panel). Most remarkably, the hands (Figure 4E,F, left panel) revealed greater details than before any cleaning attempts.
It should be noted, however, that cleaning with a KCl solution (0.1 M) introduced Cl⁻ ions into the solution. This site showed a greater amount of residual white haze (formation of AgCl) following cleaning than the site cleaned with the lower-concentration KCl solution (0.01 M, Figure 4D, left panel). This was the result of the common ion effect reducing the solubility of AgCl. An improved image is observed in Figure 4B, left panel, where a 0.01 M K2SO4 solution was used. All three sites showed great improvement optically after only 90 s of cleaning. EDX maps were also obtained before and after cleaning, showing results like those obtained from the chemical cleaning experiment noted above.
Electrochemical Cleaning Using Chemical Cleaning Solutions
We have also explored the effect of using chemical cleaning solutions, such as NH4OH and Na2S2O3, as the electrolytes for the electrochemical cleaning procedures with both the Wei and Barger methods. In Figure 5, the optical images of the daguerreotype before and after the application of the combined electrochemical and chemical cleaning are shown. The improvement in the overall image quality when cleaning was performed with a solution of 0.15% NH4OH can clearly be seen. This procedure was also applied with a 0.19% Na2S2O3 solution, with similar results. Figure 5B,D, right panel, correspond to the Barger method with applied potential ranges of −0.9 V to 0.9 V and −1.2 V to 1.2 V versus Ag/AgCl, respectively. Again, all of the images after cleaning showed more detail than before.
For example, in Figure 5C, right panel, it is difficult to see what is present, but after cleaning it becomes apparent that there is a very intricate tablecloth (Figure 5D, right panel), where many details within it are now evident. According to the EDX maps, both the Barger and the Wei methods can remove both the halides and the sulfides present on the surface.
XRF Imaging
A closer look at Figure 2 shows the optical images before and after cleaning, as well as the Hg Lα image. It is apparent from Figure 2C that the XRF image has revealed a finely detailed portrait of the little girl and the pretty purse with a very clean background, as if all the tarnish were removed. The most noticeable difference is that, while the various cleaning methods applied to the plate show nonuniform cleaning in the optical image depending on the cleaning condition and the region of interest, the removal of tarnish from the daguerreotype appears complete in the XRF image, which reveals fine details everywhere across the portrait.
The XRF image can be fitted using the software package PyMCA [19,20], in which the X-ray fluorescence peaks are fitted and the area under each peak is the intensity contributing to the image. This procedure removes contributions from overlapping peaks, such as Au Lα in the case of Hg Lα. An XRF image can also be obtained using Hg Lβ, Au Lα, and Au Lβ. The images from the Hg L PyMCA fit, and from Hg Lβ and Au Lα without fitting, are shown in Figure 6. It is interesting to note that the image is also finely revealed in the Hg Lβ map and is noticeable in the Au map, albeit thinly veiled. The presence of a veiled Au image indicates that a gilding step using a gold chloride solution was applied after the image particles were formed and fixed when the plate was made, so that Au was found all over the plate, tracking how the gilding was done at the time. An image can still be observed from the Au fluorescence, suggesting that Au tracks the density and the distribution of the image particles, as well as the featureless region of the Ag plate.
When comparing the Hg Lα image in Figure 2C with the Hg L PyMCA and Lβ images (Figure 6), it appears that while the Lα and Lβ images without fitting were of equally good quality, the PyMCA image showed slightly better spatial resolution and contrast on closer scrutiny. This is because the Hg and Au Lα lines could not be completely resolved with the solid-state detector (SSD) without fitting. We also tracked the Cu Kα line, which did not show any image, as the signal came from the Cu plate and Cu was not involved in the formation of image particles. The Ag L emission was too weak at this excitation energy to be detected, and did not reveal any image either. It will be of interest to track Ag with tender X-ray excitation just above the Ag L3 edge (3351 eV) [3].
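To illustrate why fitting outperforms a simple energy window when two lines sit within the detector resolution, the sketch below (not the actual PyMCA code; peak positions and the ~250 eV resolution are taken from the text, all amplitudes are invented for illustration) fits two Gaussians to a synthetic Hg Lα1/Au Lα1 doublet and recovers the individual peak areas, which a naive region-of-interest sum cannot separate.

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA = 250.0 / 2.355   # ~250 eV FWHM detector resolution -> Gaussian sigma

def two_gauss(e, a_au, a_hg, bg):
    """Sum of Au La1 (9713 eV) and Hg La1 (9989 eV) peaks plus a flat background."""
    g = lambda e0, a: a * np.exp(-0.5 * ((e - e0) / SIGMA) ** 2)
    return g(9713.0, a_au) + g(9989.0, a_hg) + bg

# Synthetic per-pixel spectrum (illustrative amplitudes, not measured values)
energy = np.arange(9300.0, 10400.0, 10.0)          # 10 eV channels
rng = np.random.default_rng(0)
counts = rng.poisson(two_gauss(energy, a_au=40.0, a_hg=120.0, bg=5.0))

# Least-squares fit of the overlapping doublet
(a_au, a_hg, bg), _ = curve_fit(two_gauss, energy, counts, p0=[10.0, 10.0, 1.0])
area_hg_fit = a_hg * SIGMA * np.sqrt(2.0 * np.pi) / 10.0   # counts in 10 eV channels

# Naive ROI: sum counts in a window around Hg La1 (picks up the Au tail and background)
roi = (energy > 9989.0 - 2 * SIGMA) & (energy < 9989.0 + 2 * SIGMA)
area_hg_roi = counts[roi].sum()

print(f"fitted Hg La area: {area_hg_fit:.0f}, ROI sum: {area_hg_roi:.0f}")
```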
Conclusions
We conducted chemical and electrochemical cleaning procedures on various sites of a single, partially tarnished daguerreotype. We found that both methods of cleaning were effective at removing the tarnish and restoring the image, although, while all procedures improved the image to some degree, the tarnish was never removed uniformly or entirely. The chemical cleaning procedures were sufficient to remove the halides/white haze from the surface, yet optical images taken after cleaning still showed areas with a brown/orange tinge. Electrochemical cleaning was sufficient to remove the sulfides from the surface in addition to the halides, and did so faster. Both the Barger and the Wei electrocleaning methods improved the visual appearance of the image; again, they did not always remove the orange/brown tarnish colour from the surface. As noted in the introduction, daguerreotypes can vary significantly depending on when they were made, the methods and equipment of the artist who made them, and the conditions under which they were stored. By performing these experiments on small regions of the same daguerreotype, we tried to obtain the most consistent possible initial conditions for comparison. The effectiveness of the treatment will depend entirely on the original integrity of the daguerreotype; they are not uniform. This work confirms previous observations that the only sure way to retrieve the complete contents of a daguerreotype is through synchrotron radiation X-ray fluorescence imaging. This technique will ensure that events of historical significance recorded on a tarnished plate can be retrieved. With the XRF method, even if the daguerreotype is severely tarnished, provided there is still sufficient mercury in the image particles on the surface, the image in its entirety can still be reconstructed by digitizing the XRF maps. The cleaning methods have been shown to improve the image optically; nevertheless, cleaning should be undertaken with caution, because once the Hg is gone, the image will be lost forever. To refine the daguerreotype cleaning methods further, additional research would be needed to determine the detailed chemical composition of the substrate plate, the image particles, and the tarnish, and its interplay with the environment, such as the daguerreotype housing and the protecting glass.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/heritage4030089/s1. Figure S1: Experimental setup for electrochemical cleaning and chemical cleaning. (a) Schematic for the three-electrode set up for electrocleaning. (b) Actual set up for electrocleaning; the area of interest is confined by the perimeter of the cell, which leaves an oval mark on the plate after cleaning. The working electrode (daguerreotype), counter electrode (Pt), and reference electrode (Ag/AgCl) are noted. (c) Setup for chemical cleaning with the three-well cell clamped down on the plate. This setup leaves behind three small oval marks on the plate. (see Figure 2, middle panel and text). Figure S2: Experimental arrangement for the XRF imaging. The focussed beam (yellow line from left to right) with a spot size of 30 µm × 20 µm is stationary. The plate is mounted on a three-axis stage that moves the plate across the beam with submicrometre precision, pixel by pixel. The fluorescence X-rays are collected with a four element SSD (VotexME4). The data are stored in a multichannel analyzer (MCA). Desired energy windows are set to collect element-sensitive maps (see Figure S4). Figure S3: A snapshot of the MCA display during a scan (top); the abscissa is photon energy and the ordinate axis is intensity in a semi-log plot. The Hg L intensities fit using PyMCA are shown in the bottom panel (both L α and L β are used, black dotted curve). Figure S4: EDX maps of Ag, S, Au, Hg, and Cl, and the backscattered (BSE) SEM image (black and white) for the chemical cleaning solutions discussed in Figure 3A,B. A: before cleaning; B: after 3% Na 2 SO 3 . | 8,458.8 | 2021-08-05T00:00:00.000 | [
"Materials Science"
] |
Gravimetric geoid for Egypt based on proper application of Helmert's method of condensation with window remove-restore technique
This study uses the window method for merging the gravity field wavelengths inside the remove-restore technique (RRT) to obtain the gravimetric geoid for Egypt using Helmert's models of condensation. In this case, the window approach (Abd-Elmotaal and Kühtreiber, 2003) was utilized to avoid taking the topographic-condensation masses into account twice within the data window. A gravimetric geoid for Egypt has been calculated utilizing Helmert's first and second condensation approaches and the Airy-Heiskanen model. Within the context of the geoid computation, a thorough comparison between the various Helmert approaches and Airy-Heiskanen has been made. The comparison uses the estimated geoid indicators before and after fitting to the GPS/leveling geoid, as well as the residual gravity anomalies that remain after the removal step. The outcomes demonstrate that the window-technique reduced gravity anomalies are the smoothest, the least biased, and have the narrowest range among all the gravity reduction techniques used. In addition, the gravity anomalies reduced by Helmert's first condensation method are nearly identical to those reduced using the Airy-Heiskanen technique, and the geoids computed through Helmert's first condensation and the Airy-Heiskanen technique are identical. Furthermore, although the indirect effect is minimal in the case of Helmert's second condensation method, it is essential for reaching the 1 cm geoid precision. Finally, geoid undulations computed from Helmert's second condensation method are better than those calculated from the other two methods.
Introduction
Using gravimetric data as the boundary values and solving a geodetic boundary value problem (GBVP), such as the Stokes or Molodenskii problem, one can calculate the local geoid. Such a procedure is referred to as the gravimetric technique for local or global geoid determination. In practice, neither the global geoid nor localized geoids can be determined exactly in this way, and the Stokes integral solution can be improved by an alternative approach known as the modified Stokes integral (Featherstone et al., 1998). Gravimetric data, a global gravitational model, and a Digital Height Model (DHM) are used as input observations (see, e.g., Denker et al., 1986; Tziavos et al., 1992). In this case, gravity anomalies and geoid undulations are divided into three parts, known as the short, medium, and long wavelength contributions. The long wavelength components are represented by a global geopotential model (GGM); this part can be calculated mathematically, and the so-called residual gravity anomalies are obtained. The short and medium wavelength components are calculated using the Stokes integral with the residual gravity anomalies of the local area. By restoring the topographic masses and the long wavelength components of the geoid heights, the final geoid heights can be attained. Here, it is worth pointing out that FFT or FHT techniques can be used to solve the Stokes integral. The benefit of the remove-restore technique is that only the gravimetric observation data in a limited area are needed (Torge, 1991). So far, it has been extensively studied for the precise determination of local geoids, particularly in mountainous regions (see, e.g., Forsberg, 1991; Flury and Rummel, 2009; Sideris and Forsberg, 1990; Sideris and Li, 1992).
The precision of such derived geoid heights depends on the precision of the three types of wavelength components. Error sources and their effect on geoid determination using the RRT are given in Li and Sideris (1994). The error in the long wavelength components is introduced by the spherical harmonic coefficients; it cannot be removed using dense gravimetric data around the computation point. The error in the medium and short wavelength components depends on the density and precision of the local gravity data, on the coverage and spacing of the digital terrain model, and on improper modeling of the terrain. The relative accuracy of the three components has been given by Schwartz et al. (1987). It is clear that the most significant error results from the geopotential model; the remaining two errors are rather minor and can be reduced or avoided by proper modeling of the topographic effect and the use of dense gravity anomalies and heights. It was concluded in the literature that the accuracy of a gravimetric geoid can be estimated by comparing geoid undulations with those obtained from GPS/leveling in an absolute or relative sense (e.g., Denker and Wenzel, 1987; Mainville et al., 1992).
It is widely acknowledged that the geoid is the equipotential surface of the gravity field at mean sea level. It provides a consistent height system for topography on land and at sea. As the majority of measurements used in geodesy refer to the Earth's gravity field, the determination of the geoid, the physical surface of the Earth, or the figure of the Earth is one of the major objectives of geodesy. Thus, it has been widely studied since the 1880s (Torge, 1991). The earliest definitions of the geoid date back to Gauss in 1828, Listing in 1873, and Helmert in 1880/1884 (Torge, 1991). Since then, scientists have focused their research interests on the determination of the geoid and its applications, such as geophysical interpretations (Bowin et al., 1986; Bowin, 1994; Chao, 1994; Fotiou et al., 1988; Hayling, 1994; Livieratos, 1994; Pick, 1994). Many methods have been developed for the determination of the local geoid. These methods can be categorized into three types: the geometric method, the gravimetric method, and a combination of various types of geometric and gravimetric data (the hybrid method). Astronomical, satellite altimetry, and GPS techniques are the direct (geometric) methods. The geoid determination by solving the GBVP using gravity anomalies is known as the gravimetric method. The combined processing of gravimetric and geometric data can be performed by techniques such as least-squares collocation (LSC), least-squares spectral combination (LSSC), and least-squares adjustment.
Since the geoid is the fundamental reference surface for the orthometric heights of points, the accurate computation of the local geoid is of great significance for mapping and surveying. In recent decades, the geoid precision has been significantly improved (Ayhan, 1993; Denker, 1991; Torge et al., 1989; Tscherning and Forsberg, 1986; Tziavos, 1987; Vaníček and Kleusberg, 1987; Denker and Torge, 1993; Denker et al., 1994, 1995; Milbert, 1993; Sideris and She, 1995; Vaníček et al., 1995); however, specific applications still require more precision. Geometric leveling is commonly used to fix the heights of points on the surface of the Earth; however, because it is highly demanding in labour and time, many efforts have been made in recent decades to develop alternative techniques and technologies. Currently, the relative positioning accuracy of the Global Positioning System (GPS) is a few millimeters plus 1-2 ppm. If the geoid has sufficient accuracy, GPS-derived ellipsoidal heights can be converted to orthometric heights (GPS/leveling). Accurate local geoid determination will enable surveyors to use GPS to its fullest capability, replacing geometric leveling. The possibility of determining orthometric heights without leveling has been widely studied in recent decades; examples may be found in Schwartz et al. (1987), Sideris (1993), and Engelis et al. (1985).
However, the determination of orthometric heights by GPS/leveling is not the only application of the geoid. The precise computation of local geoids also makes it possible to study oceanography. The geoid surface can be used to determine the sea surface topography and the moving features of sea currents (e.g., Engelis et al., 1985; Nerem and Koblinsky, 1994). The sea surface topography (the departure of the dynamic ocean surface from the geoid) is the signal that carries crucial information about ocean circulation patterns.
Gravity data
The gravity anomaly data (Fig. 1) show an irregular distribution with wide gaps, particularly on land. The coverage over the Red Sea is good. The gravity data extend over the region 19° ≤ φ ≤ 35° N and 22° ≤ λ ≤ 40° E. There are 102,419 stations, with gravity anomalies in the range of −210.6 to 315.0 mgal. These points are irregularly distributed with many significant gaps. The marine data have been taken from the National Geophysical Data Center (NGDC) Marine Trackline Geophysics database. The land data have been provided by the National Gravity Standardization Base Net (NGSBN77), the Egyptian Survey Authority (ESA), and the General Petroleum Company (GPC).
GPS benchmarks
The present work uses a GPS data set consisting of 30 GPS stations with known geoid undulations in Egypt to validate the recommended methodologies. The total number of GPS points is low compared with the land area of Egypt, although these stations are evenly dispersed across the whole country (Fig. 2).
Digital height models
Determining the potential of the topographic masses and its first derivative often calls for a combination of fine and coarse digital terrain models. The effect of the topographic/isostatic (T/I) or topographic/condensation (T/C) masses has been calculated using the TC program (Forsberg, 1984). The fine digital height model EGH13S03 (3″ × 3″) and the coarse one EGH13S30 (30″ × 30″) have been used (Abd-Elmotaal et al., 2013). They cover the window 18.5° ≤ φ ≤ 35.5° N and 21.5° ≤ λ ≤ 40.5° E (Fig. 3).
Window-remove-restore technique (WRRT)
In the RRT, the attraction of the T/I or T/C masses is first removed from the free-air gravity anomalies and then restored to the resulting geoidal undulations. In this situation, the reduced gravity anomalies are calculated by (Abd-Elmotaal and Kühtreiber, 2003)

$$\Delta g_{red} = \Delta g_F - \Delta g_{GM} - \Delta g_h , \qquad (1)$$

where Δg_F is the free-air gravity anomaly, Δg_GM is the anomaly computed from the geopotential model, and Δg_h denotes the effect of the T/I or T/C masses on gravity, which will be studied in detail later. Δg_GM can be determined from spherical harmonic coefficients (see Heiskanen and Moritz, 1967).
Then, as part of the conventional RRT, the final calculated geoid N is provided by (Heiskanen and Moritz, 1967)

$$N = N_{GM} + N_{\Delta g} + N_h , \qquad (2)$$

where N_GM denotes the reference field's contribution; N_Δg denotes the contribution of the gravity anomalies after reduction; and N_h denotes the indirect effect on the geoid undulation (the removal or shifting of the masses underlying the gravity reduction changes the gravity potential and, hence, the geoid; this change of the geoid is the indirect effect of the gravity reduction).
The geoid undulation determined from the geopotential gravitational model can be found in Heiskanen and Moritz (1967).
The gravimetric technique of geoid computation involves the evaluation of the Stokes integral, which is given by (Abd-Elmotaal and Kühtreiber, 2003)

$$N_{\Delta g} = \frac{R}{4\pi\gamma} \iint_{\sigma} \Delta g_{red}\, S(\psi)\, d\sigma , \qquad (3)$$

where N_Δg represents the geoidal undulation; dσ refers to the surface element of the integration over the unit sphere; R stands for the mean radius of the reference ellipsoid; γ is the normal gravity of the reference ellipsoid; and Δg_red refers to the reduced gravity anomaly on the grid. The geocentric angle ψ is the angle between the radius vectors of the computation point r_P(R, φ, λ) and the running point r_Q(R, φ′, λ′), given by

$$\cos\psi = \sin\varphi \sin\varphi' + \cos\varphi \cos\varphi' \cos(\lambda' - \lambda) , \qquad (4)$$

where φ, λ are the geodetic latitude and longitude of the computation point and φ′, λ′ denote the geodetic latitude and longitude of the running element.
The Stokes function S(ψ) in Eq. (3) can be expressed as (Heiskanen and Moritz, 1967)

$$S(\psi) = \frac{1}{\sin(\psi/2)} - 6\sin\frac{\psi}{2} + 1 - 5\cos\psi - 3\cos\psi\,\ln\!\left(\sin\frac{\psi}{2} + \sin^2\frac{\psi}{2}\right) . \qquad (5)$$

The conventional gravity reduction for the influence of the T/I or T/C masses is schematically depicted in Fig. 4. This process of eliminating the impact of the topographic and condensation masses has a theoretical issue: since part of their influence is already contained in the global reference field, some of it is removed twice, so a portion of the T/I or T/C masses is accounted for more than once. This can be summarized as follows. For a point P within the circle, the short-wavelength part based on the masses within the circle is computed. The global masses, shown in Fig. 4 as a shaded rectangular area, are responsible for the long-wavelength component of the Earth's gravitational reference field; removing the reference field therefore also removes part of their influence. The masses within the circle (double-hatched in Fig. 4) are thus considered twice.
A potential solution to this problem is to adapt the reference field for a fixed data window to account for the influence of these masses. Fig. 5 schematically illustrates the benefit of the WRRT. Consider an observation at point P. The short-wavelength component, which depends on the T/I or T/C masses, can now be estimated using the masses of the entire data area (the small rectangle). The influence of the T/I or T/C masses of the data window on the reference field is expressed in terms of potential coefficients, which are subtracted to produce the adapted reference field. Therefore, when this adapted reference field is used to remove the long-wavelength component, no portion of the T/I or T/C masses is taken into account twice (there is no double-hatched region in Fig. 5).
Therefore, according to Abd-Elmotaal and Kühtreiber (2007), the removal step of the WRRT reads

$$\Delta g_{red} = \Delta g_F - \Delta g_{GM}^{adapt} - \Delta g_{h}^{win} , \qquad (6)$$

where Δg_GM^adapt is the adapted reference field's contribution and Δg_h^win is the effect of the T/I or T/C masses of the whole data window. The restoration step of the WRRT is represented by

$$N = N_{GM}^{adapt} + N_{\Delta g} + N_{h}^{win} , \qquad (7)$$

where N_GM^adapt represents the contribution made by the adapted reference field.
The gravimetric geoid determination after the window technique is summarized in Fig. 6.
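To make the remove-compute-restore arithmetic of Eqs. (1)-(7) concrete, a minimal numerical sketch is given below. It is not the software used in this study; the grid layout, the array and function names, and the unit conventions (SI units, anomalies in m/s²) are assumptions for illustration only.

```python
import numpy as np

R = 6371000.0        # mean Earth radius (m)
GAMMA = 9.80665      # normal gravity, simplified as a constant (m/s^2)

def stokes_kernel(psi):
    """Stokes' function S(psi), Eq. (5) (Heiskanen and Moritz, 1967)."""
    s = np.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
            - 3.0 * np.cos(psi) * np.log(s + s**2))

def spherical_distance(phi, lam, phi_p, lam_p):
    """Geocentric angle psi between grid points and the computation point, Eq. (4).

    phi, lam are 2-D grids of latitude/longitude in radians; phi_p, lam_p are scalars."""
    cos_psi = (np.sin(phi) * np.sin(phi_p)
               + np.cos(phi) * np.cos(phi_p) * np.cos(lam_p - lam))
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))

def stokes_integral(dg_red, phi, lam, dphi, dlam, phi_p, lam_p):
    """Brute-force discretisation of Eq. (3): N = R/(4*pi*gamma) * sum(dg * S(psi) * dsigma)."""
    psi = spherical_distance(phi, lam, phi_p, lam_p)
    dsigma = np.cos(phi) * dphi * dlam        # surface element on the unit sphere
    mask = psi > 1e-6                         # skip the singular innermost zone
    return R / (4.0 * np.pi * GAMMA) * np.sum(
        dg_red[mask] * stokes_kernel(psi[mask]) * dsigma[mask])

def window_remove(dg_free_air, dg_gm_adapt, dg_tc_window):
    """Removal step of the WRRT, Eq. (6)."""
    return dg_free_air - dg_gm_adapt - dg_tc_window

def window_restore(n_dg, n_gm_adapt, n_indirect_window):
    """Restoration step of the WRRT, Eq. (7)."""
    return n_gm_adapt + n_dg + n_indirect_window
```

In practice the Stokes integral is evaluated with FFT/FHT techniques rather than the direct summation shown here, which is only meant to mirror the equations.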
Computation procedures
According to Heiskanen and Moritz (1967), the Newton integral for the potential of the terrain masses is written as

$$V_P = G \iiint_{v} \frac{\rho}{\ell}\, dv , \qquad (8)$$

where G is the Newton gravitational constant and ρ is the density of the topographic masses, or the density difference between the terrain and water in ocean regions. The Digital Terrain Model (DTM) approximates the Earth's surface in planar space. In this situation, the effects of the T/I or T/C masses can be calculated in three dimensions (prism integration) or two dimensions (Gauss quadrature or the Gauss-Legendre quadrature method). In the present research, when the DTM is treated as prisms of constant density, Eq. (8) may be represented as

$$V_P = G\rho \int_{x_1}^{x_2}\!\int_{y_1}^{y_2}\!\int_{z_1}^{z_2} \frac{dx'\, dy'\, dz'}{\ell} , \qquad (9)$$

where P denotes the computation point with respect to a Cartesian coordinate system, Q denotes the source point in the same Cartesian coordinate system with coordinates x′, y′, and z′ (see Fig. 7), and

$$\ell = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2} \qquad (10)$$

is the Euclidean distance between Q and P, with Δx = x − x′, Δy = y − y′ and Δz = z − z′. Inserting Eq. (10) into Eq. (9) gives the explicit prism integral, Eq. (11), whose closed-form solution, Eq. (12), can be found in Nagy (1966) and Nagy et al. (2000); the first derivative of Eq. (12), which denotes the attraction, Eq. (13), is given by Nagy (1966). In our computations, the computation surface is assumed to be planar for the prisms that lie near the computation point. The Earth's curvature is considered for elements (prisms) that are far away from the point under consideration using

$$\Delta z_{curv} \approx \frac{S^2}{2R} ,$$

where S represents the distance between the point under consideration and the source point and R is the mean radius of the Earth. The superelevation computed above gives the z-shift of the selected prism below the tangential plane. The value of the superelevation is approximate, and it is valid for a few kilometres only.
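As an illustration of the prism integration discussed above, the following sketch evaluates the closed-form vertical attraction of a right rectangular prism after Nagy (1966) and Nagy et al. (2000). The prism limits are given relative to the computation point (the simple 3D coordinate shift discussed in the next paragraph), z is taken positive downward, and the sign convention may need to be adapted to the axis orientation used in the TC program; the numerical values in the example are purely illustrative.

```python
import numpy as np

G = 6.674e-11  # Newton's gravitational constant (m^3 kg^-1 s^-2)

def prism_gz(x_lim, y_lim, z_lim, rho):
    """Vertical attraction of a right rectangular prism at the origin.

    x_lim, y_lim, z_lim : (min, max) prism limits in metres, relative to the
    computation point; rho : constant prism density (kg/m^3).
    Closed form after Nagy (1966) / Nagy et al. (2000), evaluated at the
    eight prism corners with alternating signs.
    """
    gz = 0.0
    for i, x in enumerate(x_lim, start=1):
        for j, y in enumerate(y_lim, start=1):
            for k, z in enumerate(z_lim, start=1):
                r = np.sqrt(x * x + y * y + z * z)
                term = (x * np.log(y + r) + y * np.log(x + r)
                        - z * np.arctan2(x * y, z * r))
                gz += (-1.0) ** (i + j + k) * term
    return G * rho * gz

# Illustrative example: a 100 m x 100 m x 50 m prism of density 2670 kg/m^3
# whose top lies 10 m below the computation point (z positive downward).
# print(prism_gz((-50.0, 50.0), (-50.0, 50.0), (10.0, 60.0), 2670.0))
```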
For simplicity, let us assume that the origin of the coordinate system is at P(x, y, z). In this case, the coordinates defining the prism must be transformed by a simple 3D shift, provided the orientation of the coordinate system is unchanged.
According to Smith et al. (2001), to determine the condensation effect in the context of Helmert's models of condensation, Eq. (9) is changed to

$$V_P^{c} = G \iint_{s} \frac{\kappa}{\ell}\, ds ,$$

where ds = dx′ dy′ and κ is the surface density, which is a function of the topographic height (see Fig. 8).
In linear approximation, the surface density κ can be determined from the constant density of the prism as

$$\kappa = \rho\, H(x', y') ,$$

which follows from mass conservation based on a local mass balance inside any terrain column.
The impact of the condensation masses on the potential and the first derivative of the condensed masses may therefore be expressed accordingly, where z₁ is the height of the computation point. In this case, Δz is equal to the orthometric height, with z* = H(x, y) in the case of Helmert's 1st approach of condensation and z = −D − H(x, y) in the case of Helmert's 2nd approach of condensation.
Topographic-condensation masses harmonic analysis
In the Helmert condensation reduction, the terrain masses are relocated along the local vertical. In Helmert's 1st condensation method, they are condensed onto a parallel surface located 21 km below the geoid (the original surface of Helmert's 1st condensation; Heck, 2003). In the 2nd Helmert condensation reduction, the masses are condensed directly onto the surface of the geoid. With the coefficients derived from Heck (2003), the condensed compensated topography resulting from these models can be represented as a series of spherical harmonics, Eq. (20), in which R_C represents the radius of the (approximate) condensation sphere; it can be identified as R_C = R − 32 km in the case of Helmert's 1st condensation method, while R_C = R holds in the case of Helmert's 2nd method. ρ_cr represents the crust density, and ρ is the mean density of the Earth. If the topographic heights of the Earth's surface are stated in terms of their corresponding rock heights, the aforementioned equations can be simplified (Rummel et al., 1988), and a planar approximation allows a further simplified expression to be used.
Gravity reduction computations
Egypt's residual gravity anomalies and geoid heights were computed using the following parameter set. The geopotential model GO_CONS_GCF_2_TIM_R3 (Pail et al., 2011) has been used up to degree and order 250 to implement the conventional RRT. An adapted reference field has been generated by subtracting the harmonic coefficients of the T/I or T/C masses of the data window, determined by Eq. (20), from the GO_CONS_GCF_2_TIM_R3 coefficients. The WRRT has been applied using this adapted reference field. The statistics of the reduced anomalies for the conventional RRT and the WRRT, for the various gravity reduction strategies, are shown in Table 1. It is worth noting that the reduced anomalies are approximately the same for all the different gravity reduction techniques. The window strategy provides the best reduced gravity anomalies, as Table 1 demonstrates: the standard deviation has decreased by nearly 30%, and the range has shrunk by one-third. In addition, the reduced anomalies are centered and unbiased. Because of this characteristic, the window-technique reduced anomalies are especially well suited for geodetic applications and perform effectively in them.
Geoid computations
In the current study, three different methods have been utilized to calculate the gravimetric geoid for Egypt considering the WRRT: (1) the window geoid for the Airy-Heiskanen model; (2) the window geoid for Helmert's 1st model of condensation; and (3) the window geoid for Helmert's 2nd model of condensation.
The statistics of the geoid undulations for the three approaches are displayed in Table 2.
Every calculated geoid is compared with the GPS/leveling geoid. First, the polynomial structure of the absolute geoid differences between the GPS/leveling geoid and the conventional RRT (Helmert's first method of condensation, for example) is shown in Fig. 9. The absolute geoid discrepancies range between −2.8 m and 9.2 m, with an average of 1.8 m and a standard deviation of 3.0 m. The large residual values actually result from the GPS data used to determine the geoid, but these absolute values can be minimized by using a proper best-fitting technique for the geoid. Second, Fig. 10 shows the absolute geoid discrepancies between the GPS/leveling geoid and the window approach using the same method of gravity reduction (Helmert's first method of condensation). The polynomial structure of the differences in Fig. 10 is better than in the case of the conventional RRT (Fig. 9): the standard deviation decreases by around 20 cm and the range of discrepancies by about 1.2 m, with the values ranging between −3.4 m and 7.7 m, an average of 0.6 m, and a standard deviation of 2.8 m.
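The "best fitting" mentioned above can be illustrated by a least-squares fit of a corrector surface to the gravimetric-minus-GPS/leveling differences. The four-parameter model below is a common choice in the literature, but it is an assumption here and not necessarily the model applied in this study; the function and variable names are illustrative.

```python
import numpy as np

def fit_corrector_surface(lon, lat, dn):
    """Fit dN = a0 + a1*cos(lat)*cos(lon) + a2*cos(lat)*sin(lon) + a3*sin(lat)
    (a standard 4-parameter datum-shift model) to the geoid differences dN
    (gravimetric minus GPS/leveling) by least squares."""
    lon_r, lat_r = np.radians(lon), np.radians(lat)
    A = np.column_stack([np.ones_like(lat_r),
                         np.cos(lat_r) * np.cos(lon_r),
                         np.cos(lat_r) * np.sin(lon_r),
                         np.sin(lat_r)])
    coeff, *_ = np.linalg.lstsq(A, dn, rcond=None)
    residuals = dn - A @ coeff
    return coeff, residuals

# The statistics (mean, standard deviation, range) of 'residuals' then correspond
# to the comparison with the GPS/leveling geoid after fitting.
```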
Furthermore, Table 3 displays the statistics of the absolute geoid differences between the geoids determined in the present study and the GPS/leveling geoid. It can be observed that the window remove-restore technique gives approximately the same differences with respect to the GPS/leveling geoid (in terms of the mean difference as well as the range and standard deviation) for the three gravity reduction techniques used in this investigation.
Conclusions
The combination of the geoid wavelengths cannot be handled appropriately by the traditional Stokes method with an unmodified Stokes kernel in the remove-restore scheme. In this research, the WRRT is applied to three gravity reduction techniques: the Airy-Heiskanen model and Helmert's first and second schemes of condensation. The geopotential model GO_CONS_GCF_2_TIM_R3 has been used to handle the long-wavelength contribution to the gravity anomalies and the geoid determination. This research comprises two main objectives: first, the necessary theoretical background has been studied; then, the gravity reduction techniques have been used for geoid determination for Egypt. For performing the computation, the TC program (Forsberg, 1984) has been used after modification to calculate the attraction of the condensation masses for Helmert's first and second methods of condensation. The condensation surface is taken to be 32 km below mean sea level in the case of Helmert's first method of condensation. The crustal density has been presumed constant and equal to 2.67 g/cm³. The results of the first step of our computations revealed that the reduced anomalies obtained using the window approach are smooth and oscillate around zero for the three methods of gravity reduction. These anomalies can be used more efficiently to determine the geoid and can also be used in geophysical interpretation. Also, the geoid determined from the three methods of gravity reduction is approximately the same (there are some minor differences), which means that the WRRT handles the gravity reduction properly for all gravity reduction techniques. Furthermore, although the indirect effect is very small for Helmert's 2nd method of condensation, it is very important for reaching the 1 cm geoid level. Finally, geoid undulations computed from Helmert's 2nd condensation method are better than those computed from the other two methods. The results of this study confirm that, in principle, all gravity reductions are equivalent and should yield the same geoid if the indirect effect is taken into account, and that the indirect effect should be as small as possible.
In addition, the anomalies resulting from this research fulfill the requirements that should be considered when deciding to apply a reduction method to compute the geoid (the anomalies are small and smooth and oscillate around zero). The results from this investigation may also be used to simulate the Earth's internal gravity field using geophysical inversion procedures to determine the terrain's true density. Finally, with the indirect effect taken into account and with proper application of the gravity reduction technique, only minor differences remain between geoid undulations computed from different kinds of gravity reduction techniques. Such differences may come from the lack of gravity data in land areas.
Fig. 9 .
Fig. 9. The differences in absolute geoid between the GPS/leveling geoid and the traditional remove-restore (Helmert's first method of condensation) approach; the contour interval is 50 cm.
Table 1 .
Statistics of the reduced anomalies for different gravity reduction techniques.
Table 2 .
Statistics of geoid undulations for Egypt for different gravity reduction techniques. Units are in (m).
Table 3 .
Statistics of the residuals between the calculated geoids and the GPS/leveling geoid for all different gravity reduction techniques. Units are in (m).
"Geology",
"Geography"
] |
Exploratory Analysis of Urban Climate Using a Gap-Filled Landsat 8 Land Surface Temperature Data Set
The Landsat 8 satellite has retrieved land surface temperature (LST) resampled at a 30-m spatial resolution since 2013, but urban climate studies frequently use only a limited number of images due to problems related to missing data over the city of interest. This paper endorses a procedure for building a long-term, gap-free LST data set in an urban area using the high-resolution Landsat 8 imagery. The study is applied to 94 images available through 2013–2018 over Bucharest (Romania). The raw images, containing between 1.1% and 58.4% missing LST data, were filled in using the Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm implemented in the sinkr R package. The resulting high-spatial-resolution gap-filled land surface temperature data set was used to explore the LST climatology over Bucharest (Romania), a large urban area, at monthly, seasonal, and annual scales. The performance of the gap-filling method was checked using a cross-validation procedure, and the results support the development of an LST-based urban climatology.
Introduction
The increasing population and the permanent quest for comfort and safe shelter have triggered intense urbanization processes all over the world, mainly in the 20th and 21st centuries. Built-up areas substantially modify the environment, and the features of the local atmospheric envelope are changed to the point that a new type of climate is formed. The urban climate results from a different composition of the radiation budget and from higher temperatures and lower humidity than in the surrounding rural areas. The high heterogeneity of the urban environment is replicated in a corresponding diversity of local climate conditions within a city. For example, green areas are cooler and more humid than impervious patches, urban canyons and squares disturb the wind flow, and building heights and density are so influential for the urban climate that they form the base of the definition of local climate zones (LCZs) [1][2][3]. As a consequence, urban climate modelling and weather forecasting require adequate data relevant for different urban microclimates.
Due to some insurmountable challenges related to the deployment of sensors, such as implementation and maintenance costs, it is very difficult to capture the climate of all distinct spatial tracks in a city using ground measurements. In recent years, urban meteorological networks (UMNs) have been implemented in several cities [4][5][6][7], and citizen observatories are increasingly used in urban climate research [8,9], but a full coverage of the climatic instances of a city is still problematic. From a climatic perspective, the main differences between satellite remote sensing products refer to their spatial-temporal resolutions and the time span of available data. For urban climate studies, the ideal product would have a spatial resolution of ≤1 km [11], daily frequency, and temporal continuity over the area of interest. In this respect, one can remark that the Landsat missions jointly operated by NASA and the U.S. Geological Survey have provided a series of Earth observation satellites since 1972. The spatial resolution of the Landsat series reached 30 m (resampled) and contributed significantly to the development of urban climate research [12][13][14]. The availability of such data has stimulated detailed investigations of the surface urban heat island (SUHI) and its correlation with land use/land cover [15,16]. Nevertheless, the low frequency of full-coverage images, with all or the majority of pixels valid over certain areas, and the coarse temporal resolution (i.e., one image every 8 or 16 days) limit the potential benefits of using the Landsat series for urban climate research. Different solutions have been proposed to address such limitations or to improve the quality of the outputs. Cristóbal et al. (2009) [17] presented an enhanced methodology to retrieve LST from Landsat 4 TM, Landsat 5 TM, and Landsat 7 ETM+ using different water vapor ranges.
Sparse or even single Landsat images have often been used in urban climate studies [14,15], while temporally and spatially continuous data sets have rarely been employed, simply because they are not available. For example, Lemus-Canovas et al. (2020) [18] estimated Barcelona's metropolitan daytime hot and cold extremes using the LST retrieved from 24 Landsat 8 images, Tsou et al. (2017) [15] assessed the SUHI of Shenzhen and Hong Kong based on 4 Landsat 8 images, and Kaplan et al. (2018) [14] explored the case study of the Skopje SUHI using only 2 July images. The value of such investigations of the urban climate is doubtless, but one should also acknowledge the shortcomings of the results due to the limited number of samples.
Comparatively, this study addresses the need for temporal and spatial continuity of remote sensing data, and it explores the climatology of the LST in a large urban area (Bucharest, Romania) based on 94 high-spatial-resolution images retrieved from Landsat 8. We used the Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm [19] in order to solve the inherent problem of missing data within satellite images, so that the exploratory study is ultimately based on a gap-free data set. Zhou et al. (2017) [20] showed that the DINEOF reconstruction method can capture the impact of land cover types on LST, pledging for a reasonable spatial pattern, which is particularly important in urban climatology. The temporal extent and the data objectively available for this study support a methodological approach, which can certainly be extended when more data are collected in the future over the same area. In the end, the seasonal and annual variations of the LST were explored.
High-spatial-resolution data sets with full coverage over an urban area have multiple applications beyond scientific research. For example, the temperature distribution over smaller administrative units within a city can provide useful information for urbanism, health risk assessment, or building industry. In this respect, we illustrate the distribution of the LST values over census units of Bucharest.
The manuscript is structured in four sections, as follows: after the introduction (1), we briefly present relevant geographical and climatic facts about the study area and detail the data and methods (2), which underpin the results (3), and we supply a set of concluding remarks (4).
Study Area
The case study focuses on Bucharest, the capital and the largest city of Romania, with an estimated population of about two million inhabitants [21], lying over approximately 240 km². The climate of Bucharest is monitored by three WMO stations, namely one 'urban' weather station, i.e., București-Filaret, placed within the city limits; and two 'peri-urban' weather stations, i.e., București-Băneasa, at 10 km N of the downtown, and București-Afumați, at approximately 11.5 km NE of the downtown (Figure 1). According to the Köppen-Geiger climate classification [22], Bucharest has a hot-summer humid continental climate (Dfa), with the coldest month, January, averaging below 0 °C, while in July and August the average temperature is above 22 °C, and above 10 °C from April to October. The wettest period is April-July (Figure 2).
Data and Methods
The Landsat 8 mission provides timely high-quality visible and infrared images, sufficiently consistent with other satellite data in terms of acquisition geometry, calibration, coverage, and spectral characteristics [23]. We used the LST derived from the bands 10 (10.60-11.19 µm) and 11 (11.50-12.51 µm), resampled at 30 m, from Landsat 8 TIRS (Thermal Infrared Sensors) instruments. The Landsat 8 TIRS products use split window algorithms and techniques for correcting atmospheric disturbances, such as absorption and emission, or surface emissivity inferred from MODIS land-cover calculations. Emissivity is a critical variable for the LST estimation. This study used Landsat 8 LST with NDVI-based emissivity, estimated from the Landsat visible and near-infrared bands and typical emissivity values, retrieved from http://rslab.gr/downloads.html, with full technical details available online [13].
The scenes were retrieved between 08:58 and 09:04 UTC, and the area of interest covers the administrative perimeter of Bucharest and its surroundings within a rectangle ranging between 25.90° and 26.32° East, and 44.32° and 44.59° North. This study was based on 94 Landsat 8 images acquired between 2013 and 2018, with an average data coverage of 91.4% (Table 2). Table 3 shows the degree of completeness of the 94 images when downloaded. In order to secure the quality of the results and to minimize the sampling and processing errors, the data beyond the 1st and 99th percentiles were filtered out. Missing data are a major barrier for satellite meteorology and especially for climate applications, which require long-term records and good spatial coverage, and constant efforts address this issue. Henn et al. (2013) [24] compared the performance of five techniques used to fill in missing temperature data, namely (a) spatiotemporal correlations based on empirical orthogonal functions (EOFs), (b) time series diurnal interpolation, and (c-e) three variations of lapse-rate-based filling. They found that the spatiotemporal correlations using EOF reconstruction were the most accurate for a large number of stations and missing data. Long et al. (2020) [25] combined MODIS and SEVIRI LST in order to generate a complete daytime data set, and Zhao et al. (2020) [26] also used MODIS to obtain all-weather data. However, high-spatial-resolution data sets allow in-depth analysis of the LST at an urban and intra-urban scale. Considering the results provided by Henn et al. (2013), this study addressed such limitations by reconstructing complete LST data sets for the selected Landsat images through 2013-2018 using the DINEOF algorithm [27][28][29]. The selection of this method was justified by its relatively simple application in situations where relevant co-variables are not available.
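The 1st-99th percentile screening mentioned above can be sketched in a few lines; the function and variable names below are illustrative assumptions, not the code used by the authors.

```python
import numpy as np

def percentile_filter(lst, lower=1.0, upper=99.0):
    """Mask LST values of one scene that fall outside the [1st, 99th] percentile range."""
    lo, hi = np.nanpercentile(lst, [lower, upper])
    filtered = lst.copy()
    filtered[(lst < lo) | (lst > hi)] = np.nan
    return filtered
```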
Essentially, the missing LST values of each Landsat 8 image were reconstructed using the function DINEOF implemented in the sinkr R package [30]. This method does not require a priori knowledge of the statistics of the full data set [31], which is an advantage for analyzing extensive data sets.
DINEOF is an iterative method for calculating the field values at missing positions. It implies the application of the following routine [19,29,32] (a minimal sketch in code follows the list):
1. The spatial and temporal means of the observation data are removed from the raw LST dataset.
2. The 'no observation' pixels are replaced with zero values.
3. The resulting dataset is used to compute the first EOF, and the values obtained during the EOF decomposition are used to replace the missing data.
4. Sequential EOFs are calculated iteratively until a user-defined convergence criterion is reached.
5. The procedure is repeated by computing two EOFs, three EOFs, etc.
6. The total number of EOFs is determined by the results of the cross-validation procedure, commonly checked with 1% of the valid data selected at the beginning of the procedure.
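To make the routine above concrete, the following Python sketch implements a bare-bones DINEOF-style reconstruction (steps 1-5) on a space × time matrix of LST scenes. It is a didactic approximation of the algorithm described in [19,29,32] and of the sinkr implementation used in this study, not the actual code; the convergence tolerance, the maximum number of EOFs, and the simplified mean removal are assumptions.

```python
import numpy as np

def dineof_fill(X, max_eofs=10, tol=1e-5, max_iter=200):
    """Gap-fill a (space x time) matrix X containing NaNs, DINEOF-style.

    Returns the reconstructed matrix. A real application would also hold out
    ~1% of the valid values for cross-validation to choose the number of EOFs (step 6).
    """
    missing = np.isnan(X)
    mean = np.nanmean(X)                     # step 1 (spatial/temporal means collapsed
    Xa = np.where(missing, 0.0, X - mean)    # to a single mean for brevity); step 2
    for n_eofs in range(1, max_eofs + 1):    # step 5: repeat with 1, 2, 3, ... EOFs
        for _ in range(max_iter):            # steps 3-4: iterate the EOF reconstruction
            U, s, Vt = np.linalg.svd(Xa, full_matrices=False)
            recon = (U[:, :n_eofs] * s[:n_eofs]) @ Vt[:n_eofs, :]
            delta = np.sqrt(np.mean((recon[missing] - Xa[missing]) ** 2))
            Xa[missing] = recon[missing]     # replace only the missing values
            if delta < tol:                  # user-defined convergence criterion
                break
    return Xa + mean

# Usage sketch: stack the 94 scenes as columns of an (n_pixels x n_scenes) matrix,
# fill it, and reshape each column back to the 30-m image grid.
```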
The use of only one selected EOF configuration may be considered a limitation of the DINEOF algorithm [33,34], but the accuracy of the results reported here supports its applicability in urban climate research. Overall, the cross-validation pixels were utilized to calculate the accuracy by comparing the pixels of the raw data sets with the pixels of the filled data sets. Figure 3 exemplifies the visual performance of the DINEOF method for filling in the missing LST values of two Landsat 8 images over the urban area of Bucharest (Romania).
The complete workflow of this study, from retrieving the Landsat 8 LST images to potential applications of the resulting gap-free data set, is provided in Figure 4. The approach was applied here for the city of Bucharest, but a similar flow would be applicable for any urban area, and various applications may be proposed.
Gap Filling the Landsat 8 Land Surface Temperature Data Set: Results and Validation
By applying the DINEOF gap-filling method, we obtained 94 Landsat 8 images with full coverage of the study area. Between 2013 and 2018, the annual number of available images ranges between 13 and 21, while the monthly number of images varies between 2 in February and 13 in July and August (Figure 5). More frequent images are available during the warm season (i.e., 9 to 13 images per month from March to October) due to the lower cloudiness. In order to evaluate the accuracy of the DINEOF gap-filling method, we artificially simulated gaps in the original LST data set, and the estimated values were compared to the raw LST values. The artificial gaps were created on a subset representing 50% of the selected LST images, with 1000 randomly selected pixels assigned as not available (NA) for each image. The DINEOF gap-filling method was then tested on the entire data set, namely on all the images containing artificially created gaps. Figure 6 shows the relationship between the estimated and original LST values. One can notice that the DINEOF algorithm results in a gap-filled LST data set that is very well correlated and statistically consistent with the initial LST data set, i.e., Pearson's correlation coefficient r² = 0.979, and only a −0.3 °C difference between the average values (Table 4).
The performance of the matching between the gap-filled and raw LST data sets was also summarized in terms of the correlation, root-mean-square (RMS) errors, and amplitude of their variance (standard deviations) using a Taylor diagram [35], computed for four distinct land-cover categories derived from the Urban Atlas LCLU 2012 (https://land.copernicus.eu/local/urban-atlas/urban-atlas-2018), namely urban, rural, forest, and water (Figure 7). The following can be deduced: (a) the performance of the DINEOF method is very good and similar across the land-cover categories (i.e., very high correlation coefficients and low RMS errors); (b) based on the standard deviation values, the gap-filled data are closer to the raw data for land-cover categories closer to nature (i.e., forest and water) than for the more anthropic ones (i.e., rural and urban); and (c) there is a clear distinction between the more natural and the more anthropic categories in terms of RMS errors, but the values are low in all cases (±2.0 °C).
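The artificial-gap validation described above can be sketched as follows; the statistics computed (correlation, bias, RMSE, standard deviations) are those summarized in Table 4 and in the Taylor diagram, while the function and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_artificial_gaps(scene, n_pixels=1000):
    """Mask n_pixels valid pixels in a 2-D LST scene; return the masked scene and indices."""
    valid = np.flatnonzero(~np.isnan(scene))
    idx = rng.choice(valid, size=n_pixels, replace=False)
    masked = scene.copy()
    masked.flat[idx] = np.nan
    return masked, idx

def agreement_stats(raw, filled, idx):
    """Correlation, bias, RMSE and standard deviations at the held-out pixels."""
    x, y = raw.flat[idx], filled.flat[idx]
    r = np.corrcoef(x, y)[0, 1]
    bias = np.mean(y - x)
    rmse = np.sqrt(np.mean((y - x) ** 2))
    return {"r2": r ** 2, "bias": bias, "rmse": rmse,
            "std_raw": np.std(x), "std_filled": np.std(y)}
```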
Figure 7.
Statistical summary of the matching performance obtained using the DINEOF gap-filling method. The gap-filled data set was compared to observations using a validation subset containing 50% of the selected LST images, and 1000 randomly selected pixels assigned as NA for each image.
Climatic Analysis of the Land Surface Temperature over Bucharest Using the Gap-Filled Landsat 8 Data Set
Climate research requires long-term data and, if possible, full spatial coverage. While a 30-year period is recommended for climate prediction purposes, shorter time intervals may perform as effectively as 30-year averaging periods and provide useful overall climate information [36]. It is undoubtedly an advantage to have early climate information over an area, even if it comes from shorter periods, rather than to delay climate research and wait for the completion of a 30-year period of data. In this respect, the spatial completeness and the temporal extent of the reconstructed Landsat 8 LST data series support a preliminary climatic analysis over Bucharest at monthly, seasonal, and annual scales. Figure 8 shows the average LST values through May-September, namely the months with more than 9 images each along the period 2013-2018. Figures 9-11 illustrate the seasonal and, respectively, annual and multi-annual LST averages integrating all the images available over the period analyzed in this study. One can remark that the LST seasonality is well captured by the Landsat data (Figures 8 and 9). For example, June, July, and August (JJA) are the hottest months; the LST values decrease in spring (March, April, and May-MAM) and autumn (September, October, and November-SON); and winter is the coldest season (December, January, and February-DJF). Moreover, this is consistent with the thermal climate over the area of interest, as described in Section 2.1. Mean daily LSTs above 30 °C prevail from May to August, while 40-44 °C are common LSTs over the urban area of Bucharest during the daytime in July (Figure 8). In the central part of the city, the multiannual average LST values of the warm seasons range between 22 and 27 °C, while over the urban periphery the LST is 20-22 °C (Figure 11). The estimated SUHI intensity of 2.0-5.0 °C is in good agreement with previous work based on MODIS LST products [37].
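The monthly, seasonal, and annual composites discussed in this section are per-pixel averages of the gap-filled scenes; a minimal sketch is given below, with the dates and array names as assumptions.

```python
import numpy as np

SEASONS = {"DJF": (12, 1, 2), "MAM": (3, 4, 5), "JJA": (6, 7, 8), "SON": (9, 10, 11)}

def composite(scenes, months, selected):
    """Per-pixel mean of the gap-filled scenes acquired in the selected months.

    scenes : ndarray (n_scenes, ny, nx) of gap-filled LST
    months : sequence of acquisition months (1-12), one per scene
    selected : iterable of months to average (e.g. SEASONS["JJA"])
    """
    keep = np.isin(months, list(selected))
    return scenes[keep].mean(axis=0)

# Example: summer (JJA) and multi-annual means
# jja_mean = composite(lst_stack, scene_months, SEASONS["JJA"])
# multi_annual_mean = lst_stack.mean(axis=0)
```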
In order to confirm the accuracy of the method and the reliability of the results, the Landsat 8 LST values were compared with the corresponding air temperature (Ta) retrieved at the same time of day from the pixels containing the three WMO meteorological stations of Bucharest (i.e., București-Afumați, -Băneasa, and -Filaret). According to Jin and Dickinson (2010) [38], the difference LST-Ta is higher at noon (i.e., 15 °C) and lower at nighttime (i.e., <5 °C), while Mbuh et al. (2019) [39] found differences of up to 5 °C in Chicago and Minneapolis, based on Landsat 4, 5, and 7 images from 1984-2016. For Bucharest, the LST values retrieved from Landsat 8 OLI and TIRS over the pixels corresponding to the weather stations București-Filaret, -Băneasa, and -Afumați are higher than Ta by 0.1 to 4.0 °C in 30% of the cases, and by 4.1 to 6.0 °C in almost 30% of the cases (Figure 12). One can notice a strong correlation between LST and Ta, with Pearson's correlation coefficients (R²) above 0.9 over each of the three meteorological stations analyzed here (Figure 13).
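The station comparison summarized in Figures 12 and 13 can, in principle, be reproduced from matched LST/Ta samples. The following sketch is a hypothetical illustration; the function name, the difference bins, and the use of NumPy are assumptions, and squaring the returned Pearson's r gives the R² quoted in the text.

```python
import numpy as np

def compare_lst_with_station_ta(lst_series, ta_series):
    """Compare satellite LST at a station pixel with air temperature (Ta)
    measured at the same acquisition times.

    lst_series, ta_series : 1-D arrays of matched observations (degrees C)
    Returns Pearson's r and the share of cases in the difference bins
    used in the text (0.1-4.0 C, 4.1-6.0 C, above 6.0 C).
    """
    lst = np.asarray(lst_series, dtype=float)
    ta = np.asarray(ta_series, dtype=float)
    diff = lst - ta

    r = np.corrcoef(lst, ta)[0, 1]               # Pearson's r; r**2 gives R2
    bins = {
        "0.1-4.0 C": float(np.mean((diff >= 0.1) & (diff <= 4.0))),
        "4.1-6.0 C": float(np.mean((diff >= 4.1) & (diff <= 6.0))),
        "> 6.0 C":   float(np.mean(diff > 6.0)),
    }
    return r, bins
```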
The gap-filled high-spatial-resolution LST data set based on the Landsat 8 imagery can be combined with other information to derive products that are very useful in different applications. For example, the distribution of the LST over the census units of Bucharest (Figure 14) is fundamental information for assessing the thermal hazard risk at a fine scale, and it will be investigated as a follow-up of this study. Such details cannot be retrieved unless high-spatial-resolution data, such as that obtained from gap-filling Landsat 8 imagery, are available. One can summarize several clear benefits and inherent limitations of the DINEOF-based gap filling of satellite imagery data such as Landsat 8. The main benefits consist of (1) the delivery of a data set at high spatial resolution, (2) with full coverage over remote and heterogeneous areas where in situ weather station measurements are lacking or discontinuous, (3) obtained at a relatively low cost for the end user [40], and (4) produced by a gap-filling procedure that is easily applicable and does not require supplementary variables. The coarse temporal resolution of Landsat 8 imagery remains an insurmountable barrier for some applications, such as operational forecasting, monitoring rapidly developing phenomena, or analyzing nighttime processes. However, the Landsat 8 data sets can be used for climate monitoring or risk studies related to more stable variables, such as LST [41].
Conclusions
Gap-free satellite remote sensing products at a fine resolution may be an excellent compromise between the need for full spatial coverage and temporal continuity of climate data in urban areas.
The results of this study demonstrated that 30-m-spatial-resolution Landsat 8 imagery can be extremely useful for retrieving the LST in urban areas, despite the spatial and temporal discontinuity of the data sets. Although missing data, due to factors like cloudiness or a limited number of satellite passages, has been an important barrier for the proper use of Landsat LST in urban climate research, this study employed an efficient solution for obtaining more extended and better data sets, regarding both temporal and spatial coverage. This is an important advantage, especially in remote or heterogeneous areas, where filling the observational spatiotemporal gaps in data becomes more crucial [40]. This study demonstrated the advantage of applying the DINEOF method for filling LST gaps in satellite imagery in order to take full advantage of the high spatial resolution of the Landsat 8 data sets.
The DINEOF gap-filling algorithm was applied to each Landsat 8 image with at least 40% valid LST values over Bucharest (Romania), and generated a consistent data set of 94 images with complete spatial coverage of the urban area through 2013-2018. The DINEOF procedure was validated by filling artificially created gaps in Landsat 8 images and evaluating the results against the initial values in terms of statistical parameters, such as the correlation, RMS error, and variance. The returned LST values corresponding to the artificial gaps were found to be very well correlated and statistically consistent with the original data, attesting to an efficient gap-filling procedure. The land cover biases the performance of the gap filling, and the method performed slightly better for natural land cover categories, very likely due to their higher temperature homogeneity, but the results were very similar for all the categories.
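The artificial-gap validation described here follows a generic cross-validation pattern: withhold known pixels, reconstruct them, and score the reconstruction. The sketch below illustrates that pattern with a placeholder `fill_fn` standing in for a DINEOF implementation; the interface, parameter names, and reported statistics are illustrative assumptions, not the authors' code.

```python
import numpy as np

def validate_gap_filling(lst_stack, fill_fn, frac=0.1, seed=0):
    """Validate a gap-filling routine by masking a random fraction of the
    valid pixels, reconstructing them, and comparing with the withheld values.

    lst_stack : (n_scenes, H, W) float array with np.nan marking real gaps
    fill_fn   : callable taking a gappy stack and returning a filled stack
                (e.g. a DINEOF implementation)
    frac      : fraction of valid pixels to withhold as artificial gaps
    """
    rng = np.random.default_rng(seed)
    valid = ~np.isnan(lst_stack)
    withheld = valid & (rng.random(lst_stack.shape) < frac)

    test_stack = lst_stack.copy()
    test_stack[withheld] = np.nan                # create artificial gaps
    filled = fill_fn(test_stack)

    truth, estimate = lst_stack[withheld], filled[withheld]
    rmse = float(np.sqrt(np.mean((estimate - truth) ** 2)))
    corr = float(np.corrcoef(truth, estimate)[0, 1])
    var_ratio = float(np.var(estimate) / np.var(truth))
    return {"rmse": rmse, "correlation": corr, "variance_ratio": var_ratio}
```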
The climatic analysis of the gap-free LST data set illustrated the role of the geographical setting and urban land cover on the local climate. The comparison between LST and Ta at the three WMO stations monitoring the climate of Bucharest (i.e., București-Afumați, -Băneasa, and -Filaret) returned strong correlation coefficients (R² > 0.9), and approximately 70% of the differences were less than 6 °C, in very good agreement with previous studies.
Further research may be envisaged to complete this study with other data retrieved from sources such as updated Landsat 8 acquisitions, previous Landsat or other missions, ground-based data, modelling outputs, and ancillary data. Relevant improvements in output accuracy may be pursued through more complex validation campaigns using ground-based LST measurements and other satellite products. The LST distribution over the census units of Bucharest exemplifies that, by combining (1) high-spatial-resolution Landsat 8 images, with full urban coverage, and (2) detailed ground information, one can derive very useful products and applications.
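Deriving the LST distribution over census units amounts to zonal statistics on the gap-filled grid. A minimal sketch is given below; the rasterized census-unit layer, the function name, and the NumPy-based approach are assumptions for illustration rather than the authors' workflow.

```python
import numpy as np

def lst_by_census_unit(multi_annual_lst, unit_raster):
    """Aggregate a gap-filled LST grid over census units (zonal means).

    multi_annual_lst : (H, W) array of LST values (degrees C)
    unit_raster      : (H, W) integer array; each cell holds the ID of the
                       census unit it belongs to (0 = outside the city)
    Returns {unit_id: mean LST}, the kind of product usable for fine-scale
    thermal hazard assessment.
    """
    stats = {}
    for uid in np.unique(unit_raster):
        if uid == 0:
            continue                              # skip cells outside units
        cells = multi_annual_lst[unit_raster == uid]
        cells = cells[~np.isnan(cells)]
        if cells.size:
            stats[int(uid)] = float(cells.mean())
    return stats
```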
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 9,249.8 | 2020-09-01T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
STRATEGIC INTELLIGENCE ANALYSIS OF RELIGION-BASED HATE SPEECH IN SOCIAL MEDIA: A CASE STUDY OF THE DIRECTORATE INTELLIGENCE AND SECURITY AT POLDA METRO JAYA
The development of communication technology continues to advance rapidly. Social media is able to present individual voices that have never been heard through mainstream media coverage before. In Indonesia, the changes in the world have become increasingly clear when the era of communication has flooded the lives of religious communities. Religious discourse in Indonesia in recent years has been colored by accusations of religious intolerance in the form of hate speech through social media. The prohibition on the construction of houses of worship, prohibition of book discussions, attacks on certain groups, heresy from certain religious groups, threatening expressions of hatred, and so on are a series of acts of religious intolerance so that the potential for social conflict appears clearly. The Police Intelligence and Security as an institution that has the obligation to carry out early detection of threats must play an active role in making prevention and anticipation efforts. This research examines the Strategic Intelligence Analysis of Religious-Based Hate Speech on Social Media by the Directorate of Intelligence and Security at Polda Metro Jaya.
INTRODUCTION
The development of communication technology continues to advance rapidly. Access to communication technology makes it easier for people to socialize. These conveniences are offered through the emergence of many social media platforms that everyone can use to interact. The development of social media itself is moving very fast. Where in the past we knew only MySpace, we are now spoiled by the existence of Facebook, Twitter, Instagram, WhatsApp and others, all of which have a positive impact by adding insight and knowledge and by spreading the values of solidarity, tolerance and optimism.
Social media is able to present individual voices that could never be heard before through the coverage of mainstream media. In Indonesia, the presence of social media also has an impact. The government's response to the phenomenon of hate speech is set out in several articles under which spreaders of hate speech can be charged, including the Criminal Code, Law No. 11 of 2008 concerning Electronic Information and Transactions (ITE), and Law No. 40 of 2008 concerning the Elimination of Racial and Ethnic Discrimination. Beyond that, spreaders of hate speech can also be charged under articles related to hate speech that are regulated in the Criminal Code and in other laws outside the Criminal Code. The National Police itself appears to view hate speech as a problem that must be resolved immediately because of its dangerous impact on the life of the nation and state. This can be seen from the issuance of the Chief of Police Circular Number SE/6/X/2015, dated 8 October 2015, which states that hate speech can take the form of a criminal offense regulated in the Criminal Code (KUHP) and other criminal provisions outside the Criminal Code, in the form of: 1) insult; 2) defamation; 3) blasphemy; 4) unpleasant actions; 5) provocation; 6) incitement; 7) spreading fake news; all of which have the purpose of, or could result in, acts of discrimination, violence, loss of life and social conflict.
The issue of hate speech is receiving more and more attention from the public, both nationally and internationally, along with the increasing concern for the protection of human rights (HAM), so it is not surprising that the National Police Chief issued this circular. The biggest potential source and trigger of hate speech is social media, such as Twitter, Facebook, and independent blogs, whose existence is among the biggest innovations of the early 21st century. Social media is not only a medium for connecting and sharing; it can also be used in politics and other fields.
Religious discourse in Indonesia in recent years has been marked by many phenomena of religious intolerance. The prohibition of building houses of worship, bans on book discussions, attacks on certain groups, declarations that certain religious groups are deviant, threatening expressions of hatred, and so on are a series of acts of religious intolerance. This is even reflected in figures showing an increasing and worrisome trend. The perpetrators acted out of misperception, provoked by statements of religious leaders who spread hatred towards other groups with different beliefs. In other words, violence in the name of religion may occur because of perceptions created by rampant acts of hate speech, whether through someone's spoken words or public writing. These acts of violence are aimed at spreading and inciting hatred against other groups of different races, religions, beliefs, genders, ethnicities, disabilities and sexual orientations.
In Indonesia, and especially in DKI Jakarta as a barometer, hate speech based on religious issues often causes divisions both within and between groups, as happened after the 2014 Presidential Election (Pilpres) and the 2017 DKI Jakarta Regional Head Election (Pilkada). According to recorded data, various incidents of collective violence in the name of religion, instigated by hate speech, have occurred in the very multicultural capital city of DKI Jakarta. In a religious society, religion is something that is very sensitive and untouchable. Not infrequently, horizontal conflicts originate from feelings of hatred among adherents of a religion who consider adherents of another religion, or of the same religion but a different sect, to have done something that insults the religion they believe to be true. Like two sides of a coin, religion on the one hand creates a common bond, both at the level of members of society and in social obligations. The spread of hatred on the basis of religion can be carried out in various forms, either by directly insulting certain religions, by spreading negative stigma against followers of certain religions, or by spreading negative issues concerning the religious activities of followers of certain religions. That is, the scope of the targets of hatred is very broad and flexible, so that it can develop in any form.
However, the functionalization of society in preventing the spread of hate speech on the basis of religion will not work effectively without proactive steps from the functionaries of religions to stand at the forefront of guarding dialogue between religious communities, so that it runs well and produces solutions. Furthermore, the State is responsible for creating educational programs for the community regarding the importance of fostering harmonious relationships based on the values of tolerance, and regarding the dangers that statements and actions containing blasphemy on the basis of religion pose for the continuation of religious harmony.
Hate speech contains dangerous characteristics and can be a threat to the pluralism of Indonesian society. There are several reasons that underlie the above, namely First, the act of spreading hate speech is carried out by people or groups who are intolerant of the existence of other groups. Second, hate speech contains a message that certain groups are sub-human citizens and therefore not only dangerous but also do not deserve equal treatment by the state. It can be said that hate speech is basically anti-free speech because hate speech demands restrictions on speech or speech that supports pluralism (pluralistic speech). Third, hate speech has a direct and indirect relationship with discrimination, hostility and violence. Fourth, in other words, hate speech exists precisely to narrow and prevent a person or group of people from having an opinion and expression so that it is contrary to the continuity of democracy.
Currently, the National Police Headquarters and its ranks at the Polda, Polres and Polsek levels have made efforts to combat hate speech. Tactical and strategic anticipation for the present must be carried out immediately and then continued with anticipatory steps for the future, starting with intelligence activities and cyber patrols against the perpetrators, producers and disseminators of hate speech. In principle, the role of intelligence is to carry out early detection and early warning. Intelligence looks for data and processes it into intelligence information that will be used by decision makers. The information obtained by intelligence serves threat detection. This information can be used as an early warning for users to make decisions, as well as to take actions that prevent these threats from materializing. In the context of overcoming hate speech, especially hate speech based on religion, intelligence seeks, processes and provides leaders with information that helps prevent hate speech and with information that misleads the public and has the potential to cause conflict. Intelligence assists the implementation of operations carried out by the police cyber units, with the main objective of minimizing losses and preparing what is needed for successful operations. Intelligence provides assistance in the form of operational planning at both the strategic and tactical-operational levels. It is equally important to provide direct assistance and early warning. Intelligence also provides an analysis of how the scenarios associated with hate speech might develop. Thus, in this function, intelligence conducts investigation and analysis of threats and takes steps to deal with these threats, which are directly used for enforcement. The information resulting from this intelligence activity is needed by the National Police so that the actions taken are right on target. The dangers of hate speech for democracy are beyond doubt. However, regulations that limit hate speech are still controversial because they are considered to limit freedom of speech, which is a fundamental aspect of democracy. This dilemma creates a situation without action, which allows hate speech in Indonesia to spread freely without any obstacles. This condition provides an opportunity for a number of hardline groups to divert the arena of struggle from war armed with bombs to war armed with words. As a result, hardline figures or media are free to carry out campaigns that attack other individuals or groups based on communal sentiment, including calls for violence and murder. Books and online media that place certain religious groups in a war situation with other religious groups are freely distributed. In addition, it cannot be denied that cases of sealing and acts of violence against a group or individual often begin with incitement. This incitement can come through pamphlets, news, speeches or broadcasts containing what is known as hate speech. Hate speech generally has the character of attacking groups or individuals who are considered opponents. One of the causes of the aforementioned series of acts of religious intolerance is the misperception among the community towards other sects and/or religions.
LITERATURE REVIEW
There is no generally accepted international definition of the concept of hate speech itself; several definitions run in parallel. In legal terms, according to UNESCO, hate speech tends to refer to "expressions that suggest incitement to harm based on a target identified with a particular social or demographic group". The Council of Europe (2012) understands hate speech as "all forms of expression that spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance, aggressive nationalism and ethnocentrism, discrimination and hostility against minority groups, migrants and people of immigrant origin." The Anti-Defamation League has developed a pyramid of hatred consisting of five levels. The first level is called bias. Bias consists of stereotypes, insensitive comments, disparaging jokes, non-inclusive language, justifying biases by seeking out like-minded people, and accepting negative information or filtering out positive information. The second level consists of individual acts of prejudice, such as intimidation, ridicule, name-calling, social avoidance, slurs and dehumanization. The third is discrimination, which includes economic, employment and educational discrimination. The fourth level is bias-motivated violence against individuals or communities, covering murder, rape, persecution, threats, arson, terrorism, vandalism and defamation. The last stage is genocide, the act or intention to deliberately and systematically destroy an entire people. Jubany and Roiha (2015) note that the first level of the pyramid of hatred is built precisely on stereotypes, insensitive comments and belittling jokes, while in the second stage hatred takes the form of intimidation, ridicule, name-calling and slurs. These elements appear predominantly, such as references to Ahok as a Chinese or kafir, to Anies as a former minister who is stupid, or to Agus as a candidate for puppet governor. In this context hate speech destroys the lives of, or defames, the people who are the targets of hatred, including their families.
Hate speech is also a part of marginalization where a person or group of people is described as bad. In this case, marginalization is carried out in several ways, namely: 1). Euphemism (refinement of meaning), is generally used to smooth out "badness". Euphemism is widely used by the media and is widely used to describe the actions of the dominant group towards the lower class of society, so that in many ways it can be deceiving, especially deceiving the people. 2). Dysphemism (roughening of language) is used to "make things worse". 3). Labeling is the use of words which are offensive to individuals, groups, or activities. 4). A stereotype is the equating of a word that exhibits negative or positive (generally negative) traits with a person, class, or set of actions. Here, stereotypes are representational practices that describe something with prejudice, negative connotations and are subjective in nature.
Today hate speech is something of a concern. People no longer think about ethics, or oriental customs and manners are starting to look sidelined. The manner of communicating that does not respect each other is very clear and erodes the values of politeness. Nowadays people are so spontaneously expressing and expressing what they feel, from subtle to frontal and uncontrollable. Now what often happens is that people can no longer hold back and attack other people without mercy. Even the emotions of internet users are more easily ignited by just reading words on social media which are then responded to with words and sentences that are also insulting, harassing and equally painful.
As stated earlier, hate speech is mostly spread through social media. One side of social media can promote closer friendship, online business platforms, and so on. The other side of social media is often the trigger for various problems, such as the rampant spread of hate speech, incitement, scorn, fighting against one another, which can lead to national division. Social media itself is a media platform that focuses on the existence of users who facilitate them in their activities and collaboration. Therefore, social media can be seen as an online medium (facilitator) that strengthens the relationship between users as well as a social bond.
Information or hate speech content (hatespeech) is produced and then used by internet users by sending the information to other users (message recipients). In this process, the sender and receiver can exchange roles in reverse. The message referred to here is all information or hate speech or untrue news that is disseminated through social media (Facebook, Twitter, Line, Instagram and so on) which is distributed according to the environment and time according to the user's wishes.
HATE SPEECH IN DIFFERENT PARTS OF THE WORLD
On an international scale, hate speech is a central concern of the researcher Cherian George, described in "Hate Spin: The Manufacture of Religious Offense and Its Threat to Democracy", which argues that expressions of religious intolerance are common. In Hungary and some other parts of Europe, for example, various groups express anti-Semitism overtly. The attitudes of pro-indigenous groups and extreme nationalists tend to resemble those of some radical Muslim immigrants: these radical groups call for hostility towards other minority religious groups while protesting against the bigotry that they themselves face. Meanwhile, rulers in Russia police blasphemy issues with enthusiasm. On the orders of one of the Russian Orthodox priests, the authorities disbanded an avant-garde opera that featured scenes of the crucifixion of Jesus Christ between the feet of a naked woman. Criminal charges were filed against the director and manager of the theater concerned. Although the two were eventually released, the theater manager was fired by the Russian Ministry of Culture. Similar tensions are found in other regions of the world. In one village in Egypt, five Christian students staged a humorous play mocking IS, the Islamic State. After their teacher's cell phone was stolen, video footage of their play was released and the homes of the Coptic Christian students were attacked. The teacher and his five students, all under 18, were charged with religious defamation. In Nigeria, the 2015 presidential election was marred by hate speech. Bishop David Oyedepo, one of Africa's richest pastors, publicly expressed his support for the incumbent Goodluck Jonathan, a Christian from Southern Nigeria, who was challenged by the Muslim candidate from the North, Muhammadu Buhari, who eventually emerged victorious. In a sermon before the general election, the bishop told his congregation that he had been mandated to fight Muslim jihadists: "If you catch someone who looks like them, kill them! Kill and knock his neck." In Brazil, an aggressive evangelical movement has led to increased intolerance against homosexuals and religious minorities, such as adherents of the local Candomblé faith. One of the victims was an 11-year-old girl who was hit by stones thrown by a group of men waving Bibles and screaming that people like her deserve to burn in hell. In the United States, the anti-Islam activist Pamela Geller denounced Muslims by organizing an art exhibition and cartoon contest of the Prophet Muhammad, claiming that these activities were carried out to defend freedom of expression after the Charlie Hebdo killings. Two people who were offended and attacked the exhibition with firearms were shot dead outside the exhibition site. Not stopping there, Geller tried to buy advertising space to feature cartoons of the Prophet Muhammad on the Washington D.C. public transportation system, prompting the authorities to ban all issue-based (not product-based) advertising for safety reasons.
In Myanmar, the anti-Muslim campaign led by radical Buddhist monks like Ashin Wirathu is starting to gravitate toward genocide. When the UN Secretary General, Ban Ki-Moon, called for better protection for the Rohingya minority, Myanmar lawmakers accused him of speaking out about ethnic groups that did not exist and insulting Myanmar's sovereignty.
The above incidents, which took place six months before and after the Charlie Hebdo attacks, suggest that the Paris massacre represents a global phenomenon. These cases share similar elements, including deep intolerance of diversity, contempt for identity, calls for intra-group mobilization, and censorship or oppression of certain groups. These are all basic ingredients of "hate speech," a category of extreme speech that has been the subject of study for decades. Hate speech can be defined as an insult to the identity of a group in order to oppress its members and reduce their rights. Anti-Semitic rhetoric by far-right groups in Europe falls into this category, as do Bishop Oyedepo's call to kill anyone who "looks like" a jihadist and Ashin Wirathu's claim that every monk should treat Muslims as he treats human excrement. Almost all countries around the world have laws regulating hate speech. An example is the UK, where the Public Order Act 1986 categorizes as a criminal act "threatening, insulting, and harassing both in words and in deeds against skin color, race, nationality or ethnicity". In Brazil, the constitution prohibits the emergence or development of negative propaganda against religion, race, suspicion between classes, and others.
In Turkey, a person can be sentenced to prison for one to three years for incitement that creates hatred and enmity on the basis of class, religion, race, sect, or region. In Germany, specific laws allow victims of the Holocaust to take legal action against anyone who denies that it occurred. In Canada, the Canadian Charter of Rights and Freedoms guarantees freedom of expression, but with certain conditions so that there is no incitement. Hate speech on the internet certainly requires in-depth discussion. Ethics in the online world need to be emphasized, considering that the online world has come to be regarded as important by the world community. However, more and more parties are misusing cyberspace to disseminate harmful content about matters such as ethnicity, religion and race. The dissemination of slanderous news on the Internet, for example, is a matter of concern.
POLICE SECURITY INTELLIGENCE
Intelligence is related to the initial sensing process, better known as the early warning system. Intelligence activities are an integral part of the early warning system, which enables policy makers to have foreknowledge (early awareness). The general task of intelligence is to collect, analyze and provide necessary information to policy makers so they can make the best decisions to achieve their goals. Meanwhile, the special duties of the intelligence services are: (1) providing analysis in fields relevant to national security; (2) providing early warning of a threatening crisis; (3) assisting the management of national and international crises by detecting the intentions of opposing parties or potential opponents; (4) providing information for national security planning needs; (5) protecting confidential information; and (6) conducting counter-intelligence operations (ISDPS: 2008).
In the context of carrying out intelligence tasks within the National Police, intelligence operations are classified into three universally applicable forms, namely investigation, security and raising (mobilization). Intelligence and security operations are carried out with the aim of obtaining information. Investigation is an effort to find and collect information material; security is an effort to protect the organization from becoming the target of the opponent; raising is an effort to create conditions and situations that benefit the organization. Therefore, the spectrum of Intelligence and Security activities in carrying out the Polri's duties is to precede, accompany and conclude any police operational activities carried out by the Police. Investigation within Intelligence and Security is an activity that is an integral part of the intelligence function to seek, collect and process data (information material) and to present information as a means of sensing and early warning for Polri leaders, in the fields of both police guidance and operations, so that the results are useful or necessary for carrying out the tasks of the National Police (Pusdik Intelkam, 2008). Investigations are carried out to find, explore, and collect data as completely as possible from various sources, both open and closed, through open and closed activities; the data is then processed into intelligence products, namely information that is ready to be used as a basis for decision making or action.
Security in the context of Intelligence is all efforts, jobs, intelligence activities aimed at supporting the implementation of the main tasks of the National Police which are carried out by implementing procedures, methods, techniques and tactics in the form of preventive and immediate, open or closed measures against all forms of threats. may occur in the form of deviations from norms to ensure security and order in life, and which can be expected to hamper the smooth implementation of national development originating from supra structure, technology, community members and the environment (Pusdik Intelkam, 2008). Security is efforts, steps and actions taken with the aim of safeguarding an environment and all its contents in order to create a safe and orderly atmosphere and to sterilize it from all forms of threats, disturbances, obstacles and challenges.
Raising in the context of Intelligence and Security is all efforts, jobs, activities and actions carried out in a planned and directed manner by intelligence facilities, especially to create and or change a condition in a certain area / opponent (both outside and inside the country), within a certain period of time. which is profitable, according to the will of the competent superior, to support the policies being pursued or to be pursued and to remove obstacles (Pusdik Intelkam, 2008). Raising means efforts, steps and activities carried out with the aim of fostering, directing and conditioning an environment with all its potential in order to create conducive conditions.
STRATEGIC INTELLIGENCE ANALYSIS
In contrast to strategic intelligence, military intelligence has the task of obtaining information related to weather, terrain and the enemy, so paradoxes are relatively rarely encountered in carrying out its duties. In carrying out those duties, military intelligence seeks to obtain the enemy's military doctrine, the characteristics of enemy commanders, the psychological situation within enemy forces, enemy plans, tactics and strategies, and so on. This information can be obtained by placing intelligence agents "behind enemy lines", by infiltrating intelligence agents into enemy forces, or from the results of "interrogations" of detained enemies (in the current reform era, news in the mass media often refers to the results of interrogations as "development results").
According to Richard K. Betts in his book Paradoxes of Strategic Intelligence, several essays address the paradoxes that occur in the assignments carried out by strategic intelligence agents, which are very different from the tasks carried out by military intelligence, justice intelligence or other intelligence units, because strategic intelligence is very complex, covering ideology, politics, socio-culture, economy, demography, biography, transportation, science and technology, law and defense in a country. Therefore, the paradoxes that arise are diverse.
In Indonesia, during the New Order era, strategic intelligence operations were widely used to support the political security policies of the New Order regime. The New Order state, which at that time had the ambition to eliminate communism-socialism in Indonesia after 1965, conflated the roles of strategic intelligence services and military intelligence. It is not surprising, then, that the practice of human rights violations increased during the Soeharto era. Military intelligence operations became a strong feature of serious human rights violations in specific cases, such as Tanjung Priok in 1984, Talangsari in 1989, the Mysterious Shootings of 1983, and the kidnapping and arrest of pro-democracy activists in 1997/1998. The information collected by intelligence agencies always concerns the intentions and capabilities of the enemy, both material and non-material. The enemy's material capabilities, such as its weapons, special skills and numbers, are very difficult to hide, whereas its non-material capabilities, such as the quality of its organization, its morale and its doctrine, are very difficult to evaluate properly. Meanwhile, the enemy's intentions often change at the last minute, and finding out about them is no easy task for intelligence. The enemy's intentions can usually be learned from memoirs, speeches, briefings and debriefings, among other sources. Knowing the enemy's capabilities is very important for intelligence because, as a matter of principle, "a country with weaker capabilities may nevertheless decide to go to war".
DISCUSSION
Early insights into intelligence were often linked to discussions of covert operations, undercover, infiltration, or wiretapping. Yet there are still many theories, practices, and dynamics in the term Intelligence itself. In Indonesia, intelligence development is manned by the BIN, TNI and Polri institutions. However, this assumption of the importance of intelligence has caused it to develop and has since been used by various other organizations, both government and private institutions. The existence of intelligence cannot be separated from its use which can be a solution for problems both within the organization and outside the organization. In these various organizations, intelligence becomes an organ whose function is to provide information needed for early warning or "early warning" and "early detection".
Strategic intelligence and its analysis are terms used to describe a particular class of problems and the practical process of analyzing them. Strategy, by definition, is directly related to the use of a plan that includes all the details needed to achieve the main objective. A central problem faced by organizations in charge of conflict handling is how well their intelligence organs predict future problems and plan strategies for them. Acting only after a conflict has occurred has a very bad impact, causing various material losses and casualties before the development of the conflict can be contained. For this reason, strategic intelligence analysis is needed by leaders in making decisions that are able to suppress problems, based on predictions and planning for future situations.
The word intelligence is generally used in several ways. However, apart from connoting intellect or cleverness, two meanings stand out in its definition:
1. Intelligence can be used to describe processes and activities, that is, doing intelligence work.
2. On the other hand, intelligence is also used to denote the final product of that process. In other words, we are talking about the development of the process as well as its outcome.
Strategic intelligence does not discuss individuals in detail but examines certain phenomena or problems, so that the knowledge obtained can be used as study material and information focused on solving problems and supporting decision making. Strategic intelligence is used to analyze and predict various changes and cycles of criminal acts, social behavior, and social vulnerability, so this strategic intelligence is needed by the government in identifying capabilities and seizing opportunities to combat and minimize conflict or crime itself. According to Richard Helms, analysis is at the core of intelligence work; this is where all the intelligence capabilities are combined to produce accurate information (Johnson and Raaf, 2008). The essence of intelligence is to reduce ambiguity for decision makers by providing understanding. The way to do so is to use a comprehensive intelligence analysis methodology, which combines the collaborative use of structured analytical techniques, creativity, critical thinking, and sense-making, to harness intuition and reduce bias.
After the intelligence analysis process was carried out through the Strategic Intelligence Applications mechanism, in accordance with Don McDowell's framework, various results were obtained, as follows. The image above presents the results of an intelligence analysis describing information and facts about hate speech in Indonesia through Strategic Intelligence Applications. First, from the point of view of foreign policy and the strategy development program, the Government has attempted to minimize the occurrence of hate speech with various regulations, laws and policies, but the impact has not yet been fully felt in reducing the occurrence of hate speech. Second, from the point of view of economic analysis, amid three serious economic threats in 2020, including the prediction of a sluggish economic situation due to the global Covid-19 pandemic, which caused many people to lose their jobs, producing and spreading hate speech and exploiting the resulting controversy has become the "job" of choice for some people today. Third, from the point of view of political analysis and the perspective of political interests, hate speech is actually produced to get rid of and overthrow political opponents.
From the point of view of compliance monitoring, widespread hate speech creates the potential for conflicts in society, and from the point of view of defense and security threats, it can be seen that if the phenomenon of hate speech is not anticipated early on, it has the potential to split the Republic of Indonesia. Therefore, the law enforcement planning that needs to be done is to mobilize the various stakeholders to the fullest in an effort to anticipate the phenomenon of hate speech so that it does not occur.
Looking at this growing phenomenon, the Indonesian nation and state are experiencing various challenges or even threats. Disorientation due to the influence of hate speech makes people lose direction in the life of the nation and state, as they become increasingly detached from the basic values that serve as their guidelines and views of life. The community experiences unsteadiness in its outlook on life, is easily swayed and is easily taken in by provocations. The mode of distortion is marked by the fading of social cohesiveness, such as a decrease in the sense of solidarity as fellow children of the nation. Social life becomes bland and arid, dry of the spirituality of social values, and society becomes temperamental, so that it easily commits various acts of violence or anarchism. The various challenges mentioned above, if not resolved immediately, will in their accumulation undermine the national resilience of the Indonesian nation and state. The role of Ditintelkam Polda Metro Jaya in preventing criminal acts of hate speech consists of the various efforts stated in the Chief of Police Circular (SE) Number SE/6/X/2015. The measures referred to are pre-emptive, preventive and repressive measures. The pre-emptive efforts take the form of counseling for students, santri, and various community groups. The preventive efforts take the form of supervision and patrols in cyberspace, known as cyber patrols. Finally, the repressive measures take the form of law enforcement against perpetrators who have been named suspects after going through inquiry and investigation.
During 2014-2018, the Wahid Foundation recorded 120 broadcasts of religious hatred involving non-state actors. Meanwhile, the National Police recorded 3,325 cases of hate speech, including religious hate speech, in 2017. This figure is up 44.99% from 2016, which saw 1,829 cases. In the capital, DKI Jakarta, alone, Polda Metro Jaya had investigated 443 reports related to cases of spreading hoax news by November 2020. A total of 14 of these reports have entered the investigation stage, with 10 people named as suspects.
As the national capital, DKI Jakarta is expected to perform better in various respects than other regions in Indonesia. In addition, Jakarta is the center of economy and finance not only for Indonesia, but also at the regional and international levels. This position as an economic center makes DKI Jakarta a magnet for many people from various regions and backgrounds who come to try their luck in Jakarta, so it is not surprising that the social life of the people of Jakarta serves as a barometer of how multicultural life in Indonesia looks.
It is important to recognize that strategic intelligence analysis must directly and clearly influence national level decision making. Intelligence analysis involves descriptions, explanations, evaluations, and estimates, and many of these efforts are aimed at helping governments learn about developments in the situation over time. Perhaps the main influence that strategic intelligence analysts have is not on the highest-level policies on the biggest issues of our time, but rather on working-level bureaucrats throughout the government who as a whole ensure that governments learn about threats and problems over time.
CONCLUSION
This study puts forward a number of recommendations so that the Police Intelligence and Security function can carry out its role optimally in handling and overcoming hate speech, especially in DKI Jakarta:
1. An understanding of hate speech and hate crimes, between which the boundary is very thin, needs to be absorbed by all Police officers, because they are responsible for security and public order in the country, especially law enforcement in accordance with Law No. 2 of 2002 concerning the National Police of the Republic of Indonesia; the National Police is at the forefront of handling and overcoming hate speech. | 8,241.8 | 2021-01-28T00:00:00.000 | [
"Computer Science"
] |
Universal Health Care and Enforced Beneficence
I examine Allen Buchanan’s arguments for enforced beneficence and express a number of worries concerning his attempt to justify coercive distributive policies that guarantee (basic) health care services for all citizens. The central objection questions whether, given Buchanan’s own stipulation of universally-instantiated attitudes of moral beneficence amongst all society members, his arguments from, first, the coordination problem and, second, the assurance problem successfully establish a justification of enforced contribution. I defend alternative, non-coercive, responses to the aforementioned problems and show that a particular kind of institution (an “information service”) provides all citizens with the sufficient and reliable epistemic resources so that they can effectively help the sick and needy. I notice that Buchanan’s difficulties with justifying coercion can be regarded as providing indirect support for the view that developing a justice-based conception of moral health care rights remains, pace Buchanan, an important task to be completed.
I examine Buchanan's arguments for enforced beneficence and express a number of worries concerning his attempt to justify coercive policies. The central objection questions whether, given his own stipulation of universallyinstantiated attitudes of moral beneficence amongst all society members, Buchanan's arguments from, first, the coordination problem and from, second, the assurance problem successfully establish a justification of coercive contribution. I defend alternative, non-coercive, responses to the aforementioned problems and show that a collective agent/institution might well provide all citizens with sufficient information concerning the question of how to effectively help the sick and needy. My criticism does not deny that Buchanan has identified a set of important collective action problems regarding the universal provision of health care. However, given Buchanan's own premises, his argument falls short of vindicating his ambitious conclusion concerning enforcement as the remedy for these problems. Coercive and non-voluntary transfers of resources are simply not necessary in a society in which moral attitudes of beneficence are shared by all and when this fact is acknowledged by all members of the stipulated society. I notice that Buchanan's failure to justify coercion can be regarded as providing some indirect support in favor of the claim that developing a workable conception of moral health care rights remains, pace Buchanan, an important task to be completed (but not taken up in this paper).
Universal Basic Health Care Without Moral Rights
The diversity of ethical issues involved in debates about health care policies has grown in both academic and non-academic discourse. Technological innovations in medical science, and the related rising costs of medical practice, have led to an intensified scrutiny of the notion of health care rights and of the question of what "equity" with regard to access to health care services amounts to in scientifically-advanced capitalist societies. The problem that is central to this paper concerns the efficacy of collective (and to-be-coordinated) acts of providing health care for all citizens, and, most fundamentally, how enforced contributing to such policies can be justified on grounds of this desired efficacy even if positive moral rights to health care are called into question.
The idea that society (and its citizens) have an enforceable obligation to contribute to public health care endeavors, even if the sick and needy do not have a corresponding moral right to health care, lies at the heart of Buchanan's work; work that has been influential partly because it rightly reminds us not to take moral rights as an unquestionable given. Buchanan does not only call into question liberal and social democratic conceptions of universal welfare rights; Buchanan also criticizes conservative and libertarian strategies of categorically rejecting any legally-guaranteed welfare provisions in the name of supposedly absolute property rights and liberties. The latter dimension of his writings is the central topic of the following reflections.
In his "The Right to a Decent Minimum of Health Care" Buchanan (1984, pp. 59-66) criticizes three prominent philosophical (liberal) proposals and shows that they cannot establish a positive right to health care (utilitarianism, Rawls' "Justice as Fairness," and Norman Daniels' (1985) attempt to derive a right to health care from a Rawlsian principle of fair equality of opportunity). Its unsuccessfulness notwithstanding, Buchanan claims that we should nevertheless be careful when we consider the implications of this three-fold failure. Just because all these attempts to justify a positive right to health care fail, this failure does not lead, by default, to the libertarian triumph consisting in the successful refutation of any coercive arrangements designed to secure health care. In his discussion of rights' enforceability Buchanan (1984, pp. 56-7) notes: Indeed, the surprising absence of attempts to justify a coercively backed decent minimum policy by arguments that do not aim at establishing a universal right suggests the following hypothesis: advocates of a coercively backed decent minimum have operated on the assumption that such a policy must be based on a universal right to a decent minimum. The chief aim of this article is to show that this assumption is false. 2 2 One issue that I can only address in passing is the important question of why Buchanan's challenges of coordination and mutual assurance (extensively discussed below) do not support a more generous universal healthcare system, but merely the mentioned "decent minimum of health care." (I am indebted to a referee for this journal for raising this question.) Recently, Buchanan has directly addressed this issue by stating that "obligations of beneficence are traditionally understood to be limited by the proviso that rendering Even if all rights-based theories fail, Buchanan continues, there is an alternative (pluralistic) justification of coerced contribution to health care provision available. Buchanan begins his positive case by noticing that politicians and philosophers who are attracted to rights language in health care debates attribute significant importance to the object of that right, viz. a secured minimum of medical care for all. Why should they insist on this project being realized if and only if we are able to tag the label of "rights" on the respective policy? Buchanan (1984, p. 66;my emphasis) suggests that they should not: "My suggestion is that the combined weight of arguments [none of which is based on antecedent moral rights] is sufficient to do the work of an alleged universal right to a decent minimum of health care." Buchanan is quick in assuming that his beneficence-based account will be able "to do all the work" that a justice-and rights-based account can do, without running into the aforementioned problems that come with moral rights. Even if we restrict ourselves to a consequentialist perspective, that focuses on the outcomes of the two competing justificatory approaches, Buchanan's claim seems overly optimistic for a variety of reasons (on top of those that I spell out in the next sections). Especially with regard to the issue of the subjective experience of social and economic security, a universal and publicly guaranteed positive entitlement in terms of fundamental moral rights appears to contribute in ways that cannot be fully accounted for when society leaves everything to charitable impulses. As we will clarify in a moment, Buchanan (1984, p. 
57) defends his charity-based approach by claiming that "[t]o the morally virtuous person the imperatives of charity may be as urgent as those of justice." First of all, this claim may be unconvincing to those who do not already believe that Buchanan's envisioned moral community can be realized and consequently think that meeting the basic health care needs of all must be backed by an irreducible appeal to rights and social justice. (Kantians, like Ripstein (2009) for example, have good arguments for such claims.) In addition, even if aid to the needy is not to be unduly burdensome to the benefactor. Consequently, the enforced beneficence approach avoids objections to which more demanding egalitarian concepts of the right to health care are vulnerable" (Buchanan 2009, p. 74). I do not find this quick clarification fully satisfactory but a detailed analysis of this aspect of Buchanan's view has to be postponed for another occasion. all citizens were morally virtuous persons (but, in principle, retain the freedom to change this attitude), and gave enough to meet the health care needs of all, it may nevertheless be a constitutive feature of a guaranteed basic minimum that the beneficiaries know that their health care needs are taken care of as a matter of justice-based entitlements and rights. Of course, this first set of critical remarks is not a conclusive argument in support of such a moral right's existence; Buchanan will probably (and rightly) highlight this point. However, it calls into question Buchanan's optimistic initial assumption that beneficence can do all the work that rights-based approaches do. I put aside these worries. Having said that, especially the second worry from the good of rights-based public guarantees leads us to the core of Buchanan's ambitious defense of enforced beneficence. To this central innovation we now turn.
Buchanan's Pluralistic Justification of Coerced Contributing
Buchanan's strategy for justifying an enforceable principle guaranteeing a decent minimum of health care for everyone is pluralistic. It establishes citizens' access to basic medical services in the framework of four independent considerations. The combined weight of these considerations is supposed to provide an argument justifying a centralized agent using coercion in order to meet the moral obligation of beneficence (as distinct from an obligation of justice) to help those in need of medical assistance and services. A crucial element of Buchanan's strategy is to present the obligation of beneficence in question as a collective one. Buchanan (1984, p. 70) justifies this shift from individual to collective beneficence by pointing to the significant financial and organizational efforts that are necessary to realize the goal of providing minimal medical services for all members of society. 3 A decent basic minimum of health care is such a collective good. 4 These goods come with a particular set of problems (other examples of such collective goods are national defense, environmental protection, etc.) that must be confronted by societies in order to successfully generate them.
[Footnote 3: Cf. Buchanan (1991, pp. 173-177).]
[Footnote 4: I am indebted to a referee for highlighting that the good in question is not properly referred to as a traditional "public good." Hence, I have replaced the language of public goods in many contexts. The good in question is health care that private and distinct individuals (the patients) enjoy. However, the crucial point is that we are considering the option of realizing and guaranteeing these goods in a public manner. "Publicly-realized-personal-goods" is probably the best label for what is at stake.]
Before dedicating the remainder of this paper to these problems let me briefly present the other three considerations that Buchanan's (1984, pp. 66-68) pluralistic strategy employs. The first argument focuses on "special rights" and is concerned with the rectification of past and present injustices (e.g. health problems related to discriminatory policies), the requirements of compensation (e.g. health problems related to a third party's negligence), and entitlements to health care based on extraordinary sacrifices that citizens provide for their society (e.g. impairments due to compulsory military service).
The second consideration argues that a lack of certain kinds of collectively provided (and enforced) basic health care can be regarded as the violation of a specific negative right. This argument provides a justification for health care measures such as public sanitation and immunization programs and can be summarized under the heading of "harm prevention". It is, for example, in everyone's interest that certain infectious diseases are controlled by reliable public agents and services; collectively implementing (and financing) immunization programs (also, and especially, 5 for those who could not otherwise afford the vaccine) is in everyone's interest. 6 Thirdly, there are a number of prudential arguments in support of a guaranteed decent minimum of health care.
These arguments emphasize a healthy population's collective benefits, such as a more productive labor force and fitter soldiers.
Buchanan is confident that these three arguments present strong support for the claim that every citizen should have access to a decent minimum of health care. He also reminds us that the three arguments do without any appeal to universal (as opposed to special) positive health care rights. In addition, and this is the major difference from Daniels' and Rawlsian approaches, there is no need for a comprehensive theory of justice in order to support this entitlement to basic medical care.
[Footnote 5: I say "especially" because historically it was the worst off members of societies that had been exposed to diseases that are now controlled by comprehensive preventive measures. Especially in today's circumstances (globalization and urbanization), the better off can hardly argue at this point that they are not required to contribute to vaccination programs because they are free to avoid close contact with the poor.]
[Footnote 6: It is worth noting that this second argument by Buchanan challenges radical libertarianism in two ways: not only does it ask citizens to contribute at least some amount of resources to the collective sanitation and/or immunization program; in addition, Buchanan's argument seems to imply that a collective agent is justified in forcing citizens to undergo this immunization, if this is necessary to prevent harm not merely to the person in question but to those around her. I cannot pursue these issues here. Cf. Francis (2005).]
The major component of Buchanan's pluralistic strategy is still underdeveloped at this stage of the argument, though. After all, Buchanan claims that a health care regime based on beneficence and charity (as opposed to justice and positive moral rights) can do all the work that a coercive (welfare) state could do, viz. guarantee a decent minimum of health care for each individual member of society. So far Buchanan has established the justification for such a legal entitlement either for some subgroups of the citizenry only (e.g. former military staff), or for the populace at large but merely with regard to an extremely minimal subset of essential medical services (e.g. vaccinations against some contagious diseases). I now turn to Buchanan's argument for enforced beneficence, which is supposed to solve this problem of providing the resources necessary for such a guarantee without thereby being committed to a universal moral right to health care. This is the fourth and final argument of Buchanan's pluralistic strategy. Buchanan spends by far the most time and effort on defending the argument from enforced beneficence; it is crucial for the success of his pluralistic strategy.
Buchanan's Argument for Enforced Beneficence
Buchanan claims that reasonable secular and religious moral outlooks accept the existence of a moral duty of beneficence to help fellow humans in dire need. According to Buchanan, even (most) libertarians are committed to this duty. They can accept it because the duty does not seem to imply any positive moral rights on the part of the beneficiaries. With regard to the issue of providing a decent minimum of health care, however, discharging this obligation to help those in dire need consists primarily in contributing to the collective endeavor of providing basic medical services for those who cannot otherwise afford them.
As mentioned above, this assumption is crucial. The most effective way to discharge the obligation in question, according to Buchanan, is assumed to be a collective regime, as opposed to individual, small-scale initiatives.
With regard to this collective effort, Buchanan asks us to envision two scenarios. First, a situation in which all agents are actually morally motivated to discharge their duty of beneficence to help those in need effectively. Secondly, we imagine a situation in which an individual benefactor cannot be sure that all others are equally motivated by this duty of beneficence as she is. Buchanan's conclusion is that in both scenarios, beneficent and rational individuals have a decisive incentive not to contribute to the collective effort of providing a decent minimum to health care. However, acting on this rational incentive results in the most efficient policy not getting realized. This is so despite the fact that the agent is perfectly beneficent and consequently wants to help as effectively as possible.
The Coordination Problem
The first scenario envisions a society of individuals, all of whom are motivated to act in accordance with their duty of beneficence to help those who cannot purchase a decent level of health care. It is exactly because of this universally-present genuinely beneficent motive that the agents in question want to discharge this duty effectively, i.e., they want their individual contributions to improve the situation of the poor to the greatest possible extent. Buchanan assumes that this motivation expresses itself in the fact that each beneficent agent maintains a willingness to provide (a portion of) the means necessary for the successful implementation of a decent minimum of health care policy. 7 This is so because the agent is aware of the significant financial resources that are needed in order to achieve this aim. Buchanan presumes, for the sake of argument, that the agent in question is, in principle, willing to direct her individual contribution to this collective project because it promises to be the most effective way to discharge her endorsed duty of beneficence to help those in dire need.
There is one inescapable problem, though, according to Buchanan. Since the benefactor is obligated (and willing) to help effectively, and since her individual contribution is going to be marginal (in comparison to the large number of individual contributions needed to realize the collective aim), she will conclude (for reasons detailed in the next paragraph) that the most rational thing to do is not to contribute to the collective endeavor. To the individual contributor, this conclusion appears to be the most rational one, exactly because her contribution will very probably help those in need less than maximally. The rational and beneficent agent will, therefore, rather direct her individual contribution to small-scale (but, overall, less effective) projects that aim to alleviate the health problems of those who cannot do so themselves, for example, on the level of local health care initiatives. Consequently, and assuming that all other beneficent agents are deliberating in a similar fashion, the most effective way to discharge the obligation in question will not be realized. Why does this paradox result? After all, are we not assuming that all agents are motivated by proper moral motives and genuinely want to help the poor as effectively as possible (and all society members know this about one another)? According to Buchanan, this first problem amounts to a variety of the "coordination problem".
[Footnote 7: If we assume that our duty of beneficence must be discharged in an impartial manner and in accordance with some minimally egalitarian intuitions, then a successful decent minimum policy gets all beneficiaries above the threshold of access to basic medical care. Once this goal is realized, the specific duty of beneficence that Buchanan is concerned with "disappears", so to speak.]
The dilemma that each potential contributor faces is that either her contribution unnecessarily adds to the good of universal health care because enough others have already contributed or she gives her resources when not enough others contribute. 8 Buchanan (1984, p. 70) concludes: "In either case, my contribution will be wasted. In other words, granted the small scale of the investment required and the virtually negligible size of my own contribution, I can disregard the minute possibility that my contribution might make the difference between success and failure." In both cases the beneficent agent's contribution is wasted and would have been of more effective (and more beneficial) use if it had been spent on individual and small scale projects, despite the fact that these latter projects turn out less effective overall in comparison to the collectively-provided decent minimum policy. Again, since the most rational thing to do appears to be not to contribute to the collective policy it will not be realized despite its acknowledged superior efficacy (the very fact that would make beneficent agents contribute to it in the first place). The next step is the crucial one: Buchanan concludes that there is only one way to resolve this problem and to ensure that all citizens contribute to the decent minimum regime in a well-coordinated manner. Buchanan (1984, p. 70; my emphasis) says: "But if everyone, or even many people, reason in this way, then what we each recognize as the most effective form of beneficence will not come about. Enforcement of a principle requiring contributions to ensuring a decent minimum is needed." It is at this point that one wonders if the justification of enforced beneficence is in fact successfully established by Buchanan's previous argument.
Keep in mind that we are working under the assumption that all individuals are motivated by the stringency and force of an accepted moral obligation. Conflicts between individual self-interest on the one hand and duties of beneficence on the other are not the problem afflicting the scenario discussed. Given this reliable and society-wide presence of beneficent motives it appears ad hoc to claim that coercive mechanisms are needed and justified in order to overcome the coordination problem. My main objection to Buchanan's first argument (and as it will turn out, also to his second) for enforced beneficence is that it does not establish a clear link between the coordination problem and any sufficient justification of collectively-imposed coercion. Given the current discussion of the coordination problem, it is difficult to see why other (non-coercive) mechanisms are incapable of overcoming this particular problem. The problem in question is "information based", as opposed to being a challenge that requires coercion for its solution. Let me clarify and illustrate this objection to Buchanan's first attempt to vindicate the enforcement of contributions to collective health care endeavors.
I will then reply to objections and potential defenses of Buchanan.
Consider this alternative "mechanism." The state, the government, or some private institution may provide a service that solves the coordination problem without using coercion by determining the amount of each individual contribution to the collective project of guaranteeing the decent minimum of health care (again, a good that all beneficent individuals are presumed to want to realize together). In returning to Buchanan's discussion of the rational incentive that each of these agents has for not contributing, we can make my proposal clearer. The beneficent individual that we are asked to imagine is in a state of epistemic, rather than a motivational and normative "uncertainty." The task of my suggested institution, let us call it the "information service," would be to remove this epistemic uncertainty and to determine each individual's contribution that is necessary to achieve the publicly-guaranteed good of universal access to health care. Notice that the ultimate step of actually transferring the contribution, as determined by the information service, is then nothing anybody needs to be coerced to in a scenario of universally-maintained attitudes of beneficence. After all, thanks to the imagined highly-reliable information service, all individuals know that the contribution they give won't be wasted. They voluntarily give their contribution that is needed to realize the most effective policy for helping the sick, i.e., the collective decent minimum policy, as envisioned by Buchanan.
Two objections to my proposal emerge immediately. Firstly, it appears unrealistic that the envisioned institution can determine the exact amount of each individual contribution reliably enough to overcome the coordination problem. After all, in order to determine the size of the individual contributions, one has to know the exact overall budget necessary to realize the decent minimum policy. Considering the unpredictability of advancements in medicine and the similarly unpredictable rise or decline of the number of worse off citizens who must be covered by the decent minimum policy (and the poor's diverse medical conditions), it seems rather unlikely that the information service can confidently guarantee each individual agent that her contribution is precisely as large as is needed for it not to be wasted.
The first thing to notice in response is that this same practical problem seems to apply to Buchanan's policy of enforced beneficence: a real-world coercive scheme must likewise determine the overall budget and the size of the required individual contributions. Benefactors who act from a sincere duty of beneficence and have access to the accurate (and trustworthy!) information that is used by the centralized agency to determine the amount of needed contributions do not have to be coerced to contribute to begin with.
Secondly, it is an empirical question how my information-based approach can avoid the problem of recommending slightly too little or slightly too much in voluntary contributions. One expects this empirical issue to be settled by continuous political and practical processes. 9 In addition, one might propose that the beneficent and non-coercive contributions determined by the information service are deliberately set slightly above the expected budget that is needed to ensure the decent minimum for all. This seems justified because of the specific features of the collective good in question, i.e., the unpredictable nature of health-care-related public policy goals (mentioned above). The resulting surplus will then not count as "wasted." It can be used in the following year(s) and/or for other basic minimum projects that have an indirect and long-term impact on the society's overall health situation (such as dietary initiatives in public schools, etc.).
Moreover, one can argue that it is part of the idea of a guaranteed decent minimum, briefly mentioned in section one, that there are at least some additional reserves available in case the information service was too conservative in its projections, something that will always remain a possibility, regardless of how well the information institution is set up. According to this rejoinder, contributing to an already sufficiently-financed decent minimum program turns out not to be wasteful after all; on the contrary, it contributes to this minimum being guaranteed for all potential patients in the face of the inherently unpredictable variables characterizing the complex project in question.
[Footnote 9: Another issue is that the "right" amount of individual contributions is dependent on a particular society's conception of the decent health care minimum. Determining what services count as basic enough and what medical resources are to be dedicated to the implementation of these services (e.g., decent technology vs. the best technology available regarding cancer treatment) is not a value-neutral and apolitical project. I assume for the purposes of the discussion in the text that societies have settled that issue, that is, the decent minimum policy is reasonably codified and limited and its administrators have a precise idea of what this policy is going to cost. Many have stressed the point that continuous democratic deliberation is called for in order to settle these complex questions in real-world circumstances in a legitimate way. Cf. Gutmann (1982, pp. 556-8) and Bole (1991, pp. 10-7).]
A different way to formulate an objection to the suggested non-coercive solution to the coordination problem highlights that Buchanan repeatedly emphasizes the, seemingly unavoidable, negligible size of the individual contributions. 10 Why should the information service make a difference regarding this particular dimension, given that any morally-motivated agent, faced with the negligible impact that her contribution potentially has, will again judge that her contribution does more good if transferred to small scale and local initiatives and policies? We seem to run into the initial problem, the presence of the powerful information service notwithstanding.
As mentioned above, it must be acknowledged that this version of the objection also calls into question the applicability of the information-service solution to real-world circumstances, with all their political and empirical complications. The service would have to provide an enormously precise, detailed, and (morally problematic) intimate set of information, determining exactly what the individual contributions would have to be in order to realize the publicly-funded health care infrastructure. The information has to be that detailed exactly in order to make sure that the individual contributions in question are never negligible. If (and yes, it remains a big "if") such a service is delivered in this reliable and unambiguous manner, no contributor would ever be justified in judging her potential contribution a negligible one. The information service would make sure that the contribution is exactly as it ought to be, relative to the goal of efficaciously guaranteeing the target endowment needed for the most effective strategy of helping those in need. 11 Again, all this calls into question the practical feasibility of my proposal. At the same time, this vagueness is permissible at this point in the development of the argument, if one keeps in mind the following. At first, all this had looked like a severe blow to the information service proposal (and this problem has readily been acknowledged). However, the unpredictable and "unplannable" nature of individual human health works even more decisively in the other direction, rendering problematic a contributor's personal judgment not to contribute to the public project due to some vague worry regarding the potential negligibility of her resource transfer. Realistically, the (real-world) information service will provide a certain, reasonably broad, spectrum of projected overall costs, taking the unpredictability of individual health into consideration (for, e.g., a certain number of fiscal years, based on assumptions regarding population development, life expectancy, etc.). Hitting one of the many reasonable targets within that range will then be considered a satisfactory outcome. Given the reliable availability of the information (service) regarding the spectrum and range of efficacious outcomes, individual and private judgments in favor of non-contribution, based on the negligibility of one's contribution, are even harder to justify. 12
[Footnote 10: I am indebted to a referee for this journal for framing the objection in question in terms of the negligible size of the individual benefits.]
[Footnote 11: There is an interesting further complication arising from this response (that I put aside in the paper). Does my proposal, counterintuitively, lead to the discrediting of any additional (voluntary) contributions into the public system, because these unforeseen contributions would then invalidate the information that the information service attempts to generate? I postpone this further complication until a later opportunity. However, the next paragraph in the text hints at the likely response, defending the moral praiseworthiness of contributors who, in a supererogatory spirit, opt to transfer even more towards the publicly-financed health care regime than would be required to precisely guarantee its realization.]
The Assurance Problem
Buchanan's second scenario appears more promising with regard to justifying enforced beneficence. We are still deliberating whether to contribute or not from the standpoint of a beneficent agent, i.e., an agent who accepts her duty of charity to help the poor with their health care needs and, a fortiori, wants to do so as effectively as possible. In contrast to the first scenario, however, the beneficent agent does now seem to have a new incentive not to contribute, namely that others might be prone to free-ride. She does not know whether or not enough others will actually contribute. This time the reason why these others might fail to contribute is not the inherent paradox that comes with the universal presence of the beneficent motive to help the poor as effectively as possible but, supposedly, it is the possibility of this very motive getting overpowered in others' practical deliberation by self-interest.
Unfortunately, Buchanan does not consistently motivate this shift towards morally-deficient agents in the setup of his thought experiments. I therefore proceed in two steps in my discussion of the assurance variety of the enforcement argument: First, I grant for a brief moment that we are now considering an imagined society in which at least some agents might defect from contributing to the collective health care policy because they end up being overpowered by self-interested, non-beneficent motives. Second, however, I highlight that this very scenario is not consistent with most of the passages in Buchanan's writings, i.e., with assumptions that remain committed to the idea that we are considering a society of universally shared attitudes and motives of beneficence.
It turns out that regardless of which of the two readings we endorse, the remainder of the argument from the assurance problem is relevantly similar to Buchanan's argument from lack of coordination discussed above and, therefore, open to an epistemic (and non-coercive) resolution. Buchanan begins the relevant argument with the reasonable claim that without the assurance that enough others actually contribute, the most rational thing to do, again, appears to be to direct one's individual contributions to (suboptimal) projects of dispersed and small-scale health care endeavors. 13 According to Buchanan, the assurance worry suggests that in order to achieve the optimal and most effective outcome, an agency must be established that enforces contributions in order to disarm the beneficent agents' (rationally-warranted) incentive not to contribute. Once the beneficent agents rest assured that their morally-deficient, i.e., narrowly self-interested, co-citizens are forced to contribute, the former will regard their contribution as not being wasted and will contribute their share.
Granting for a moment this way of setting up the assurance problem in the context of Buchanan's wider argument, a first observation concerning his reflections is that they seem to assume that each individual's contribution is strictly fixed. In particular, Buchanan appears to presume that beneficent individuals are not willing to contribute even slightly more than their "fair and equal share" of the overall sum that would have been sufficient to realize the decent minimum regime if all others had done the same right thing. If we imagine a small-scale society of ten equally well off members and stipulate that a universal decent minimum policy costs one hundred dollars, then each individual's fair and equal share amounts to ten dollars. If one out of the ten is overpowered by self-interested motives and prefers to keep these ten dollars, then the other nine would waste nine times ten dollars, as long as they remain unwilling to contribute more than the fixed amount of ten dollars. Does this fact by itself establish Buchanan's conclusion, according to which the society in question is justified in forcing the one to be "beneficent" and to contribute her fair and equal share? Not necessarily, it seems to me. One should stress at this point the potentially problematic aspect of the assumption mentioned above, viz., that the potential benefactors are depicted as inflexible (and unwilling!) when it comes to giving even slightly more than their fair share. In the case of our model society, the fact that the one imperfectly moral agent fails to give means that the beneficent nine are asked for individual contributions of a bit more than eleven dollars each (assuming that the medical services comprised by the decent minimum remain available to all, including the one defecting member). If, and Buchanan is committed to this assumption, the nine others are ready to act from genuinely beneficent motives (as opposed to, for example, justice-based or contractarian motives of reciprocity), then they will not pull out of the collective endeavor merely because a small minority of morally deficient individuals refuses to contribute its fair share, at least as long as free riding remains rare (and it has to, according to Buchanan's own presentation of the assurance problem, even on the first of my two readings). And the above-introduced information service will again stand ready to alleviate the deeper epistemic worry, pertaining to determining the precise level of contributions and the number of active contributors that are now necessary in the face of non-ideal levels of beneficent compliance. Crucially, coercion remains unwarranted, even if we currently go along with Buchanan and accept that the envisioned hypothetical society consists of both benevolent and self-interested agents.
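To make the arithmetic of the ten-member example explicit (a minimal sketch; the symbols n, C, and k are mine, not Buchanan's): with n members, a total cost C, and k members who withhold their share, each remaining contributor must give

\[
\frac{C}{n-k} \;=\; \frac{\$100}{10-1} \;\approx\; \$11.11,
\]

i.e., "a bit more than eleven dollars" once a single member of the ten keeps her ten-dollar share.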
A qualification such as "as long as a certain point of widespread refusal is not reached" is, of course, a critical feature of my proposal. I admit that. If a large number of individuals is constantly overpowered by self-interested motives and stops being motivated to meet their obligations of beneficence, then the decent minimum policy breaks down. But notice that this would be a rather unsurprising outcome and not original news at all (and it is partly for this reason that I believe that this first interpretation of Buchanan's assurance problem cannot be what he had in mind). In line with Buchanan's other assumptions, mentioned above, such a society would be deeply morally deficient, a point granted by all major philosophical and religious doctrines and playing a major role in Buchanan's argument in the first place (further discussed below). Under stable moral conditions of widespread beneficent attitudes, agents are not going to be obsessed with ruling out every possible occurrence of the free rider problem as a necessary precondition for them to contribute to the project of realizing the collectively-pursued good of decent health care for all.
Be that as it may, my main response to Buchanan's treatment of the assurance problem is more fundamental, namely that his overall presentation of the collective action problems appears to rule out the assurance problem in its traditional formulation from the get go. When Buchanan describes the society that we are supposed to imagine for the sake of his arguments, its central features contradict those versions of the assurance problem that would be required to vindicate coercion and enforcement (as opposed to establishing the information service).
Just consider that Buchanan (1985, p. 74) explicitly states that in regard to both (!) collective action problems he "proceeds on the assumption that the individuals in question are motivated by a desire to be charitable, not simply by a desire that the needy be provided for (by someone or other)." But given this assumption that applies to his argument for enforced beneficence as a whole, Buchanan's version of the assurance problem ultimately collapses into the epistemic and knowledge challenge, initially introduced in response to the coordination problem above. Hence, given Buchanan's own assumptions about the motivational states of the individuals populating his envisioned libertarian society, even the "assurance problem" falls short of constituting the kind of challenge that we need to start vindicating coercively-enforced beneficence.
The Three Brothers' Problem and Buchanan's Assumptions
We can see this more clearly when we discuss another, closely-related, objection to my discussion of the assurance problem, namely the so-called "three brothers' problem" in evolutionary theory; also discussed in the economics of altruism. 14 This scenario poses a challenge to my critique of Buchanan because it describes another situation in which altruistic individuals seem to be rationally compelled not to engage in an action that they all acknowledge as necessary for generating a universally-desired collective outcome. We are supposed to imagine three brothers, one of whom (the "recipient") is in some dire emergency. Let us say he fell into a pond, can't swim, and would drown if no one helped him. Each of his two brothers (the two potential "donors") is equally far removed from him, can swim, but is able to rescue him only by incurring some non-negligible risk to his own life.
Evolutionary theorists highlight that even if each of the two donor brothers is basically altruistic and acknowledges that his genetic endowment (shared by the recipient brother in need) will be maximally promoted only when both he and the recipient survive, this attitude alone seems to fail to get the rescuing action going in the case at hand. The result might well be, the presentation of the puzzle concludes, that the brother in need drowns in the pond, leaving only two instead of the maximum three relatives alive and in a position to pass on their genes. 14 Obviously, this constitutes an outcome that all three had rationally acknowledged to be less advantageous than the one that would have been feasible.
[Footnote 14: I am indebted to an anonymous referee for this journal for drawing my attention to the three brothers' problem and to the literature discussing it. Two varieties of the problem are discussed in two important papers by Eshel and Motro (1988a; 1988b). My discussion focuses on the presentation of the problem in Cohen and Motro (1990, p. 56).]
Why do the three brothers end up with this sub-optimal outcome? The answer is that from the narrow perspective of each donor's individual rationality, the best outcome is to stand by and let the other brother jump into the pond, let him incur the risk of drowning, and have all three brothers survive (who then have their shared genetic endowment promulgated to the maximum extent). The evolutionary biologists' take-home lesson is that this scenario seems always (!) to favor the less altruistic brother in terms of his "inclusive fitness" over relatives who end up engaging in the risky, life-threatening rescuing action. In addition to having his brother rescued, the less altruistic brother enjoys the additional evolutionary benefit of not having his personal genetic endowment endangered by any risky rescue. In summary, Cohen and Motro (1990, p. 56) state, "this [the rescuing brother's decision in the face of all other potential donors remaining passive] entails an even greater increase in the inclusive fitness of the relatives which decided not to offer their help. It seems, therefore, that if there is any altruistic relative in the vicinity natural selection will always favour the other selfish relatives." At first sight, the three brothers' problem appears to support Buchanan (and undermine my information-focused proposal) because it presents at least one case in which some coercion and enforcement (not mere information services) seem unavoidably necessary to bring about the optimum outcome, in order to overcome the impact that evolutionary forces have on kin selection in the presence of more than one relative. Coercing one of the brothers to rescue the one in dire straits seems necessary in order to realize the outcome of one's kin's genetic endowment being maximally spread. As in old-fashioned prisoner's dilemmas, without any enforcement mechanism a merely suboptimal collective outcome gets produced when all agents act in accordance with what seems the most rational thing to do (from the individual perspective), i.e., wait for others to take the risk involved in rescuing the brother. This aspect of the three brothers' problem parallels Buchanan's description of what is happening in the case of altruistic individuals failing to provide a shared good that they all deem worth realizing but, due to one or the other collective action problem, are only capable of realizing if an external enforcement mechanism compels them to contribute.
In response to the three brothers' problem, and in concluding my investigation, let me apply once more the crucial distinction between scenarios in which assurance is absent because of some collective knowledge deficit and scenarios in which it is absent because the motivational states of the agents involved are unpredictable, unreliable, and unstable. Recall my above reflections on Buchanan's assurance problem as well as on his coordination challenge: with regard to both, Buchanan (1985, p. 73) presumed that we are dealing with "a society of morally upright, altruistic libertarians," i.e., a group of individuals with respect to which "the barrier to successful collective action is [neither] egoism [n]or self-interest in any significant sense." My central proposal has been that in these and in many other passages, Buchanan commits himself to a crucial and consequential presumption. If his argument for enforced beneficence includes this presumption of universal altruism, it undermines his argument for coercion and centralized enforcement mechanisms. The "morally upright libertarians" in question need institutions that overcome the distinctively epistemic deficits characterizing their predicament. Once a planning and knowledge agency provides the exact pieces of information regarding the empirical facts of what each person has to contribute in order to hit the target of effective health care provision for all, the universally-shared and acknowledged altruistic motives take care of the rest. No coercion and no enforced contributing enter the picture at all. Hence, no argument is even necessary to justify such practices to begin with.
Alternatively, and this is the second horn of what we might call "Buchanan's dilemma", if we allow that some (many?) members of Buchanan's envisioned libertarian society are prone to free riding, deception, etc., then this not only contradicts many other things that he says (and that I quoted above) but, more problematically, this alternative set of premises lets his argument run into the standard problem that the enforcement in question will be executed against the preferences (and "the will") of non-consenting others, who will then simply reject the claim that they are members of "a society of morally upright, altruistic libertarians" (as it is defined by Buchanan). In that case the issue of coercion indeed becomes a relevant one, and an enforcing, not just information-providing, authority must be introduced to realize the collective goods in question by ensuring that enough others contribute. However, framing the collective-action challenges this way would amount to Buchanan engaging the controversy concerning the enforcement of (controversial) virtues and actions; a debate in which the libertarian will readily insist that imposing beneficent actions and policies on dissenting agents is a morally impermissible thing for public institutions to do. Moreover, the writings of Buchanan that I currently examine do not challenge the libertarian on that front. This, in turn, lends further support to my claim that Buchanan's overall argumentative strategy must be interpreted as resting on the alternative assumption of universally-shared beneficence amongst all parties. Now a similar Buchananian dilemma emerges when we revisit the three brothers' problem. While I cannot fully develop an analogously-structured response to that problem here, it should be clear at this point that, given the above reflections and claims, the three brothers case must be further specified in order to really present a challenge to the alternative solution of Buchanan's two collective action challenges. We have to ask whether the two brothers' (that is, the two potential donors') problem is an epistemic predicament or a matter of internal motivational deficiencies. If the latter, then it can be readily agreed that the only way to overcome their hesitance to help their brother is an external enforcement mechanism, forcefully "coordinating" the rescuing effort for their brother and countering the evolutionary pull to free-ride by simply waiting for the other brother to take care of the risky rescue.
As I tried to highlight throughout this essay, this does not at all appear to be a formulation of the three brothers' problem that fits Buchanan's analogous scenario regarding health care provision and its collective action hurdles. A parallel version of the three brothers' problem would presume universally shared altruistic attitudes and motives on the part of all brothers, ruling out the desire to free ride from the beginning. Rather, the two potential donor brothers must be envisioned as facing a variety of the above-described epistemic problem. That problem, however, can be resolved by the acquisition of information concerning the required act of rescue; again, an act that both are presumed to be willing to undertake. I call this a "variety" of the epistemic problem because in the case of the three brothers we need not merely an information-gathering and -distributing agency, and it is also for this reason that discussing the three brothers' problem is an enlightening exercise. Unlike the good of collectively-provided health care in Buchanan's argument, rescuing the third brother is an indivisible good (that is, it can only be realized by each of the two donors individually).
Hence, in addition to the information concerning the exact contribution that is required to realize the desired collective good in an effective expression of beneficence, the two brothers need an unambiguous procedural mechanism that determines whose turn it is, so to speak. A lottery, for example, might be one way of settling the question of which of the two brothers actually ends up performing the rescue. Again, under the assumption of universally-shared attitudes of beneficence, this lottery is not insisted upon because the brothers distrust each other regarding their attitudes and motives. Rather, they need to generate a specific kind of belief content in order to overcome their currently vague situation in terms of their actions. After all, it would be an irrational waste of resources if both brothers were to jump into the pond, together overdetermining the act of rescue through their uncoordinated individual decisions. 15 These additional issues regarding the three brothers' problem are certainly important and more work needs to be done to spell out the details and their relevance to Buchanan's dilemma. However, the main response continues to consist in the observation that given (!) the assumption that all relevant parties are predisposed altruistically, also the three brothers' problem is susceptible to a non-coercive solution. Enforced contribution (enforced rescuing) is only necessary in case (some) parties' beneficent motives and attitudes are unpredictable and unreliable.
Both Buchanan's society of "morally upright libertarians" and a three brothers scenario in which all are genuinely and reliably benevolent indeed present collective action problems that ask for shared solutions; Buchanan has done a lot to correctly identify that point and its relevance for the health care debate. However, the problem, as described and contextualized by Buchanan, is open to being resolved by entirely non-coercive means. On the other hand, if Buchanan's society and the three brothers were prone to defection and free-riding, coercion would indeed be necessary to realize the shared goods in question. However, this latter scenario would then shift Buchanan's argument into the familiar territory of standard political-philosophical debates on the justifiability of enforcing contested public moralities on non-consenting members of society. 16
Multiple quasicrystal approximants with the same lattice parameters in Al-Cr-Fe-Si alloys
By means of atomic-resolution high-angle annular dark-field scanning transmission electron microscopy, we found three types of giant approximants of the decagonal quasicrystal in Al-Cr-Fe-Si alloys, where each type contains several structural variants possessing the same lattice parameters but different crystal structures. The projected structures of these approximants along the pseudo-tenfold direction were described using substructural blocks. Furthermore, the structural relationships and the plane crystallographic groups in the (a, c) plane of these structural variants were also discussed. The diversity of quasicrystal approximants with the same lattice parameters was shown to be closely related to the variety of shield-like tiles and their tiling patterns.
As one kind of structurally complex alloy phase, quasicrystal approximants have triggered wide interest owing to the challenge of solving their complex crystal structures and their possibly exceptional properties [1][2][3]. The substructures, or structural tiles, of approximants are the same as those of the corresponding quasicrystals, but they are arranged periodically in approximants, whereas a quasiperiodic arrangement is observed in quasicrystals. Therefore, understanding the crystal structures of the approximants is beneficial for revealing the crystal structures of the quasicrystals, which are much more complicated than those of the approximants because of their lack of translational symmetry.
X-ray single crystal diffraction is the most popular technique for structure determination, and the crystal structures of some approximants have already been solved by this technique [4][5][6][7][8][9][10][11] . Generally, one will preliminarily consider whether the phase is new or known according to the lattice parameters, which are determined preferentially by X-ray single crystal diffraction before collection of the precise data of the diffraction spots. Prior to the structure determination, the phases with the same lattice parameters are usually considered to be the same if the single crystals for X-ray diffraction are obtained from samples with the same compositions and preparation conditions.
Compared with X-ray single crystal diffraction, transmission electron microscopy (TEM) is more widely used to understand the crystal structures of approximants, because it is difficult to obtain large, high-quality single crystals for X-ray single-crystal diffraction. For example, the structural models of some approximants were proposed by electron crystallography [12][13][14][15][16], based on the structural relationships between known and unknown approximants determined from high-resolution transmission electron microscopy (HRTEM) images. Some typical structural blocks, such as the hexagon (H), boat (B), star (S), and decagon (D) [17][18][19][20], are often found in Al-based decagonal quasicrystals (DQCs) and their approximants, and the rich combinations of these substructures lead to an abundance of approximants [21][22][23][24][25][26]. Among them, approximants with different lattice parameters may consist of the same structural blocks, for example, the H tile for the (1/0, 2/1) and (1/1, 1/1) approximants 27,28. On the other hand, approximants of Al-based DQCs with the same or similar lattice parameters in different alloy systems, for example, the O_1 approximants in Al-Mn-Ni 29 and Al-Cu-Fe-Cr 21, were found to have the same structural blocks and tiling patterns.
Recently, we found that the orthorhombic (3/2, 2/1) phase, previously reported in Al-Mn-Ni 29,30 (named O_1, with a = 3.27 nm, b = 1.25 nm, and c = 2.38 nm in ref. 29), Al-Mn-Pd 31, Al-Cu-Fe-Cr 32, and Al-Fe-Cr 33, has an additional type of structural tiling in the Al-Cr-Fe-Si alloys 34, which is different from the known structural tiling in Al-Mn-Ni 29. Herein, we report a series of approximants with the same lattice parameters but different crystal structures in Al-Cr-Fe-Si alloys, based on atomic-resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images. Note that the same lattice parameters mentioned here are based on the assumption that the structural tiles of the approximants are perfect and therefore exhibit no distortion. Accordingly, the same structural tile has the same shape and size. For clarity, we adopt (F_n/F_n−1, F_m/F_m−1) to name the Fibonacci approximants, where F_n and F_n−1, as well as F_m and F_m−1, are neighboring numbers in the Fibonacci sequence 35. Furthermore, we use (F_n/F_n−1, F_m/F_m−1)_x, with x = 1, 2, 3 …, to distinguish approximants with the same lattice parameters but different crystal structures. Consequently, the lattice parameters in the pseudo-tenfold plane of the Fibonacci approximants are intuitively reflected by the numbers F_n and F_m 29,35.
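As a rough numerical illustration of this naming convention (my arithmetic only, using the (3/2, 2/1) parameters of ref. 29 quoted above and the (2/1, 3/2) parameters of ref. 29 quoted in the Results section, a = 1.99 nm and c = 3.79 nm; not a calculation taken from the cited papers): stepping one index up the Fibonacci sequence is expected to stretch the corresponding in-plane lattice parameter by roughly the inverse golden number 1/τ ≈ 1.618, and indeed

\[
\frac{a_{(3/2,\,2/1)}}{a_{(2/1,\,3/2)}} \approx \frac{3.27\ \text{nm}}{1.99\ \text{nm}} \approx 1.64,
\qquad
\frac{c_{(2/1,\,3/2)}}{c_{(3/2,\,2/1)}} \approx \frac{3.79\ \text{nm}}{2.38\ \text{nm}} \approx 1.59,
\]

both close to 1/τ ≈ 1.618.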
Results and Discussion
(2/1, 3/2) approximants. Besides the B-centered (3/2, 2/1) approximant in the Al-Cr-Fe-Si system 34, we observe two additional primitive Fibonacci approximants in the Al-Cr-Fe-Si system by selected-area electron diffraction patterns (EDPs), as shown in Fig. 1. The unit cell, measured from Fig. 1a, is calculated as a ≈ 2.0 nm and c ≈ 3.8 nm, close to those of the (2/1, 3/2) Fibonacci approximant (a = 1.99 nm, c = 3.79 nm) 29, and thus was ascribed to the (2/1, 3/2) type. The a and c parameters of the second primitive Fibonacci approximant in Fig. 1b are calculated as a ≈ 3.2 nm and c ≈ 2.3 nm (from the selected-area EDP), corresponding to the (3/2, 2/1) approximant, which was reported previously in Al-Mn-Ni alloys 29,30. The composition of the (2/1, 3/2) approximant in Fig. 1a is measured as Al54.9Cr22.5Fe9.6Si13.0; that of the (3/2, 2/1)-type approximant in Fig. 1b was also measured. A shield-like tile (SLT) was used to describe the crystal structures of the (3/2, 2/1) and (2/1, 3/2)-type approximants, showing the advantage of conciseness 34. Furthermore, three additional kinds of SLT structural blocks were observed in the Al-Cr-Fe-Si system in this report, besides the one adopted previously 34,36. Therefore, the different SLTs were renamed SLT-1, SLT-2, SLT-3, and SLT-4 to distinguish between them (Fig. 2). We describe these SLTs first because the structural tiling patterns of the approximants in this paper will be mainly analyzed in terms of the SLTs.
The atomic-resolution structural images of these SLTs are shown in the first row of Fig. 2, and their structural characteristics are demonstrated by both the white lines and the small green circles in the second row, where the small green circles represent the smallest D clusters located on the vertices of the S, H, and SLT tiles. The SLT-1 tile consists of two H tiles and one S tile; it is actually the SLT reported in refs 34 and 36. Neither perfect S nor perfect H tiles (where each vertex of a perfect tile should have the same structural configuration) could be deduced from the SLT-2 from the structural point of view, because the head of SLT-2 is quite similar to the corresponding part of the perfect D cluster in the Al-Cr-Fe-Si system 36. Therefore, the SLT-2 can be decomposed geometrically into one decagon and one bowtie (BT) tile, rather than into a combination of H and S as in the SLT-1. We observed that the SLT-1 and SLT-3 are quite similar. However, the smaller D cluster is missing on one vertex of the S tile in the interior of the SLT-3, compared with the SLT-1. The brighter spots in the HAADF-STEM images in Fig. 2, for example, the centers of the smallest D clusters, indicate the positions of heavy atoms (Cr/Fe). Notably, the intensities of the ten spots around the center of S in the SLT-1 and SLT-3 are weaker than those of the corresponding spots of the other smallest D clusters, implying fewer heavy atoms in the atomic columns of the former within one period along the b direction. The SLT-4 can be further decomposed into one D and two adhering BT structural blocks, resulting in a larger area of one BT compared with the other SLTs. We adopted a different outline to describe the SLT-4 because the two perpendicular mirrors in it can then be directly revealed, as shown in the third row of Fig. 2.
In Fig. 3, we deduced a structural schematic for each SLT from the images in Fig. 2. The red atoms are the transition metals (TMs) Fe/Cr, and the others are mixed sites of Al and TM (MSs). Furthermore, we deduced the atom positions inside the H tiles of SLT-1 by comparing the H tiles with the images in Fig. 2, given the deficiency of known crystal structures containing such a kind of D tile. We argue that the smallest D clusters, highlighted in gradient green in Fig. 3, could correspond to icosahedral chains extending along the b direction, because such chains occur in the H tiles of the Al3Mn phase 4 when we analyze its crystal structure.
The coherent coexistence of the (3/2, 2/1)_2 and a third (3/2, 2/1) approximant, highlighted in sapphire and denoted (3/2, 2/1)_3, can be observed in Fig. 6. Note that a few of the SLT-1 tiles are replaced by SLT-3 tiles with the same orientation in the matrix of the (3/2, 2/1)_2 phase, as shown in the upper-left corner of Fig. 6b, but without structural distortion of the (3/2, 2/1)_2, owing to the same size of the SLT-1 and SLT-3. The (3/2, 2/1)_3, in contrast, is composed of SLT-4 tiles (the profile of one is outlined in red in the upper-right corner of Fig. 6b). Neighboring SLT-4 tiles overlap by one BT along the c axis. The combined size of one D and an adhering BT in an SLT-4 is geometrically the same as that of the other types of SLT. Therefore, the (3/2, 2/1)_3 has the same unit-cell parameters as the (3/2, 2/1)_1 and (3/2, 2/1)_2. The (3/2, 2/1)_3 approximant not only coexists coherently with the neighboring (3/2, 2/1)_2 and (1/0, 2/1) approximants, but does so with the a, b, and c axes of the phases parallel to each other. The growth of the (3/2, 2/1)_3 approximant along the a direction was stopped by the (3/2, 2/1)_2 and some random structural blocks, resulting in a sandwich-like structure in which the (3/2, 2/1)_3 approximant is clamped in between the (3/2, 2/1)_2 and (1/0, 2/1) approximants.
Monoclinic approximants (β = 100.7°). Two monoclinic approximants with the same unit cell of a = 1.90 nm, b = 1.23 nm, c = 3.61 nm, and β = 100.7° were also found, where b inherits the periodicity of the DQC. One is the M1_ACFS in Fig. 5, and the other is the monoclinic M2_ACFS approximant in Fig. 7. The planar structure of M1_ACFS viewed along the pseudo-tenfold direction can be described by the substructures of two inverted SLT-1 tiles and an oriented H tile (Fig. 5b). Neighboring SLT-1 tiles with the same orientation are connected by sharing two sides, while the inverted SLT-1 tiles partly overlap in the shape of an H tile.
In comparison, the M2_ACFS in Fig. 7 is composed of SLT-3, S, and H tiles in two orientations. Note that the small decagonal cluster on one vertex of the S in the interior of SLT-3 is missing, as marked by the dashed yellow lines in the SLT-3 inset in the upper-left corner of Fig. 7. We refer to the S with one missing vertex as the "pseudo S". The centers of S and pseudo S are arranged in rows, with alternating short (S) and long (L) distances. However, the extension of the M2_ACFS lattice along c_M2 is disturbed, as demonstrated by the kinked dashed line in Fig. 7. Consequently, one kind of (2/1, 3/2) unit cell, named (2/1, 3/2)_3 and highlighted by red rectangles, is observed in between the M2_ACFS lattices. Therefore, the M2_ACFS and (2/1, 3/2)_3 approximants grow alternately and coherently. Furthermore, a few SLT-1 tiles are also found mixed within the matrix of SLT-3, for example, the SLT-1 highlighted in Fig. 7, which brings structural disorder to the matrix of M2_ACFS.
Geometric analysis. We summarize the structural variants of the three kinds of approximants in the schematic diagrams of Fig. 8. For simplification, the idealized structural blocks are assumed to be perfect, without distortion. Furthermore, the origins of the unit cells are set at the centers of the S or D tiles for comparison. Note that the b axes of all approximants inherit the periodic axis of the DQC in this system and are therefore all equal to 1.23 nm.
For the four (2/1, 3/2) approximants in the first row of Fig. 8 (including the approximant reported in ref. 34, renamed here as (2/1, 3/2) 4 ), it is easy to see that their a values are equal to the diameter of the D tile (d D ), deduced directly from the geometry. It is also evident that the magnitude of c shown in Fig. 8a-c is the same, because their structural tiles are the same if we ignore the differences between the SLTs. Now let us compare the c values in Fig. 8a,d. The magnitude of c in Fig. 8a is equal to the sum L 1 + L 2 , where L 1 and L 2 are the length of the H tile and the circumcircle diameter of the red decagon, respectively, which is the same as the c value in Fig. 8d. Therefore, the four (2/1, 3/2) approximants have the same unit-cell parameters, with a and c expressed in terms of l and τ, where l is the edge length of the structural blocks (~0.62 nm in this paper) and τ is the golden number 0.618. The tilings of (2/1, 3/2) 1 , (2/1, 3/2) 2 , and (2/1, 3/2) 3 are quite similar because the part of their structures excluding the SLT blocks is exactly the same, for example the H and S tiles highlighted in dark colors. Furthermore, the remaining part is described by an SLT tile, which also implies the similarity. The similarity of the (2/1, 3/2) 1 , (2/1, 3/2) 2 , and (2/1, 3/2) 3 approximants might explain their coexistence in the experimental images, for example in Fig. 4.
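As a rough numerical check (not the paper's own formula, which is omitted above), one can idealize the D tile as a regular decagon of edge length l ≈ 0.62 nm and compare its two natural diameters with the lattice parameters quoted in the text; the short Python sketch below does this, and the edge-to-edge width comes out close to the a = 1.90 nm value quoted elsewhere for the related monoclinic cells.

```python
import math

l = 0.62  # edge length of the structural blocks, in nm (value quoted in the text)

# Two natural "diameters" of a regular decagon with edge l (idealization assumed here):
d_vertex = l / math.sin(math.pi / 10)   # vertex-to-vertex (circumcircle) diameter
d_edge = l / math.tan(math.pi / 10)     # edge-to-edge (incircle) width

print(f"vertex-to-vertex diameter: {d_vertex:.2f} nm")  # ~2.01 nm
print(f"edge-to-edge width:        {d_edge:.2f} nm")    # ~1.91 nm, close to the a = 1.90 nm
                                                        # quoted for the related monoclinic cells
```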
The structural variants of the (3/2, 2/1) approximants are relatively simple, as shown in the second row of Fig. 8. The magnitudes of their a and c parameters are a = d D + L BT and c = d D + W BT , respectively, where d D is the diameter of the D tile, and L BT and W BT are the length and the smallest width of the BT structural block, respectively. The magnitudes of a and c can again be expressed in terms of l, the edge length of the structural blocks, and the golden number τ = 0.618. The structural relationship between Fig. 8e,f was discussed in our previous paper 34 , so we do not repeat it here. The SLT-4 in the (3/2, 2/1) 3 approximant has two mirror planes, which results in Bmm symmetry in the (010) projection plane. Note that the SLT-4 can be further decomposed into an SLT and a BT tile, where the SLT resembles SLT-2 rather than the SLT-1 found in the (3/2, 2/1) 1 and (3/2, 2/1) 2 approximants. Although the structures of the (3/2, 2/1) 1 and (3/2, 2/1) 2 approximants are closely related and can be deduced from each other by changing the orientations of some SLT-1 substructures 34 , the structural transformation between the (3/2, 2/1) 1 and (3/2, 2/1) 3 approximants is different, because the atomic structures of SLT-1 in (3/2, 2/1) 1 and SLT-4 in (3/2, 2/1) 3 are different.
We note that the (2/1, 3/2) 1 approximant in Fig. 8a and the monoclinic M1 ACFS approximant in Fig. 8h have the same magnitude of a, and also share the same H and SLT-1 tiles. Therefore, their structural relationship is compared in Fig. 8j, where the (2/1, 3/2) 1 is drawn in green and viewed from the inverse direction with respect to Fig. 8a, so as to obtain SLT blocks with the same orientation as those in Fig. 8h. Meanwhile, the monoclinic M1 ACFS approximant is depicted by red lines and superposed onto the (2/1, 3/2) 1 approximant (Fig. 8j). The upper SLTs (solid lines) of the two phases overlap completely. However, the lower SLTs of M1 ACFS (dashed lines) are shifted by the width of the H tile (~0.74 nm) along a M1 (marked by the red arrow) with respect to the corresponding tiles in the (2/1, 3/2) 1 approximant. Accordingly, the lower parallelogram formed by connecting the centers of the S tiles in the (2/1, 3/2) 1 approximant moves to a position where its short edges nearly line up with the corresponding edges of the upper parallelogram in the M1 ACFS approximant. Furthermore, some of the H tiles also change their positions and orientations, as marked by the curved arrows, to fill the space between the SLTs of the M1 ACFS approximant, for example the H tile filled in sky blue.
We summarize the lattice parameters of the approximants discussed above in Table 1. The plane crystallographic groups in the (a, c) plane of these approximants, obtained from the geometric analysis in Fig. 8, are also listed. We emphasize that the (2/1, 3/2) 1 , (2/1, 3/2) 2 , and (2/1, 3/2) 3 approximants should be ascribed to monoclinic phases from the viewpoint of symmetry: although a rectangular lattice can be drawn in Fig. 8a-c, neither the p1 nor the p2 plane crystallographic group is possible for an orthorhombic phase. A similar point was noted for the Al 71 Ni 22 Co 7 approximant by Abe et al. 22 .
Table 1. Lattice parameters a (nm), c (nm), β (°) and plane crystallographic group in the (a, c) plane of the approximants.
Conclusion
By means of atomic-resolution HAADF-STEM images, we have found three types of DQC approximants in the Al-Cr-Fe-Si system, where each type has several structural variants with the same lattice parameters but different crystal structures. The (2/1, 3/2)-type approximants comprise four structural variants, of which the (2/1, 3/2) 1 , (2/1, 3/2) 2 , and (2/1, 3/2) 3 approximants belong to monoclinic phases on symmetry grounds even though β = 90°. The orthorhombic (3/2, 2/1)-type approximant includes three structural variants. Furthermore, we also found two monoclinic approximants (M1 ACFS and M2 ACFS ) with the same unit cell, a = 1.90 nm, b = 1.23 nm, c = 3.61 nm, and β = 100.7°, but different crystal structures owing to their different SLTs. The structural variations within each type are closely related to the changeable SLTs, which are further classified into four types: SLT-1, SLT-2, SLT-3, and SLT-4. The types, orientations, and connections of the structural blocks, especially the SLTs, are responsible for the multiplicity of approximants with the same unit cell reported here.
Experimental
Approximately 1 kg of the master Al-Cr-Fe-Si alloy with a nominal composition of Al 60 Cr 20 Fe 10 Si 10 was prepared by melting high-purity elements in an induction furnace under vacuum. The samples investigated in this study were treated as follows: several pieces of the master Al 60 Cr 20 Fe 10 Si 10 alloy were first heated at 1070 °C for 24 h in an evacuated quartz tube, then cooled slowly to 1000 °C for 24 h, followed by furnace cooling (Sample 1). Part of this heat-treated sample was then annealed at 900 °C for 15 days in vacuum and cooled in the furnace by shutting off the power (Sample 2). Powder samples were used for the TEM observations. An FEI Tecnai F30 transmission electron microscope equipped with an energy-dispersive X-ray spectrometer (EDS) was first used to check the phases and compositions. A JEM-ARM200F transmission electron microscope equipped with a Cs probe corrector and a Cs image corrector was used to obtain HAADF-STEM images at atomic resolution. The inner and outer acceptance semi-angles for HAADF-STEM imaging were 90 and 370 mrad, respectively.
"Materials Science"
] |
SPECIAL ISSUE IN HONOR OF REINOUT QUISPEL
Reinout Quispel was born on 8 October 1953 in Bilthoven, a small town near Utrecht in the Netherlands. He studied both chemistry and physics, gaining bachelor’s degrees at the University of Utrecht in 1973 and 1976 respectively, and then specialized in theoretical physics, with a Master’s degree in 1979 (on solitons in the Heisenberg spin chain, supervised by Theodorus Ruijgrok) and a PhD, Linear Integral Equations and Soliton Systems [22], in 1983, supervised by Hans Capel. This thesis, which begins with a study of integrable PDEs, arrives in Chapter 4 (later published in [24]) at the discovery of a method for obtaining fully discrete integrable systems on square lattices that have as continuum limits the Korteweg–de Vries, nonlinear Schrödinger, and complex sine–Gordon equations, and the Heisenberg spin chain. Thus several of Reinout’s lifelong research interests – continuous and discrete integrability, and the relationship between the continuous and the discrete – were present right from the start. The next stop was a postdoc at Twente University, working with Robert Helleman, the founder of the ‘Dynamics Days’ conference series, before a long-distance move to the Australian National University to work with Rodney Baxter. Reinout and Nel expected this southern sojourn to last for three years; thirty-three years later they are still happily resident in Australia. In 1990 Reinout moved to La Trobe University, Melbourne, where he became a Professor in 2004. Reinout’s three main research areas are discrete integrable systems, dynamical systems, and geometric numerical integration, along with interactions between these topics. In discrete integrable systems, having introduced a major new direction in his PhD thesis – his novel reductions to Painlevé equations led to the Clarkson–Kruskal non-classical reduction method – he continued by codiscovering the QRT map [25, 26], an 18-parameter family of completely integrable maps of the plane. These turned out to have far-reaching implications in dynamical systems theory, geometry, and integrability. For example, the modern construction of nonautonomous dynamical systems known as discrete Painlevé equations relies on them. Their geometry is explored at length in the 2010 book QRT and Elliptic Surfaces by Hans Duistermaat and is still being investigated today. In dynamical systems, his work has centred on systems with discrete and/or continuous symmetries. His review [28] marked the emergence of reversible dynamical
systems as a distinct class that exhibits both conservative and dissipative features. He studied continuous and discrete k-(reversing) symmetries (symmetries of the kth iterate of a map) [11,14]. In [16], he introduced "linear-gradient" systems as a unification of Hamiltonian, Poisson, and gradient systems, as well as systems with Lyapunov functions and/or first integrals.
A similar flavour runs through his work on geometric integration, where he has been a proponent of the structural point of view, in which each natural class of dynamical systems (which may form a group, semigroup, or symmetric space) is studied in its own right, with the goal of finding natural structure-preserving integrators for each class [18]. Clearly his background in physics and dynamical systems contributed here; the perspective he brought broadened the entire field. For example, he has introduced integral-preserving integrators, volume-preserving integrators, and (reversing) symmetry-preserving integrators [17,27]. He presented the Average Vector Field method as an energy-preserving B-series method, which opened the door to connections with Runge-Kutta theory [23]. This triggered a revival of interest in integral-preserving integration which continues to this day. He conjectured and proved that no B-series method is volume-preserving for all divergence-free vector fields, one of the few known 'no-go' results in this area [12]. He realized that Kahan's 'unconventional' method is also a B-series method [5], which in addition preserves integrability in many cases. This last work combined his interests in numerical integration and discrete integrability in a highly satisfying way.
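To make the last point concrete, the sketch below applies Kahan's 'unconventional' discretization to a planar Lotka-Volterra system: every quadratic term is replaced by its symmetric polarization over consecutive time steps, so each update reduces to solving a 2×2 linear system. The parameter values and step size are illustrative only.

```python
import numpy as np

def kahan_lv_step(x, y, h, a=2.0, b=1.0, c=1.0, d=1.0):
    """One step of Kahan's method for x' = x(a - b*y), y' = y(-c + d*x).

    Kahan's rule replaces u*v by (u_new*v_old + u_old*v_new)/2 and u by
    (u_old + u_new)/2, which leaves a linear system for the new point.
    """
    # Unknowns (X, Y) = (x_{n+1}, y_{n+1}); assemble the 2x2 system A @ [X, Y] = rhs.
    A = np.array([
        [1 - h*a/2 + h*b*y/2,  h*b*x/2            ],
        [-h*d*y/2,             1 + h*c/2 - h*d*x/2],
    ])
    rhs = np.array([x*(1 + h*a/2), y*(1 - h*c/2)])
    return np.linalg.solve(A, rhs)

x, y = 1.0, 1.0
for _ in range(1000):
    x, y = kahan_lv_step(x, y, h=0.05)
print(x, y)   # the discrete orbit stays close to a closed curve around the equilibrium
```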
An inveterate traveller, Reinout has been a key participant at numerous research programs around the world. He is a true scientific leader, having transformed each of the areas he has touched into major fields of study, due in part to his style: he is an ideas person. He does not stick to the established path; he sees things differently and has consistently come up with new ideas and directions that have inspired others. His intuition, experience, optimism, persistence, and scientific style lead him to repeatedly make original and high-impact discoveries. We are extremely grateful for the opportunity to know and to work with Reinout. We wish him all the best for the years ahead, and look forward keenly to what new surprises await.
The papers in this special issue span a wide range of topics closely related to Reinout's research interests. There are three papers on continuous dynamical systems, all related to predator-prey equations. Tuwankotta and Harjanto [32] consider a family of classical planar predator-prey systems, finding that a small periodic perturbation induces strange attractors in a neighbourhood of an invariant torus of the unperturbed system. Christodoulidi, Hone, and Kouloukas [6] find new, high-dimensional integrable and superintegrable cases of Lotka-Volterra systems, and numerical evidence for chaos in other cases. Evripidou, Kassotakis, and Vanhaecke [9] study integrable reductions of the dressing chain in Lotka-Volterra form, again finding new superintegrable cases. They find that the Kahan map of these systems is
superintegrable, arising in fact from the compatibility conditions of a linear system, thus linking Kahan maps to isospectrality.
There are three papers on discrete integrable systems. The QRT map mentioned above is a product of involutions. Joshi and Kassotakis [13] find conditions on the parameters that ensure that these involutions factorise further, providing links to other parts of the study of discrete integrability. Petrera and Suris [21] study particular Kahan mappings, providing a converse to a recent result of Reinout's that certain Kahan mappings are Manin transformations: they show that any such Manin transformation is a Kahan map. The degree growth of a rational map has been used as a test of integrability, with linear growth associated with particularly simple dynamics. Tran and Roberts [31] establish linear degree growth for several families of mappings, and also find new quad graph mappings with linear growth.
The remaining papers concern geometric numerical integration and its applications. New geometric integration algorithms are presented for the Gross-Pitaevskii equations with rotation term by Bader, Blanes, Casas, and Thalhammer [1]; for the wave equation with multifrequency oscillations by Condon, Iserles, Kropielnicka, and Singh [7]; for the modified KdV equation by Frasca-Caccia and Hydon [10]; for the space-fractional nonlinear Schrödinger equation by Miyatake, Nakagawa, Sogabe and Zhang [19]; for chains of rigid bodies by Saetran and Zanna [29]; for charged particle dynamics by Shi, Sun, Wang, and Liu [30]; and for variational PDEs with symmetry by Zadra and Mansfield [33]. McLachlan and Murua [15] determine the Lie algebra generated by an arbitrary potential and an arbitrary kinetic energy; this algebra underlies many popular symplectic integrators based on splitting.
As mentioned above, Reinout proved that no B-series can be volume-preserving. However, it is known that a generalisation, the aromatic B-series, can be volume-preserving. The composition and substitution rules for these series are developed by Bogfjellmo [3]. Benning, Celledoni, Ehrhardt, Owren, and Schönlieb [2] apply partitioned symplectic Runge-Kutta methods to an ODE formulation of deep learning; the time steps are regarded as parameters to be learned. Likewise, Pathiraja and Reich [20] apply discrete gradient methods to a gradient ODE formulation of Bayesian inference. Here we are moving into the realm of data analysis via geometric numerical techniques. Curry, Marsland, and McLachlan [8] also move in this direction: data on a symmetric space (such as a sphere or torus) is approximated by lower-dimensional totally geodesic subspaces. If geometric numerical integration, as defined by Budd and Iserles [4], is the numerical solution of differential equations on manifolds, then perhaps we can anticipate the merging and cross-fertilisation of different techniques all concerned with numerical analysis on manifolds.
"Physics"
] |
A hybrid correcting method considering heterozygous variations by a comprehensive probabilistic model
Background: The emergence of third-generation sequencing technology, featuring longer read lengths, has demonstrated great advantages over next-generation sequencing and has greatly promoted biological research. However, third-generation sequencing data have a high sequencing error rate, which inevitably affects downstream analysis. Although sequencing errors have been decreasing in recent years, large amounts of data were produced at high error rates, and discarding them would be an enormous waste. Error correction for third-generation sequencing data is therefore especially important. The existing error correction methods perform poorly at heterozygous sites, which are ubiquitous in diploid and polyploid organisms; there is thus a lack of error correction algorithms that handle heterozygous loci well, especially at low coverage. Results: In this article, we propose an error correction method named QIHC. QIHC is a hybrid correction method, which requires both next-generation and third-generation sequencing data. QIHC greatly enhances the sensitivity of distinguishing heterozygous sites from sequencing errors, which leads to high error-correction accuracy. To achieve this, QIHC establishes a set of probabilistic models based on a Bayesian classifier to estimate the heterozygosity of a site and makes a judgment by calculating the posterior probabilities. The proposed method consists of three modules, which respectively generate a pseudo reference sequence, obtain the read alignments, and estimate the heterozygosity of the sites and correct the reads harboring them. The last module is the core module of QIHC and is designed to handle the calculations for the multiple possible cases at a heterozygous site. The other two modules enable the reads to be mapped to the pseudo reference sequence, which largely overcomes the inefficiency of the repeated mappings adopted by the existing error correction methods. Conclusions: To verify the performance of our method, we selected Canu and Jabba for comparison with QIHC in several respects. As QIHC is a hybrid correction method, we first conducted a group of experiments under different coverages of the next-generation sequencing data; QIHC is far ahead of Jabba in accuracy. We then varied the coverage of the third-generation sequencing data and compared the performance of Canu, Jabba and QIHC again; QIHC outperforms the other two methods in accuracy, both in correcting sequencing errors and in identifying heterozygous sites, especially at low coverage. We also carried out a comparison between Canu and QIHC for different error rates of the third-generation sequencing data, and QIHC still performs better. Therefore, QIHC is superior to the existing error correction methods when heterozygous sites exist.
Keywords: Sequencing analysis, PacBio sequencing, Sequencing error, Error correction method, Hybrid correction method, Heterozygous variant, Probabilistic model
Background
Genomic research has been revolutionized by genome sequencing technology, especially single-molecule long-read sequencing, also called third-generation sequencing (TGS) [1]. TGS technology not only inherits the high throughput of next-generation sequencing (NGS), but also produces longer reads, with lengths greater than 10 kbp, compared to NGS reads, which are generally limited to 100 bp [1][2][3][4][5][6][7][8]. TGS has also brought a huge boost to a number of fields, such as detecting structural variations [9,10], identifying methylations [11][12][13], and further facilitating disease diagnoses [14]. Although TGS is on the cutting edge in read length and many other aspects, its sequencing error rate falls behind that of NGS owing to technical limitations. For example, one of the key sequencing technologies of TGS identifies the spectrum caused by different nucleotides passing through a nanopore, during which it is possible to misidentify the current nucleotides as deletions or insertions when an abnormal speed occurs [15][16][17]. More importantly, in terms of research value, the importance of TGS has been steadily increasing, and its sequencing error rate has also been gradually decreasing. The PB-scale third-generation sequencing data that have rapidly accumulated over the past decade cannot simply be discarded; instead, the sequencing errors can be corrected by algorithmic methods.
Along with the development of TGS, bioinformatics researchers have gradually focused on correcting sequencing errors with error correction algorithms, and a number of such algorithms have emerged. With continuous optimization and development, the existing error correction methods perform well in overall accuracy, although their performance at heterozygous loci is not satisfactory [18,19]. However, heterozygous variations are more common than homozygous variations in many cases, and heterozygosity plays a valuable role in disease genotype-phenotype analyses and genetics research. Sequencing error correction becomes more complicated in the presence of heterozygosity, and the existing methods encounter challenges in handling it. According to the input data, the existing methods generally fall into two categories: self correction algorithms and hybrid correction algorithms. The input of self correction is a set of TGS reads, long reads (LRs) for short. Its core idea is to call a consensus among LRs, which is achieved by building multiple alignments among LRs and computing local alignments [20]. It is practical to estimate heterozygous variations based on multiple alignments and local alignments; however, the coverage of LRs limits the correction performance. Currently, the coverages of the published data sets are generally low owing to cost, which results in short spliced sequences and unsatisfactory correction performance. The low coverage of LRs therefore limits the applications of self correction [18] and makes it more difficult to handle heterozygous sites properly. For example, when the coverage of LRs is lower than 2, considering the problem from the perspective of mathematical expectation, it is impossible to distinguish a heterozygous variation from sequencing errors, or even from homozygous variations.
Because of the problems of self correction, hybrid correction is more popular in practice. The basic idea of hybrid correction is as follows: given LRs and a set of NGS reads, called short reads (SRs) for simplicity, map the SRs to a read extracted from the LRs, then vote with the mapping results of the SRs; the allele with the most votes is the final correction result [21]. The core of this basic idea is thus voting, and some recent studies have also improved the voting process. Under this scheme, the reason why current hybrid correction algorithms cannot handle the heterozygous condition lies in the structure of the algorithms themselves. In hybrid correction methods represented by proovread [22] and ECTools [23], heterozygous variations are not treated as special situations in the voting process. Figure 1 shows an example of such a miscorrection. Furthermore, even if heterozygous variations were considered on the SRs, since each long read (LR) is treated independently, the goal of distinguishing heterozygous variations from noise cannot be achieved. On the other hand, the objective of these algorithms is to correct the LRs, so the coverage of SRs is kept low to control cost, which is also not conducive to identifying heterozygosity. In addition, all SRs need to be mapped again for each LR, which leads to low correction efficiency. All of this puts the existing hybrid correction methods at a disadvantage when dealing with heterozygous sites.
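The following toy example (with made-up base counts) illustrates the failure mode described above: a plain majority vote over the short-read bases covering a heterozygous A/C site silently erases the minority allele.

```python
from collections import Counter

# Hypothetical short-read bases mapped to one position of a long read.
# The underlying site is heterozygous A/C, but sampling gives A a slight majority.
mapped_bases = ["A"] * 6 + ["C"] * 4 + ["T"] * 1   # T is a sequencing error

counts = Counter(mapped_bases)
winner, _ = counts.most_common(1)[0]

# Naive hybrid correction: every long read covering this site is rewritten to the winner,
# so the C allele of the heterozygous pair is silently lost.
print("majority vote result:", winner)   # 'A'
print("allele frequencies:", {b: n / len(mapped_bases) for b, n in counts.items()})
```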
Distinguishing a heterozygous site from sequencing errors is the key difficulty in properly dealing with heterozygosity, and a simple voting process cannot handle this complicated condition. Taking into account the characteristics of heterozygous variation and the limitations of the existing correction methods, we propose a novel hybrid correction method named QIHC. The highlight of QIHC, distinguishing it from the existing methods, is the adoption of probabilistic models, which to a great extent solves the problem that the existing methods cannot effectively correct errors in LRs from heterozygous genomes. Specifically, according to the sequencing principle of reads, we can assume that the bases of reads mapped to the same site obey a binomial distribution. Since the bases for mapping are derived from LRs and SRs respectively, the probability in the binomial distribution is related to the sequencing error rate; in general, the prior error rate of LRs ranges from 15% to 20%, and the prior error rate of SRs is around 5% [15][16][17][24]. Therefore, we propose two sets of probabilistic models based on a Bayesian classifier for LRs and SRs respectively, which differ in the sequencing error rates and judge the heterozygosity of the mapped sites by calculating the posterior probabilities before voting. More specifically, a set of probabilistic models determines whether a position is homozygous or heterozygous by obtaining the maximum posterior probability (Fig. 1 illustrates how the voting rule in existing hybrid correction methods handles heterozygous sites incorrectly). Then, according to the heterozygosity judgment, the corresponding site is corrected by a voting mechanism. Compared to the existing methods, QIHC has better correction performance because it uses the probabilistic models to judge heterozygosity before voting rather than simply voting. Similarly, another set of probabilistic models works in the self correction module, which yields better results under low coverage than direct voting.
Through the application of the above two sets of probabilistic models, the correction of LRs is realized and a complete data-processing flow is formed. In this paper, we compare QIHC with two methods, Canu [25] and Jabba [26], and design five groups of experiments, which respectively examine the influence of the coverage of SRs on accuracy, the influence of the error rate of LRs on accuracy, the accuracy and the heterozygosity quality of the different correction methods, and the potential effects of different prior probability distributions on performance. Taking one set of accuracy comparisons as an example, when the coverage is 3×, the accuracy spans from 10.2% for Jabba to 72.4% for Canu, and finally to 87.8% for QIHC. From the experimental results, our method always achieves excellent results at low coverage, whether of LRs or of SRs.
Experimental protocol
Let L denote a set of TGS reads and S denote a set of NGS data, respectively. To demonstrate the performance of QIHC at heterozygous positions, we performed experiments covering several aspects. Overall: (1) We performed our experiments on the following datasets: third-generation sequencing data L with coverages of 3×, 5×, 10×, 12× and 15×, respectively, and next-generation sequencing data S with coverages of 5×, 10×, 15×, 20× and 50×, respectively. It should be noted that all third-generation sequencing datasets used in our experiments contain 500 heterozygous variations. For a position with a heterozygous variation, we say that this position has heterozygosity. We generated these data under different configurations with PBSIM [27]; specifically, a portion of the human genome hg19 was taken as the reference genome for generating the simulated data, and we call this reference genome hg19_ref. In view of BLASR's fault tolerance and strong alignment ability [28], we chose BLASR as the alignment tool, with the parameters -header and -m 5. (2) Apart from Canu, we did not make extensive comparisons with other error correction methods such as FMLRC [29], LoRDEC [20] or HALC [30], because experiments at heterozygous positions had already been done in the literature [18], and the performance of these methods in correcting bases at heterozygous positions was shown not to be ideal. The reason for choosing Canu is that, from 2015 to the present, Canu has progressed from version 1.0 to 1.8, and continuous improvement has made it a stable and widely used error correction tool. It is worth mentioning that Canu v1.8 added a module called trio binning that specializes in handling heterozygous conditions. Therefore, it is more convincing to compare against Canu.
Evaluation strategies
In order to present the error correction results efficiently and pertinently, we only report the results for sites with heterozygous variation here. For each heterozygous position, we investigated the change of its heterozygosity after error correction. Specifically, the criterion for judging whether a site is still heterozygous is as follows: map the corrected long-read set to hg19_ref and observe the distribution of the bases mapped to the heterozygous position; if the distribution satisfies heterozygosity, then the position retains heterozygosity; otherwise, its heterozygosity is lost. True positive (TP) positions are the heterozygous sites that maintain heterozygosity after correction, whereas false negative (FN) positions are the sites with original heterozygosity that fail to maintain it after correction, whether they become noise or homozygous. To evaluate the error correction performance of the different methods and coverages on TGS data with heterozygous variations, we focus on accuracy, computed as 1 − error rate.
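One plausible reading of this metric, sketched below, is that the error rate over the simulated heterozygous positions is FN/(TP+FN), so accuracy = 1 − error rate is simply the fraction of sites that retain heterozygosity; the counts used here are placeholders chosen only to reproduce the 87.8% figure quoted earlier.

```python
def heterozygosity_accuracy(tp, fn):
    """Accuracy over heterozygous sites as 1 - error rate, with the error rate taken
    as the fraction of originally heterozygous sites that lost heterozygosity (FN)."""
    error_rate = fn / (tp + fn)
    return 1.0 - error_rate

# Placeholder counts for the 500 simulated heterozygous positions.
print(heterozygosity_accuracy(tp=439, fn=61))   # 0.878, i.e. 87.8%
```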
Analysis of accuracy under different coverages of NGS data
QIHC requires the participation of S, so it is necessary to confirm the impact of different coverages of S on the correction results. S was generated from hg19_ref by PBSIM at coverages of 5×, 10×, 15×, 20× and 50×, respectively. L was also derived from hg19_ref, and its coverage was set to 5× in consideration of runtime. Figure 2 shows how the accuracy of QIHC is influenced by the coverage of S. In order to analyze the results shown in Fig. 2, the "heterozygous interval" needs to be described first. The heterozygous interval defines the conditions that the base distribution mapped to a heterozygous variation site must satisfy for the site to be identified as retaining heterozygosity. For example, when the heterozygous interval is set to [0.2,0.8], a heterozygous variation site is considered to retain heterozygosity only when the base distribution at this site falls within the interval. The interval [0.2,0.8] is a generally recognized heterozygous interval. Table 1 shows the results. As can be seen, QIHC performed much better than Jabba; specifically, the difference in accuracy was up to 85.6% when the coverage of L was 5× and the heterozygous interval was [0.2,0.8]. According to Fig. 2, QIHC's correction accuracy was best when the coverage was 10×. It can be seen that QIHC is not sensitive to the coverage of S, which allows a lower-coverage S to be used for correcting L.
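A minimal sketch of the heterozygous-interval criterion, as we read it from the description above: the balance between the two heterozygous alleles at a corrected site must fall inside [0.2, 0.8]. The function name and the exact form of the ratio are our own illustrative choices.

```python
def retains_heterozygosity(base_counts, allele1, allele2, interval=(0.2, 0.8)):
    """Check whether a corrected heterozygous site still looks heterozygous.

    base_counts: dict like {'A': 7, 'C': 5, 'T': 1} of bases mapped to the site
    allele1, allele2: the two alleles of the original heterozygous variation
    """
    n1, n2 = base_counts.get(allele1, 0), base_counts.get(allele2, 0)
    if n1 + n2 == 0:
        return False
    frac = n1 / (n1 + n2)          # balance between the two heterozygous alleles
    lo, hi = interval
    return lo <= frac <= hi

print(retains_heterozygosity({"A": 7, "C": 5, "T": 1}, "A", "C"))   # True
print(retains_heterozygosity({"A": 12, "C": 1}, "A", "C"))          # False: heterozygosity lost
```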
Comparison to the existing methods on accuracy
In this part of the experiment, we chose Canu and Jabba as the comparison methods. The results are shown in Table 2. It can be seen that we experimented with low coverages; the reason is that TGS technology generates a large amount of low-coverage sequencing data because of its cost, so it is more practical to experiment with low-coverage data. Among the results, Jabba was significantly less accurate than QIHC and Canu, which also confirms that the early error correction methods did not consider heterozygous variation sites at all. For the results of Canu and
Comparison between Canu and QIHC on heterozygosity quality
In this part of the experiment, the quality of the heterozygosity maintained by Canu and QIHC is analyzed. Heterozygosity quality analysis means that, for a corrected heterozygous site whose base distribution falls within the heterozygous interval, we further examine the alleles mapped to the site. For example, for a heterozygous site consisting of alleles A and C, after correction the bases mapped to this site should still be dominated by A and C; otherwise, although the site remains heterozygous within the heterozygous interval, its heterozygosity quality is very low. To analyze heterozygosity quality more clearly, we quantify it. Specifically, for an A-C heterozygous site, we compare the proportion of bases A and C mapped to this site with the proportion of bases T and G. If the former is larger than the latter, that is, the difference is positive, the heterozygosity quality of the site is high; otherwise, the quality is poor. Other types of heterozygous sites are treated similarly. Thereafter, a more detailed analysis of the sites with high heterozygosity quality is conducted to classify them as good or excellent: a difference between 0 and 0.3 is defined as good, and a difference between 0.3 and 1 as excellent; excellent is of course better than good. The experimental results at coverage 15× were selected for the heterozygosity quality analysis, and the results are shown in Table 3. After removing the sites that did not maintain heterozygosity, 494 and 486 heterozygous sites remained in the Canu and QIHC results, respectively. Among the sites that maintained heterozygosity, the difference value of 246 heterozygous sites in the Canu result was negative, that is, their heterozygosity quality was poor; in comparison, although the number of heterozygous sites maintained in the QIHC result was slightly smaller than that of Canu, the number of heterozygous sites with poor quality was significantly smaller, namely 210. Similarly, QIHC was also significantly better than Canu in the number of high-quality heterozygous sites, 245 and 211, respectively. Among the high-quality heterozygous sites, QIHC had a higher proportion of excellent ones. Therefore, from the above analysis, QIHC was slightly inferior to Canu in accuracy when the coverage was 15×, but a deeper analysis shows that QIHC was significantly better than Canu in heterozygosity quality. This is also the reason why we chose the 15× coverage for the in-depth analysis: QIHC can still lead significantly in other aspects even when its accuracy result is not dominant.
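The quality classification described above can be written down directly; the helper below is a hedged sketch of that rule (the treatment of the 0.3 boundary and of bases other than A, C, G, T is our own choice).

```python
def heterozygosity_quality(base_counts, alleles=("A", "C")):
    """Classify heterozygosity quality following the scheme described above.

    Compares the proportion of the two heterozygous alleles with the proportion
    of the two remaining nucleotides; difference <= 0 -> 'poor',
    (0, 0.3] -> 'good', (0.3, 1] -> 'excellent'.
    """
    others = [b for b in "ACGT" if b not in alleles]
    total = sum(base_counts.get(b, 0) for b in "ACGT")
    if total == 0:
        return "poor"
    p_target = sum(base_counts.get(b, 0) for b in alleles) / total
    p_other = sum(base_counts.get(b, 0) for b in others) / total
    diff = p_target - p_other
    if diff <= 0:
        return "poor"
    return "excellent" if diff > 0.3 else "good"

print(heterozygosity_quality({"A": 6, "C": 5, "T": 1, "G": 0}))  # 'excellent'
```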
Analysis of accuracy with different sequencing error rates of TGS data
In this part of the experiment, we tested the accuracy of QIHC and Canu at different sequencing error rates of L; the experimental results are shown in Table 4.
Analysis of potential effects of the different prior probability distributions on the performance
So far, we have focused on the presentation of the overall framework of the algorithm and directly defined the homozygous and heterozygous prior probabilities as point probabilities. In this part of the experiment, we further discuss the potential effects of other prior probability distributions on performance. Here we chose the Beta distribution, for the following reasons: the Beta distribution can be understood as a probability distribution over probabilities, that is, it represents all the possible values of a probability when we do not know what that probability is. Going back to the background of our method, the prior probability P(c) is available in most cases, but in a few cases we cannot obtain P(c) explicitly, which is exactly the situation the Beta distribution is good at handling. Moreover, by adjusting the shape parameters of the Beta distribution, the probability density can be given various shapes, so the Beta distribution is sufficient to express our prior estimate of the probabilities. We let P(c) obey a Beta distribution, P(θ; a, b) = θ^(a−1)(1−θ)^(b−1)/B(a, b), where a and b are the shape parameters and θ is a reasonable guess of the probability of homozygosity or heterozygosity derived from experience in previous studies. Based on the characteristics of the Beta distribution, we varied the probability density by changing the values of the shape parameters a and b, and observed the potential effects of the different prior probability distributions on performance. Specifically, we made the expected value of the distribution equal to 0.5 (that is, a/(a+b) = 0.5), which means that the probability of homozygosity will most likely be around 0.5 but may reasonably fluctuate slightly; users can set this value according to their actual situations. Figure 4 shows four Beta distributions with expected value 0.5, obtained by changing the a and b values. We can see that as the values of a and b increase, the curve becomes sharper, that is, the prior probability distribution is more concentrated around the expected value. The results are shown in Table 5. Through the results we can see that when the prior probability ranged from 0.25 to 0.75, which covers common practice, the accuracy decreased slightly as the prior probability moved away from the expected value, but it still remained relatively stable. Specifically, in the heterozygous interval [0.2,0.8], the accuracy decreased from 0.964 to 0.95, and then to 0.94, with the corresponding prior probabilities being 0.5, 0.35, and 0.25, respectively. Similarly, when the prior probability changed in the opposite direction to 0.75, the accuracy also fell, to 0.952. Further, when the prior probability continued to drop to 0.1, the accuracy fell to 0.652, which means that the accuracy of the proposed method may degrade when the prior probability is at an extreme level. The experimental results therefore show that the accuracy reaches its optimum at the expected value of the Beta distribution and gradually deteriorates as the prior probability moves further away from the expected value. Obviously, the smaller the fluctuation of the prior probability, the smaller the impact on the posterior probability calculation, and thus the more stable the performance of QIHC.
For example, in the case of a = b = 5, the prior probability can even take more extreme values such as 0.1; although this situation is unlikely, it increases the instability of the correction result. When the shape parameters are very large (e.g., a = b = 300), the prior probability reasonably ranges from 0.45 to 0.55, which has little effect on the performance.
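A short sketch of how the shape parameters control this concentration, using scipy; it reproduces the qualitative behaviour described above (a = b = 300 keeps essentially all prior mass near 0.5, while small a = b allows values as extreme as 0.1).

```python
from scipy.stats import beta

for a in (2, 5, 50, 300):
    b = a                                 # keeps the mean a/(a+b) at 0.5
    dist = beta(a, b)
    lo, hi = dist.ppf(0.025), dist.ppf(0.975)
    print(f"a=b={a:3d}: 95% of the prior mass lies in [{lo:.2f}, {hi:.2f}], "
          f"std = {dist.std():.3f}")
```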
Conclusions
The third-generation sequencing (TGS) technology has demonstrated unique advantages in read length and other respects, providing great convenience for downstream analysis. As we start to see the promising potential of TGS, we must also be aware of where it might stumble. A high sequencing error rate is a major problem of TGS technology; therefore, correcting the sequencing errors is an inevitable step when applying TGS data. The existing error correction methods have quite complete correction strategies for normal sites, but they often do not consider the correction of heterozygous variation positions, an aspect that cannot be ignored. We have therefore proposed a method to break this limitation and solve error correction at heterozygous sites. Our novel error correction method, termed QIHC, adopts probabilistic models to deal with heterozygous variation sites while building on the advantages of the existing error correction methods. According to the sequencing principle, QIHC reasonably assumes that the mapped bases obey a binomial distribution, uses a Bayesian classifier to judge the heterozygosity of sites by calculating the posterior probabilities, and then performs error correction. In addition, QIHC generates a pseudo reference sequence, which makes our algorithm suitable for genomic data without reference sequences and achieves the efficiency of a single mapping that is reused repeatedly. In the simulation experiments, QIHC performs significantly better than Canu and Jabba at heterozygous variation sites, especially in the case of low coverage. From the comparison of Canu and QIHC, the performance of QIHC at low coverage is significantly superior to that of Canu in all aspects; as the coverage increases to 15×, the accuracy of QIHC also improves greatly, and although Canu has a slight upper hand in accuracy once the interference of low coverage is eliminated, it still lags far behind QIHC in heterozygosity quality. At low coverage, since Canu-correct is a self correction algorithm, the coverage of the TGS data is a key factor affecting the performance of Canu, making it worse than QIHC in many aspects; as the coverage goes up, Canu continues the principle of the Celera Assembler and adopts "Overlap-Layout-Consensus", that is, after the overlap of sequences, voting correction is performed directly according to the majority rule. QIHC adds probabilistic models for judging heterozygosity, so that even when its accuracy is slightly behind, it prevails in heterozygosity quality. For future work, we will try several assembly tools and generate contigs to optimize the correction results of QIHC as much as possible. Moreover, we will adjust the program code to optimize running time and memory consumption.
Methods
Let L denote a set of TGS reads and S denote a set of NGS data, respectively. Given L and S, QIHC uses the probabilistic models to judge heterozygosity through a Bayesian classifier, corrects reads from L based on an integration of self correction and hybrid correction mechanisms, and finally outputs the corrected set L'. No reference sequence is required as input. From the inputs to the outputs, QIHC comprises three major modules, in turn: 1) generating a pseudo reference sequence; 2) obtaining the read alignments; and 3) judging heterozygosity and correcting the reads. In the first module, a pseudo reference sequence is obtained through an assembly process, which can be done by any popular long-read assembly tool. Through this module, the inefficiency of repeatedly mapping S to each read from L in hybrid correction is avoided, and at the same time there is no need to narrow the scope of application of QIHC by requiring a native reference sequence as input. In the third module, the probabilistic models are used to calculate the posterior probabilities, and a Bayesian classifier is used to judge heterozygosity, that is, the largest posterior probability is selected for decision making; finally, the targeted correction strategies are implemented. This is our core module: without losing accuracy and while greatly increasing the sensitivity to heterozygosity, QIHC accomplishes the error correction of L.
Generating pseudo reference sequence
As the beginning of the method, we perform sequence assembly to obtain a pseudo reference sequence. The assembly process consists of the following four steps: Step 1: Load the reads from L and align all reads to each other to obtain a directed graph, in which each read is treated as a node.
Step 2: Compute overlaps between any two reads based on the Smith-Waterman algorithm and obtain the information of all possible overlaps. Specifically, we set the user parameters min_length, max_length and θ as the minimum length, the maximum length and a threshold score for an overlap, respectively. The Smith-Waterman algorithm is used to compute the score of the overlap between any pair of reads; of course, if there is no overlap between two reads, the corresponding score is 0 and the overlap length is also 0. When the overlap length of a pair of reads is between min_length and max_length, and the score is greater than θ, the overlap is established (a toy sketch of this overlap test is given after the assembly steps below).
Step 3: According to the overlaps, the reads from L are preliminarily assembled to obtain combined fragments, each defined as a contig.
Step 4: Scan again; if there are overlaps between contigs, merge them to form a new contig and delete the original contigs. In this way, we obtain the final contigs.
Finally, we link these contigs to obtain a pseudo reference sequence, Ref. Since the assembly principle of Canu achieves the purpose of our assembly idea, and Canu adds correction and trimming before assembly to obtain high-quality contigs, we choose Canu as the assembly tool.
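The overlap test of Step 2 can be sketched as follows: a basic Smith-Waterman local alignment supplies the score, and the min_length/max_length/θ rule decides whether the overlap is established. The scoring values and the way the overlap length is approximated (tracked alongside the DP score rather than by traceback) are simplifications, not the exact implementation.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Basic Smith-Waterman local alignment; returns (best_score, alignment_length).
    Alignment length is tracked alongside the DP score as a simple proxy for overlap length."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    L = [[0] * (m + 1) for _ in range(n + 1)]   # path length ending at each cell
    best_score, best_len = 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            up, left = H[i-1][j] + gap, H[i][j-1] + gap
            H[i][j] = max(0, diag, up, left)
            if H[i][j] == 0:
                L[i][j] = 0
            elif H[i][j] == diag:
                L[i][j] = L[i-1][j-1] + 1
            elif H[i][j] == up:
                L[i][j] = L[i-1][j] + 1
            else:
                L[i][j] = L[i][j-1] + 1
            if H[i][j] > best_score:
                best_score, best_len = H[i][j], L[i][j]
    return best_score, best_len

def overlap_established(read1, read2, min_length, max_length, theta):
    # The overlap is accepted when its length is within [min_length, max_length]
    # and its alignment score exceeds the threshold theta.
    score, length = smith_waterman(read1, read2)
    return min_length <= length <= max_length and score > theta

print(overlap_established("ACGTACGTGG", "TACGTGGCAA", min_length=5, max_length=50, theta=8))  # True
```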
Obtaining read alignment
After obtaining Ref, L and S are mapped to the pseudo reference sequence by BLASR [28], which has strong fault tolerance and can map almost all reads to Ref. Since BLASR may generate multiple mapping results for a read, sorted by the percentage of mapped bases, we keep the best mapping result for each read and divide L into Lm and Lu according to the percentage of mapped bases: the critical value is 90%, and a read with a percentage over 90% is assigned to Lm, otherwise to Lu. Next, S must be mapped to each read of Lu separately in order to perform a correction strategy different from that of Lm.
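A minimal sketch of the Lm/Lu split described above, assuming the mapped-base percentage of the best BLASR hit has already been extracted into a simple dictionary (the input format is a simplification of the real BLASR output).

```python
def split_by_mapped_fraction(best_hits, threshold=0.90):
    """Split long reads into Lm / Lu by the fraction of mapped bases of the best hit.

    best_hits: dict mapping read id -> fraction of bases mapped (0..1).
    Reads strictly above the threshold go to Lm, the rest to Lu.
    """
    Lm, Lu = [], []
    for read_id, mapped_frac in best_hits.items():
        (Lm if mapped_frac > threshold else Lu).append(read_id)
    return Lm, Lu

Lm, Lu = split_by_mapped_fraction({"read1": 0.97, "read2": 0.62, "read3": 0.91})
print(Lm, Lu)   # ['read1', 'read3'] ['read2']
```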
Judging heterozygosity and correcting reads
The highlight of QIHC is the heterozygosity judgment. The core idea is as follows: two sets of probabilistic models, based on a Bayesian classifier, are established for S and L respectively. Following the basic principle of a Bayesian classifier, we calculate the posterior probabilities of homozygosity and heterozygosity and take the hypothesis with the higher probability as the judgment result. More specifically, the comparison of the posterior probabilities is equivalent to the comparison of the products of the prior probability and the conditional probability of the observed bases. The prior probability is a fixed value obtained from the characteristics of the sequencing data, and the conditional probability follows a binomial distribution. Alleles are sorted by frequency of occurrence. In the heterozygous case, the two most frequent alleles are taken as the heterozygous alleles and the rest are treated as sequencing errors; in the homozygous case, the most frequent allele is taken as the homozygous allele and, similarly, the rest are sequencing errors. Each allele obeys its respective binomial distribution, and each successive term is calculated conditionally on the preceding terms.
Specifically, the heterozygosity judgment process is described in detail with respect to site i on Ref. Let L k , S t , B m k , b n t , and R i represent the kth long read, the tth short read, the mth base of L k , the nth base of S t , and the ith base of the reference sequence Ref, respectively. Figure 5 intuitively shows the dependency relationships and the distributions among Ref, L k , S t , B m k , b n t and R i . A mapped base may be one of the four nucleotides or null; thus, B m k and b n t have five possible alleles, namely A, T, G, C and the null label N. The mapping result is then subdivided: the number of long reads mapped to R i is defined as the read depth of the long reads, denoted RD i , and similarly the read depth of the short reads is denoted rd i . Processing the long reads mapped to R i , let X q denote the alleles sorted by frequency of occurrence from large to small, and |X q | denote the corresponding frequency, q = 1, 2, 3, 4, 5. Then we can write the binomial distribution for X q as X q ∼ Bin(D q , P 1 ). Similarly, the alleles and frequencies of occurrence of the short reads mapped to R i are denoted x q and |x q |, with x q ∼ Bin(d q , P 2 ). It should be noted here that P 1 and P 2 are the prior probability values for X and x respectively, which vary according to the different situations; the details are given in the probability model calculation part.
Therefore, the posterior probability for L can be calculated from Eq. (1), P(c|X 1 , X 2 , X 3 , X 4 , X 5 ) ∝ P(X 1 , X 2 , X 3 , X 4 , X 5 |c) × P(c), (1) where the value of c is homozygosity or heterozygosity. The probability for S is similar. For an allele whose occurrence frequency is 0, the corresponding term is removed in the actual calculation. In this way, two posterior probabilities are obtained by multiplying the above probabilities, namely the probabilities of observing the bases distribution under homozygosity and under heterozygosity, and the heterozygosity of the site is inferred from the maximum probability value. We then introduce the bases distribution, a new quantity that reflects how many kinds of alleles are mapped to R i , denoted dl, which can be computed as dl = Σ q=1..5 I(|X q | > 0), where I(·) is an indicator function that outputs 1 when its condition is true. Similarly, the bases distribution for the short reads is defined and denoted ds. The possible distributions of bases are given in Fig. 6, which contributes to the understanding of the heterozygosity judgment. After X q , x q , dl, ds, RD i and rd i are calculated, the heterozygosity judgment is performed. Let D i represent the bases distribution observed at site i. According to the bases distribution, five cases can be distinguished: d=1, d=2, d=3, d=4 and d=5, where d refers to dl or ds.
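In code, the per-site quantities introduced here (the ranked alleles X_q with their frequencies, the read depth, and the bases distribution dl) can be sketched as follows; the five-symbol alphabet A, T, G, C, N is assumed.

```python
from collections import Counter

def allele_stats(bases):
    """Sort the alleles mapped to a site by frequency (X_q) and compute the
    bases distribution dl = number of distinct alleles observed (A, T, G, C, N)."""
    counts = Counter(bases)
    ranked = counts.most_common()            # [(allele, count), ...] = (X_q, |X_q|)
    dl = sum(1 for _, c in ranked if c > 0)  # dl = sum_q I(|X_q| > 0)
    depth = sum(counts.values())             # RD_i (or rd_i for short reads)
    return ranked, dl, depth

print(allele_stats(["A", "A", "C", "A", "C", "T"]))
# ([('A', 3), ('C', 2), ('T', 1)], 3, 6)
```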
Case 1: If dl=1, then the site is directly judged to be homozygous; likewise, if ds=1, the site is directly judged to be homozygous.
Case 2:
If dl=2, then |X 1 | + |X 2 | = RD i , and the posterior probabilities of D i under homozygosity and heterozygosity are given by the formulas detailed in Additional file 1. After calculating the two posterior probabilities, the judgment result corresponds to the larger value.
The judgment principle for S is similar to that for L and is not described here; see the corresponding details in Additional file 1.
Case 3:
If dl=3, then |X 1 | + |X 2 | + |X 3 | = RD i , and the posterior probabilities of homozygosity and heterozygosity are given by Eqs. (5) and (6) (see the corresponding details of the calculation formulas in Additional file 1). Similar to the case d=2, we take the larger value of Eqs. (5) and (6) as the judgment result for L.
Case 4:
If dl=4, then |X 1 | + |X 2 | + |X 3 | + |X 4 | = RD i , and the posterior probabilities of homozygosity and heterozygosity are given by the formulas detailed in Additional file 1.
Case 5:
If dl=5, then |X 1 | + |X 2 | + |X 3 | + |X 4 | + |X 5 | = RD i , and the posterior probabilities of homozygosity and heterozygosity are likewise given by the formulas detailed in Additional file 1. So far, the strategy of the heterozygosity judgment has been given. In general, the input to this process is the bases distribution at site i, and different probabilistic models are applied to the different sources of reads; the final output is the heterozygosity result for site i. We then perform different correction strategies for Lm and Lu, respectively.
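The exact likelihoods for these cases are given in Additional file 1; the sketch below shows one plausible way the dl = 2 comparison could be organized, with a single binomial term per hypothesis and an assumed per-base error rate. It is meant only to illustrate how the posterior comparison separates a balanced heterozygous site from a site with a few stray error bases.

```python
from scipy.stats import binom

def judge_dl2(k1, k2, error_rate=0.15, p_hom=0.5, p_het=0.5):
    """Toy posterior comparison for a site where exactly two alleles are observed
    (dl = 2), with |X_1| = k1 >= |X_2| = k2 and read depth RD = k1 + k2."""
    rd = k1 + k2
    # Homozygous: X_1 is the true allele and the k2 bases of X_2 are sequencing
    # errors; a specific wrong base is assumed to occur with probability error_rate / 4.
    lik_hom = binom.pmf(k2, rd, error_rate / 4)
    # Heterozygous: X_1 and X_2 are the two alleles, each base drawn from either
    # allele with roughly equal probability (error term ignored in this toy model).
    lik_het = binom.pmf(k1, rd, 0.5)
    post_hom, post_het = p_hom * lik_hom, p_het * lik_het
    return ("heterozygous" if post_het > post_hom else "homozygous"), post_hom, post_het

print(judge_dl2(k1=9, k2=7))    # balanced counts -> heterozygous
print(judge_dl2(k1=15, k2=1))   # a single dissenting base at 16x depth -> homozygous
```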
For the correction of Lm, the inputs are the bases distributions of Lm and S and their heterozygosity judgment results. Correcting Lm is our goal, so R i on Ref is only used as an anchor point to locate the related reads of Lm and S, as shown in Fig. 5. Under the same anchor point R i , the heterozygosity judgments for the two bases distributions produce four possible combinations: heterozygous from Lm and homozygous from S; heterozygous from Lm and heterozygous from S; homozygous from Lm and homozygous from S; and homozygous from Lm and heterozygous from S. For these four combinations, QIHC makes a decision as follows: when the judgment results of Lm and S are consistent, the judgment result of S is adopted, since the sequencing accuracy of NGS is much higher than that of TGS; otherwise, the side whose judgment is homozygosity is accepted. The final heterozygosity judgment obtained in this way is defined as H m . According to H m , the following correction rules are applied: if H m is homozygosity, the site to be corrected is replaced with the allele that appears most frequently among the bases mapped to R i ; if H m is heterozygosity and the site to be corrected already carries one of the two most frequent alleles among the bases mapped to R i , the allele at this site is left as it is; otherwise, the site to be corrected is randomly replaced with one of the two most frequent bases.
According to the above decisions, all reads corresponding to the anchor point in Lm are corrected by these rules, and a correction result set Lm' is output.
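The decision and correction rules for Lm described above can be sketched directly; the function names and the string labels for the two hypotheses are illustrative.

```python
import random

def final_judgment(h_lm, h_s):
    """Combine the Lm and S judgments at one anchor site: if they agree, trust
    the S judgment (NGS is more accurate); otherwise take the homozygous side."""
    if h_lm == h_s:
        return h_s
    return "homozygous"

def correct_base(current_base, ranked_alleles, h_m):
    """Apply the correction rule to one long-read base at the anchor site.
    ranked_alleles: alleles mapped to R_i, sorted by frequency (most frequent first)."""
    if h_m == "homozygous":
        return ranked_alleles[0]
    top_two = ranked_alleles[:2]
    if current_base in top_two:
        return current_base                    # already one of the heterozygous alleles
    return random.choice(top_two)              # otherwise pick one of the top two at random

h_m = final_judgment("heterozygous", "heterozygous")
print(correct_base("T", ["A", "C", "T"], h_m))  # 'A' or 'C'
```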
For the correction of Lu, since Lu is the set of long reads that have not been successfully aligned to Ref, there is not enough correlation among the reads in Lu. Therefore, the inputs are the bases distributions of S mapped to the reads of Lu and their heterozygosity judgment results, and S is used to correct each read in Lu one by one. The basic principle is to obtain the final heterozygosity judgment result of S, named H u , and to correct Lu according to the following criteria: if H u is homozygosity, the site to be corrected is replaced with the allele that appears most frequently among the bases mapped to the base B m k ; if H u is heterozygosity and the base at the site to be corrected is already one of the two most frequent alleles among the bases mapped to B m k , the base at this site is left as it is; otherwise, the site to be corrected is randomly replaced with one of the two most frequent bases.
It is worth noting that the heterozygosity judgment and correction rules here only use the information provided by S. All reads in Lu are corrected, and a correction result set Lu' is output. Eventually, Lm' and Lu' together form L'.
Overall, we design an error correction algorithm with high sensitivity to heterozygosity. The algorithm mainly consists of the following steps:
Step 1: Assemble L and obtain contigs;
Step 2: Link the contigs one by one and obtain a pseudo reference sequence, Ref;
Step 3: Map L to Ref and obtain Lm and Lu;
Step 4: Map S to each read of Lu, obtain rd i , os (V n ) and ds, implement the heterozygosity judgment and save the result;
Step 5: For the current anchor site R i on Ref, obtain RD i , X q and dl from the reads of Lm mapped to R i , implement the heterozygosity judgment and save the result;
Step 6: For the same anchor site R i , obtain rd i , x q and ds from the reads of S mapped to R i , implement the heterozygosity judgment and save the result;
Step 7: Make the final judgment H m for Lm: if the results of step 5 and step 6 are consistent, the result of step 6 is adopted; otherwise, the step whose result is homozygosity is accepted; then jump to step 9;
Step 8: According to the result of step 4 and the correction rules mentioned above, correct each read of Lu and obtain the correction set Lu';
Step 9: According to the result of step 7 and the correction rules mentioned above, correct all reads of Lm corresponding to the anchor point R i , then load R i+1 and jump to step 5, until all sites on Ref are traversed, and obtain the correction set Lm';
Step 10: Combine Lu' and Lm' to get L'.
Additional file 1: Supplemental Material 1 - The corresponding details of the calculation formulas.
ZML prepared and provided the datasets. JQL, XX, and JW wrote this manuscript. All authors have read and approved the final version of this manuscript.
"Computer Science",
"Biology"
] |
RT-Cloud: A cloud-based software framework to simplify and standardize real-time fMRI
Introduction
Real-time functional magnetic resonance imaging (RT-fMRI) is an emerging technology that holds tremendous promise for breakthroughs in basic science and clinical applications. In contrast to traditional, offline fMRI analysis, RT-fMRI involves analyzing data while participants are still in the scanner, giving experimenters the ability to modify the stimuli or tasks that they present as a function of the participant's measured neural state. RT-fMRI can be used in neurofeedback designs, in which participants are given feedback on how well they are instantiating a target brain state, and they use this information to learn how to better instantiate that state (for a historical review of fMRI neurofeedback, see Linden et al., 2021; this review is part of a recent textbook on fMRI neurofeedback edited by Hampson, 2021). In another use of RT-fMRI, stimuli are modified as a function of brain activation. In a different study, participants given neurofeedback were able to improve their ability to sustain attention (deBettencourt et al., 2015). Researchers have even been able to induce perceptual learning of a particular stimulus orientation without visual presentation of that orientation and without participants becoming aware of what was being trained (Shibata et al., 2011).
Clinical studies have also used fMRI neurofeedback to treat neuropsychiatric and neurodevelopmental disorders; for a comprehensive listing of these studies as of mid-2020, see Table 1 from Linden (2021), and for a recent review see Taschereau-Dumouchel et al. (2022). To give one example, depressed patients who underwent fMRI neurofeedback training to increase amygdala activity while recalling positive autobiographical memories showed a decrease in depressive symptoms; these effects were specific to when neurofeedback was based on amygdala activation vs. activation of a control region in parietal cortex (Young et al., 2017; for further discussion see Young et al., 2018, 2021). Notably, several clinical studies have obtained promising results using the Decoded Neurofeedback (DecNef) approach, in which participants are given feedback to boost activation of a particular neural pattern without being told what the neural pattern is (for recent reviews, see Shibata et al., 2019; Taschereau-Dumouchel et al., 2021; Watanabe et al., 2017). For example, one DecNef study showed that training individuals with snake or spider phobias to activate a pattern corresponding to snakes or spiders (respectively), in the absence of viewing the phobia-triggering stimuli, and without knowledge that the target neural patterns related to snakes or spiders, led to decreased skin conductance fear responses to these stimuli (Taschereau-Dumouchel et al., 2018; for an example of a similar approach to treating post-traumatic stress disorder, see Chiba et al., 2019). Other clinical studies have obtained promising results by providing feedback based on functional connectivity. For example, Ramot et al. (2017) used RT-fMRI neurofeedback in individuals with Autism Spectrum Disorder (ASD) to reinforce functional connectivity (i.e., correlation in fMRI timeseries) between brain regions that are underconnected in ASD relative to controls; the training led to increases in functional connectivity that lasted up to a year and were correlated with improvements in behavioral symptoms.
Challenges with RT-fMRI
Importantly, despite the strong potential of RT-fMRI, its uptake has been limited by several factors. First and foremost, setting up real-time analysis pipelines is technically challenging: A real-time communication bridge needs to be built to connect scripts across multiple processes so that an incoming DICOM image can be transferred and analyzed, and then participant feedback can be presented based on the analysis results in a timely fashion. Clinical sites in particular may lack the software engineering and IT expertise needed to assemble this pipeline. A second factor is that the computational complexity of fMRI analysis has escalated substantially over the past two decades. Whereas, previously, the field relied almost exclusively on univariate measures (e.g., average activation in a region of interest), researchers have increasingly come to rely on more computationally-demanding and sensitive multivariate analyses (e.g., pattern classifiers and functional alignment algorithms; Cohen et al., 2017). This increase in the use of multivariate methods has occurred both for offline analysis and also real-time analysis (e.g., deBettencourt et al., 2015, 2019; Iordan et al., 2020; LaConte et al., 2007; Shibata et al., 2011; Wang et al., 2016). A third factor relates to the lack of open standards for RT-fMRI research. Offline fMRI analysis has been the focus of significant efforts at standardization: for example, development of the Brain Imaging Data Structure (BIDS; Gorgolewski et al., 2016) and fMRIPrep (Esteban et al., 2019). However, these new standards generally have not been applied in the RT-fMRI domain. This lack of standards has led RT-fMRI researchers to use a wide variety of different (and incompatible) pipelines in their research, which in turn has made it more difficult to share work, reproduce results, and reuse components.
Addressing the challenges of RT-fMRI using cloud computing
As described above, many of the challenges of RT-fMRI revolve around building and deploying a set of coordinating software components and harnessing enough computing power to complete analysis in time. This general set of challenges is not unique to RT-fMRI and in fact is common among many computer applications including e-commerce, data analytics, AI, and web-based communication. A common theme in the past decade has been to harness the power of cloud computing to simplify, standardize and reduce the cost of creating and deploying applications.
A classic example of an application that has primarily moved to the cloud is email. Without the cloud, a company would need to install an email server in their server room, and then install email clients on all employee computers. Whenever there would be an application update, the IT group would need to push out the new email client to all employee laptops. Some percentage of users would encounter a problem because of an outdated OS, insufficient storage space, the wrong libraries, and so on. With email running in the cloud, the server runs in the cloud and the employees access their email through a web browser. There is no installation on each computer and employees can access their email from anywhere. If the email server becomes too slow, it can be scaled up instantly and more storage can be added as needed. This on-demand model is known as Software-as-a-Service, or SaaS.
Here, we describe RT-Cloud, a newly-developed, open-source software framework written in Python 3 that leverages cloud computing and SaaS to address the challenges of RT-fMRI (https://github.com/brainiak/rt-cloud). With RT-Cloud, the real-time fMRI analysis software is installed on cloud computers and is accessible from any web browser; only one lightweight software component is installed locally (to forward images up to the cloud). This setup makes it possible to run highly complex fMRI analyses in real time, even in situations where the scanning facility does not itself have extensive computing resources or IT expertise. To facilitate the sharing of pipelines and data, RT-Cloud also takes advantage of the BIDS data standard for fMRI, as described in the Integration with BIDS section below. In this section, we provide an overview of the key advantages that cloud computing and SaaS provide for RT-fMRI: ease of setup and maintenance, lowering of costs, ease of scaling, and accessibility from anywhere; in the section after this one, we describe the RT-Cloud framework in more detail.
• Ease of setup and maintenance. With cloud-based computing, all installation and maintenance are on a single cloud virtual machine (VM) image and that image is used to instantiate each VM at startup, thus ensuring that all instantiations are identical. This means that quality control can be addressed centrally, ensuring that the project meets strict and consistent standards for system functionality, library compatibility and security. In addition, the installation and maintenance can all be done remotely by a centralized team - and often just a single person - rather than separately at each facility, thus reducing IT costs and time. This is especially important when deploying in clinical settings where the availability of specialized hardware and technical staff to do software installations and maintenance may be limited.
• Lowering of costs. Cloud is a "pay for what you use" model. So, for example, two hours of cloud computer time to run a session will cost about $1-$2. Cloud savings occur because of the ability to spin up and spin down hardware on demand. Within our framework, cloud VMs are only running during the scanning session and then are stopped when the session is done. This on-demand nature eliminates the primary cost of owning hardware, which is equipment idle-time.
• Ease of scaling. Cloud computing also enables system scaling, both out and up. Scaling out is the addition of more of the same type of resource (i.e. more VMs) to accommodate more simultaneous studies. This is important to enable large-scale deployment with simultaneous usage at multiple sites. Scaling up is the allocation of larger VM instances (faster, more cores, more memory) to accommodate higher processing demands of an individual experiment. This is very helpful as experimental designs and computational demands change. The result is that, rather than committing to a $10,000 piece of hardware today, only to find it is insufficient tomorrow (or conversely is overpowered and thus mostly idle), you can simply rent the right VM size today and change it up or down at any time.
• Accessibility from anywhere. The Software-as-a-Service (SaaS) model is transforming many industries by simplifying application deployment; as noted in the email example above, this model involves installing an application on the cloud and having users access the application through a web browser, without installing libraries or software on their local computer or laptop. This has major advantages in the context of fMRI. Since MRI scanners are typically heavily booked and tightly scheduled, experimenters do not want to waste time in the control room logging in to software and entering session configurations. SaaS makes it possible for researchers to do these preliminaries outside of the control room on any web-accessible computer, so they will be ready to immediately begin the experiment when they get into the control room. It also makes it possible to install, maintain and test the experiment from a laptop outside of the control room, and then use the same interface to run the experiment, thus ensuring smooth operation. As improvements are made to the service, users can have access to those immediately, never having to wonder or check if their computer has enough memory, the right OS version, the right libraries, and so on.
Overview of framework
The RT-Cloud software framework provides the basic infrastructure needed to run an RT-fMRI experiment. The framework wraps experiment-specific code that the researcher provides, thus providing a pluggable model that reduces the complexity and time of setting up and running an experiment. RT-Cloud was co-developed with the BrainIAK suite of Python tools for advanced fMRI analysis (https://brainiak.org; Kumar et al., 2021 ). Users can deploy BrainIAK analysis modules in their custom analysis code, or they can use other tools if they wish. During execution, the RT-Cloud framework handles details such as starting and stopping the analysis pipeline, getting fMRI images in real-time, and handling data communication between components such as for images, analysis results, and participants' responses.
The framework has two major components, the FileWatcher and the ProjectServer. The FileWatcher watches for and forwards DICOMs as they arrive from the MRI scanner, while the ProjectServer coordinates between the FileWatcher, the researcher's analysis script, and the feedback presentation shown to participants. In addition, the framework has two web interfaces: the Experiment Control web page, which allows the researcher to control the experiment session, and the Subject Feedback web page, which can be used to present stimuli to participants.
To elaborate, the FileWatcher runs on a computer in the control room and requires minimal processing power. Once parameters on the scanner computer are set so that DICOM images are made accessible, the FileWatcher registers for file-system notifications to watch for the arrival of new DICOM images from the scanner. Each new DICOM is read, converted to a BIDS format using the BIDS-Incremental system that we created for this purpose (see the Integration with BIDS section for more details), and forwarded to the ProjectServer using standard network protocols.
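As an illustration of the file-watching idea (not RT-Cloud's actual FileWatcher, which additionally converts each DICOM to a BIDS-Incremental and forwards it over the network), the sketch below uses the Python watchdog package to react to new DICOM files; the directory path and the forwarding function are hypothetical placeholders.

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def forward_to_projectserver(dicom_path):
    # Placeholder: in RT-Cloud this step would package the image and send it
    # to the ProjectServer; here we only print the path.
    print(f"forwarding {dicom_path}")

class DicomHandler(FileSystemEventHandler):
    def on_created(self, event):
        # React only to new DICOM files written by the scanner.
        if not event.is_directory and event.src_path.endswith(".dcm"):
            forward_to_projectserver(event.src_path)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(DicomHandler(), path="/export/scanner_dicoms", recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```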
The ProjectServer is deployed on a system with enough processing power to quickly analyze the incoming brain-volume data in time for the participant to receive neurofeedback, in about 1 to 2 s. This component usually runs on the cloud but it can also run on a local computer or cluster. As noted in the previous section, running this component in the cloud makes it possible to scale computing resources as a function of the complexity of the analysis, by either scaling up the VM instance or splitting the processing into parallel components across multiple VMs.
The experiment can be controlled by interacting with the ProjectServer via the Experiment Control web page, which is typically accessed from a laptop. This web page allows the researcher to change configuration settings, start and stop runs, and view analysis results in real time. The Subject Feedback web page is made visible to the participants; this web page can be flexibly used to present stimuli to participants, including (but not limited to) neurofeedback based on the results of the RT-fMRI analysis. Moreover, this web interface can also collect behavioral responses (e.g., button presses) from participants (see the Integration with Experiment Scripting Packages section below). Fig. 1 illustrates how the framework components fit together to complete the neurofeedback loop.
Integration with BIDS
To facilitate the re-use of existing pipelines and data, RT-Cloud supports BIDS, the leading standard for fMRI data (Gorgolewski et al., 2016). BIDS is supported by a wide variety of formatting and analysis tools and data repositories. For example, BIDS is used by the OpenNeuro database (Gorgolewski, Esteban, et al., 2017; Markiewicz et al., 2021), a large and growing repository of neuroscience datasets (incorporating fMRI and also other data types). BIDS has an automated and comprehensive validation tool that analyzes datasets for compliance and identifies issues (Gorgolewski et al., 2016). It also is the data format used by "BIDS Apps", which are container-based applications with a standardized interface that work on BIDS-formatted datasets (Gorgolewski, Alfaro-Almagro, et al., 2017).
BIDS archives include brain volumes stored in NIfTI format, and have meta-data stored in separate "sidecar" files, typically with JSON or TSV (tab separated value) formatting. BIDS archives also have a standard directory structure and file naming convention. Included in the file names are "BIDS entities" such as the subject name, session, task, and run.
BIDS is designed as an archival standard (i.e. for data-at-rest), but RT-fMRI requires streams of data in order to process brain-volumes as they arrive from the scanner. An adaptation is required to support streaming data in a BIDS-compliant manner. Here, we introduce a data structure called the "BIDS-Incremental ", which is an in-memory BIDS archive with only one brain volume and associated meta-data in it. BIDS-Incrementals allow us to package and stream DICOM images one at a time as they arrive off the scanner and send them to the ProjectServer for analysis.
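Conceptually, a BIDS-Incremental can be pictured as a small in-memory container holding one brain volume plus its BIDS metadata. The Python dataclass below is a schematic illustration only; the field and method names are ours and do not reproduce RT-Cloud's actual class or API.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class BidsIncremental:
    """Schematic stand-in for a single-volume, in-memory BIDS archive."""
    volume: np.ndarray                                    # one 3D brain volume
    image_metadata: dict = field(default_factory=dict)    # JSON sidecar fields (TR, task, subject, ...)
    events: list = field(default_factory=list)            # rows of the BIDS events.tsv
    dataset_description: dict = field(default_factory=dict)
    readme: str = ""

    def entities(self):
        """Return the BIDS entities (subject, session, task, run) used for file naming."""
        keys = ("subject", "session", "task", "run")
        return {k: self.image_metadata.get(k) for k in keys}

# Example: wrap a dummy 64x64x36 volume acquired during task 'faces'
inc = BidsIncremental(volume=np.zeros((64, 64, 36)),
                      image_metadata={"subject": "01", "task": "faces", "run": 1,
                                      "RepetitionTime": 2.0})
print(inc.entities())
```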
BIDS-Incrementals must hold the proper data structures to be compatible with a BIDS archive. These data structures include: the NIfTI image; a metadata dictionary describing the image; and several supporting data structures that map to files in a BIDS archive (namely, the events file that includes information about the stimuli presented to the participant, a README file with general information about the experiment, and a JSON file that describes the dataset). In addition, the data structure must be fast enough to support all operations well within the typical 1-2 second repetition time (TR) window in which a brain volume is acquired. There are three main operations that the software supports: 1) creating a BIDS-Incremental from a DICOM image; 2) appending a BIDS-Incremental to a BIDS archive; and 3) reading a BIDS-Incremental from a BIDS archive; these operations are illustrated in Fig. 2. The first two operations support streaming live data as it arrives from a scanner and accumulating it into a BIDS archive during processing. The third supports "replaying" or re-processing, through a real-time pipeline, data that have been previously collected and stored in a BIDS archive. This "replay" workflow is very useful for testing an analysis pipeline. Instead of collecting new data in real time, users can just take an existing dataset and run it through the pipeline to see how well everything is working (see section below on Integration with OpenNeuro).
Fig. 1. (2) The ProjectServer, which wraps the researcher's code, processes the NIfTI image and runs the analysis code to obtain a measure of the participant's brain state (e.g., whether they are attending to a face or a scene). The researcher accesses the cloud application from a browser page that can run on a laptop. Among many things, the researcher can initiate/finalize the session, change settings, and even observe the graph output of the analysis results from this browser page. (3) The analysis results are provided to the participant as neurofeedback presented on a display screen in the MRI room. Note that RT-Cloud can also be installed on a local computer or cluster node in lieu of using the cloud. Figure adapted with permission from Kumar et al. (2021).
Fig. 2. BIDS Support:
We have adapted the typical data-at-rest BIDS standard to real-time streaming by developing a BIDS-Incremental system. As data arrive from the scanner, we create single-volume BIDS archives that are streamed to the real-time analysis engine on the cloud. These single-volume archives can be appended in the cloud to create a BIDS archive that encompasses the entire run, and they can also be replayed one volume at a time to simulate a real-time experiment.
To satisfy real-time requirements, these BIDS-Incremental operations must be completed quickly, ideally within a couple of tens of milliseconds, to avoid impacting overall real-time deadlines. Initial implementations of the BIDS-Incremental relied on disk-backed operations, such as appending to or reading from an on-disk BIDS archive. However, as an experiment continues and accumulates data, the on-disk archive grows in size and eventually operations exceed the completion-time threshold. To counter this issue, we leverage the fact that real-time fMRI users typically work one scanning run at a time; by caching a run's worth of BIDS data in memory, we can optimize the operations that fall in the time-critical portion of an experiment. Other operations, like writing or reading a run's worth of data to or from disk, can be done before or after the time-sensitive section of a real-time fMRI workflow. We developed this mechanism into a data structure called a "BIDS Run", an in-memory cache of BIDS-Incrementals corresponding to one run in the experiment.
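A minimal sketch of the "BIDS Run" idea follows (class and method names are illustrative, not RT-Cloud's API): incrementals for the current run are appended to an in-memory list during the time-critical loop, and the accumulated run is flushed to an on-disk archive only once the run has ended.

```python
class BidsRun:
    """Schematic in-memory cache of BIDS-Incrementals for a single scanning run."""

    def __init__(self):
        self._incrementals = []

    def append_incremental(self, incremental):
        # Fast, in-memory append: safe to call inside the per-TR time budget.
        self._incrementals.append(incremental)

    def get_incremental(self, index):
        return self._incrementals[index]

    def num_volumes(self):
        return len(self._incrementals)

    def flush_to_archive(self, archive_path):
        # Slow, disk-backed step: intended to run after the run ends, outside the
        # time-critical portion of the experiment.
        for i, inc in enumerate(self._incrementals):
            print(f"would write volume {i} and its sidecar metadata to {archive_path}")
```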
The data structures described above, which provide BIDS support within the framework, are implemented on top of PyBIDS, a software package produced by the BIDS standard maintainers that provides a set of utilities for interacting with and manipulating BIDS data (Halchenko et al., 2020). In addition, NiBabel (Brett et al., 2020) and dcm2niix are used for NIfTI image handling.
Supporting BIDS within RT-Cloud has many benefits. Data that are collected and stored in BIDS format can be readily understood and used by other researchers and can be replayed through the RT-Cloud framework. In addition, if an RT-Cloud experiment is encapsulated as a BIDS App (see Current and future directions section below), then the full software environment needed to run the experiment is made available to users, including not only the analysis scripts, but also any libraries and configurations required. If a user shares both their BIDS App and their BIDS data with a second user, the second user will have everything needed to replay, validate, and modify the study.
Fig. 3. OpenNeuro Integration:
RT-Cloud, through its support for BIDS data, can download and stream datasets from OpenNeuro in order to test and validate experiment pipelines. In addition, the BIDS-formatted data resulting from an experiment can be uploaded by the researcher to OpenNeuro to share with the neuroscience community.
Integration with OpenNeuro
To support the "data replay " functionality described in the previous section for simulating RT-fMRI studies, we also added a feature that connects RT-Cloud with the OpenNeuro database ( Gorgolewski, Esteban, et al., 2017 ;Markiewicz et al., 2021 ). With this connectivity, it is possible to stream any dataset stored on OpenNeuro through an RT-Cloud analysis pipeline. Data are processed using a consistent BIDS-Incremental format whether new or replayed ( Fig. 3 ).
Specifically, we provide an OpenNeuroService component that will download datasets (or parts of datasets such as particular subjects or runs) from OpenNeuro and make them available for streaming; this OpenNeuroService component can be run on any computer (for example, a cloud VM). When testing or validating their experiment, researchers can connect to the OpenNeuroService component and specify the dataset accession number, subject name, session, and run number in order to stream that previously-collected data through their analysis pipeline.
This can be thought of, in an initial way, as a kind of "Netflix for neuroscience data ". The data are housed in the cloud and can be accessed, streamed, and processed on-demand. This has benefits not only for validating previous experiments but also for building and testing new experiments. Before an analysis pipeline is deployed in a "live " experiment, users can stream previously-collected data through it to test for processing errors and/or develop and benchmark new analysis approaches.
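As an illustration of the replay idea, the sketch below streams a previously collected 4D NIfTI run one volume at a time at the nominal TR, feeding each volume to an analysis callback; the file name and callback are hypothetical, and RT-Cloud's OpenNeuroService additionally handles the download from OpenNeuro and the BIDS-Incremental packaging.

```python
import time
import nibabel as nib

def replay_run(nifti_path, analyze_volume, tr_seconds=2.0):
    """Stream an existing 4D NIfTI file volume-by-volume to simulate real-time arrival."""
    img = nib.load(nifti_path)
    n_vols = img.shape[3]
    for t in range(n_vols):
        vol = img.dataobj[..., t]          # lazily read a single 3D volume
        analyze_volume(t, vol)             # stand-in for the real-time analysis pipeline
        time.sleep(tr_seconds)             # pace the stream at the acquisition TR

# Example usage with a hypothetical OpenNeuro-derived file and a trivial "analysis"
replay_run("sub-01_task-faces_run-1_bold.nii.gz",
           analyze_volume=lambda t, vol: print(f"TR {t}: mean signal {vol.mean():.1f}"))
```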
Integration with experiment scripting packages
All RT-fMRI studies need some way of controlling how the experiment unfolds (i.e., which stimuli to present to participants) as a function of the results of the real-time fMRI analysis. To accomplish this goal, RT-Cloud has been designed to integrate with behavioral feedback scripting frameworks like JsPsych ( De Leeuw, 2015 ), PsychoPy ( Peirce et al., 2019 ), and PsychToolbox ( Kleiner et al., 2007 ). The ProjectServer of RT-Cloud has a websocket based communication layer that allows remote scripts to make a connection to the ProjectServer and receive analysis results. These results can then be used to adjust the stimuli or task displayed to the participant in the MRI scanner.
The ProjectServer can control how stimuli are displayed in several different ways depending on the requirements of the experiment. The most straightforward approach is to use the SubjectFeedback web page served up by RT-Cloud for stimulus presentation and behavioral data collection. Researchers taking this approach can use browser-based presentation toolboxes such as JsPsych (De Leeuw, 2015) to script their experiment. In this use case, the JsPsych framework is served up by RT-Cloud's web server and runs directly in a web browser that is viewable by the participant in the MRI scanner. For this reason, it requires no installation for use, just pointing a web browser at the ProjectServer URL. This is particularly convenient for deploying real-time experiments in clinical settings where computer hardware availability is uncertain. RT-Cloud provides a JsPsych module for receiving fMRI analysis results; by default, this module renders a DecNef-style feedback display, in which the radius of a displayed circle indicates the correspondence to the desired neural state. This can easily be extended to other feedback types by extending the draw function within the example.
Another approach to displaying stimuli, which could work with almost any presentation system, is to set up the presentation software on a separate computer, and to configure the ProjectServer to write each analysis result back to the presentation computer as a small text file, with a filename corresponding to the trial and containing only one floating point value, i.e. the analysis result. The presentation script can watch for the creation of such files and read them to adjust the presentation feedback. As described in the Real-world validation of the framework section below, we have used this approach in multiple studies to interface RT-Cloud with the PsychToolbox stimulus presentation software. A related, more streamlined, approach is for the stimulus presentation software (running on a separate computer) to receive the analysis results directly over a websocket connected to the ProjectServer; any scripting method that has support for websockets can use this approach. For example, this approach can be used to send analysis results from the ProjectServer to experiment scripts that were built using the Python-based PsychoPy stimulus presentation system (Peirce et al., 2019).
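For the websocket-based variant, a presentation script can receive each analysis result with a few lines of Python. The sketch below uses the third-party websockets package; the endpoint URL, message format, and drawing function are placeholders, since RT-Cloud's actual message schema is not reproduced here.

```python
import asyncio
import json
import websockets

def update_feedback_display(value):
    # Stand-in for PsychoPy/PsychToolbox drawing code that scales the feedback stimulus.
    print(f"feedback value: {value:.2f}")

async def feedback_listener(uri="ws://localhost:8888/wsSubject"):
    # Connect to the ProjectServer's websocket endpoint (URL shown is a placeholder).
    async with websockets.connect(uri) as ws:
        while True:
            message = await ws.recv()          # one analysis result per TR or trial
            result = json.loads(message)       # assumed JSON payload, e.g. {"value": 0.73}
            update_feedback_display(result["value"])

# asyncio.run(feedback_listener())  # uncomment to run against a live ProjectServer
```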
Real-world validation of the framework
In this section, we discuss our experiences with real-world validation of the RT-Cloud framework. Thus far, two studies have been completed using previous versions of the RT-Cloud framework. The first study, referred to here as RT-Attention, was a clinical study conducted at Penn Medicine; the focus of this study was to train participants with major depressive disorder (MDD) to disengage their attention from negatively-valenced stimuli (Mennen et al., 2021). The second study, referred to here as GreenEyes, was conducted on healthy participants at Princeton University; this study aimed to bias participants toward a particular interpretation of an ambiguous story (Mennen et al., 2022).
In the RT-Attention study ( Mennen et al., 2021 ), MDD participants ( N = 15) and healthy controls ( N = 12) were shown a superimposition of two images, a neutral scene and a negatively valenced face. They were asked to focus on the scene and ignore the face. A multivariate classifier was trained to track how strongly participants were attending to the scene vs. the face. Feedback was provided by varying the relative visibility of the scene vs. the face: The more that participants got distracted and attended to the face (as measured by the classifier), the more visible the face became, and the less visible the scene became. This had the effect of externalizing and amplifying internal attentional lapses, making the task of judging the scene more difficult. The key dependent measure was how well participants were able to recover from these lapses and resume attending to the scene. We found that, at the outset of training, MDD patients were less able than healthy controls to recover from attentional lapses (i.e., their negative attention to the face was more "sticky "), but -by the end of training -MDD patients had significantly improved on this measure relative to controls.
This study was conducted at Penn Medicine with 27 participants, each completing three neurofeedback training sessions across different days. The challenge was to deploy an RT-fMRI study in a clinical setting, where we had to ensure that our implementation of RT-fMRI did not disrupt any of the other clinical studies underway at the facility. After developing the study at Princeton, we used the RT-Cloud framework running in the cloud, in coordination with a local Linux computer, to deploy the study at the Penn Medicine imaging facility. Our RT-fMRI software was the first cloud-based application deployed by the Penn Medicine IT group, thus breaking ground on the administrative as well as technical requirements for a study of this type. This effort won a Fierce Innovation Award.
The cloud framework was installed within Penn Medicine's account on the HIPAA-compliant Microsoft Azure Cloud and integrated with on-premise resources via a Virtual Private Network (VPN). The Penn IT team set up the virtual machine (VM) and did security scanning to ensure compliance. During sessions of the study, a version of the RT-Cloud server was running in the cloud and data were sent to it in the form of masked 2D arrays of DICOM volume data (this study used an earlier version of the framework that pre-dated our use of BIDS-Incrementals). The analysis script in the cloud did smoothing, high-pass filtering, z-scoring and classification of the brain image data. The runs were divided into interleaved blocks of trials used for classifier training (where participants attended to faces or scenes and neurofeedback was not provided) and neurofeedback trials. The cloud framework supported both classifier training and the use of the trained classifier during neurofeedback, as specified by a session configuration file. The classification scores were sent back to the presentation computer in the control room, saved as a text file, and then read by a PsychToolbox script to provide participants with feedback by altering the relative visibility of the face and the scene.
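The per-TR analysis just described (training a classifier on labeled blocks, then z-scoring and classifying each new ROI pattern) can be sketched as below. This is a simplified stand-in using scikit-learn and toy data, not the study's actual code, and it omits the smoothing and high-pass filtering steps for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Classifier training phase: ROI patterns from attend-face vs. attend-scene blocks (toy data)
train_patterns = rng.normal(size=(80, 500))       # 80 TRs x 500 ROI voxels
train_labels = np.repeat([0, 1], 40)              # 0 = face, 1 = scene
clf = LogisticRegression(max_iter=1000).fit(train_patterns, train_labels)

def classify_tr(roi_voxels):
    """Z-score one TR's ROI pattern and return the probability of the 'scene' state."""
    z = (roi_voxels - roi_voxels.mean()) / roi_voxels.std()
    return clf.predict_proba(z.reshape(1, -1))[0, 1]

# Neurofeedback phase: each new volume yields one feedback value in [0, 1]
print(classify_tr(rng.normal(size=500)))
```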
In the GreenEyes study (Mennen et al., 2022), participants listened to an ambiguous story and were given neurofeedback in order to steer their interpretation to one of two interpretations (randomly assigned for each participant); the story stimulus used here was the same as the stimulus used in Yeshurun et al. (2017). The study involved two one-hour scanning sessions per participant (N = 20). Here, the ProjectServer was run on a VM in the Microsoft Azure cloud. During scanning sessions, as DICOM images arrived, they were anonymized and sent to the cloud server for processing. On the cloud, the BOLD data were registered to MNI space, preprocessed, and then processed using a Shared Response Model (Chen et al., 2015) and then a support vector machine classifier to produce the neurofeedback values. These values were returned to the presentation computer in the control room and written to a text file which was read and used by a PsychToolbox script to update the feedback display. Control of the ProjectServer was accomplished via a web page that was accessed from the researcher's laptop.
The following sub-sections describe key take-away points from these real-world validation studies.
Timing and responsiveness
In these real-time experiments, a new brain scan was performed every 1.5 or 2 s (i.e. the TR). Thus, we had to read and process the scan image and provide the classification feedback within that time window. On the Siemens scanning system used in these studies, it took about 700 ms from the completion of the scan until the reconstructed DICOM image was available on the scanner computer's disk. Our FileWatcher then read the DICOM and transferred it to the cloud, taking about 100 ms. The classification was performed in 200 ms. The classification result was returned from the cloud in 20 ms. Thus the total processing time was slightly more than 1 second. Using a TR of 2 s (a typical TR in RT-fMRI experiments) gives plenty of leeway. Reliability of network transfer to the cloud has not been an issue for us in the more than 100 scanning sessions that we have run as part of the aforementioned two studies. We do note that a network outage would of course prevent running a session; however, it is possible that such an outage would prevent even an on-premise real-time study, depending on where the outage occurs. In our validation studies, there were very rare occasions where processing did not complete in time for a particular TR or trial. To handle this, we simply configured the experiment presentation script so that - if the feedback value was missing - the script delivered the same feedback value as on the previous TR or trial. Note that this error-handling is up to the user; if a user wanted to handle missing feedback in a different way, they could set up the script differently.
Network bandwidth requirements
An RT-fMRI session requires about 2 Mbps (Megabits per second) average bandwidth and 20-40 Mbps of peak upload bandwidth. A typical session sends a 0.5 megabyte (MB) image every 2 s and we want the image transfer to complete in 100-200 ms. Having a 20-40 Mbps peak upload bandwidth allows the image transfer time constraint to be met. The average bandwidth is much lower than the peak because of idle time between images. The reply from the cloud-based ProjectServer is usually only a few bytes (such as a floating point number) and requires minimal bandwidth.
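These figures follow directly from the image size and the timing constraints, as the following back-of-the-envelope calculation shows:

$$
0.5\ \mathrm{MB} \times 8 = 4\ \mathrm{Mb\ per\ image};\qquad
\frac{4\ \mathrm{Mb}}{2\ \mathrm{s}} = 2\ \mathrm{Mbps\ (average)};\qquad
\frac{4\ \mathrm{Mb}}{0.1\text{-}0.2\ \mathrm{s}} = 20\text{-}40\ \mathrm{Mbps\ (peak)}.
$$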
All of the facilities where we have deployed the framework have had adequate bandwidth and low enough latency to the cloud to support these workloads, and we expect that these bandwidth requirements will be well covered at typical scanning facilities. In general, network infrastructure bandwidths have been increasing to meet on-demand video streaming and as of 2020, Speedtest.net estimates the average U.S. fixed internet speed is 135 Mbps download / 52 Mbps upload. HealthIT.gov recommends that hospitals have a minimum of 100 Mbps service, and that academic / large medical centers have 1000 Mbps.
Data privacy and security
Working with medical data presents concerns and challenges in maintaining data security and privacy. Additional concerns are often raised when using cloud infrastructure. One of the first steps is to ensure that the computational and storage infrastructure are certified to be HIPAA compliant. Both Microsoft Azure and Amazon AWS have HIPAA certification. For the Penn Medicine-based RT-Attention study, we chose Microsoft Azure, used anonymized image data with only the ROI voxels being sent for processing, used a private VPN network for data communication, used encrypted storage in the cloud, and deleted data from the cloud after each session. This provided a strong security model and met IRB approval. For the GreenEyes study, we used similar measures but sent anonymized DICOM images (for which all identifying header information was removed).
Over time, cloud-based infrastructure will likely become more trusted than on-premise infrastructure. Typically, on-premise facilities have a limited IT staff responsible for securing and patching all computers. Cloud vendors have a much larger IT staff and automated tools and will be more on top of security patches and issues. The number of high-profile security breaches at large companies with on-premise data illustrates this point. The aim is, of course, to use the best security practices available, and cloud providers are incentivized to provide state-of-the-art solutions that can be employed.
Cloud costs
Costs associated with cloud computing can include VM rental, data storage, data transfer, and other services like persistent IP addresses, logging etc. Cloud data transfers typically have asymmetric costs (free to transfer data in, charge to read data out). Azure is free to transfer in and provides 5GB/month free outbound transfer and charges $0.08/GB thereafter. In studies using RT-Cloud, image data are transferred in (free inbound), and analysis results (which are typically very small files) are transferred out; at the end of an experiment, data archive files can be transferred out to a more permanent storage site (possibly cloud storage, possibly on-premise or in a data repository like OpenNeuro). The final archive transfer out may be a couple of GBs in size, costing about $0.20. As a cost example, our Penn Medicine based RT-Attention study used Azure D16s VMs which cost about $0.80/hour, and our monthly bill was typically much less than $100, with a few dollars of it being spent on network transfer.
Adapting the framework to different computing environments
The purpose of RT-Cloud is to make RT-fMRI easily accessible to researchers. This includes handling core functionality like watching for new DICOM images, handling real-time scheduling, providing pathways for feedback, and making configuration and user interaction easier. Using cloud computing can be a simpler and cheaper option, but some facilities already have computer clusters or other resources in place, and our framework is adaptable to fit into many different compute configurations. Some configurations in which our software framework has been deployed include:
• At Penn Medicine, for the RT-Attention study, we split the ProjectServer into two parts: the classification model ran in the cloud, and the other parts of the ProjectServer ran on a control-room computer. Participant feedback was provided using MATLAB PsychToolbox (running with the ProjectServer on the control-room computer), with classification results being passed in via small text files.
• At Princeton University, for the GreenEyes study, we ran the ProjectServer in the cloud. Participant feedback was provided using PsychToolbox, running on a separate computer.
• In other studies that are currently underway at Yale University, the ProjectServer is run on a local compute cluster, and analysis results are sent via websockets to a PsychoPy script running on a separate computer that provides participant feedback.
• For testing and development, we often run everything on a local computer.
Current and future directions
We are planning to add several features to RT-Cloud in the coming years. One important component will be the ability to wrap an RT-Cloud experiment as a BIDS App. BIDS Apps are containerized (and thus self-contained) applications that operate on BIDS data and have a common set of invocation parameters. An experiment packaged as a BIDS App would contain all the libraries and configurations needed to run the experiment on any computer that can run a Docker or Singularity container. This will complement the BIDS data standardization added to the framework, allowing researchers to share both data and full execution environments in order to reproduce, validate and extend each other's work.
Using the BIDS App framework, we will pre-package a set of real-time experiments. These will be modeled on paradigms that have yielded good results in the past, and will be representative of a range of techniques. This will give new researchers a reference and quick starting point for building new experiments, and will also make it easy to deploy proven techniques to clinical settings.
We also plan to add support for running multiple analysis models in parallel within the ProjectServer. For example, researchers could deploy a fast analysis model to guarantee a real-time result alongside a slower but more accurate model. Another potential use case could be to run MR-based eyetracking ( Frey et al., 2021 ) alongside an analysis model (e.g., to get an online measure of whether participants are fixating on stimuli).
Conclusion
In summary, the RT-Cloud framework makes it possible to scale RT-fMRI to more computationally-intensive analyses, while also simplifying the deployment of these analyses, by making it possible to run them in situations where local computing hardware or computing expertise are lacking. There are several other packages for running RT-fMRI studies (e.g., Basilio et al., 2015 ;Cox, 1996 ;Goebel et al., 2006 ;Heunis et al., 2018 ;Hinds et al., 2011 ;Koush et al., 2017 ;Shibata, 2012 ; for a more complete list see Sulzer, 2021 ). However, ours is unique in its use of the cloud and SaaS. As described above, our cloud approach has several important benefits, relating to ease of installation and maintenance, reduced cost, ease of scaling, and accessibility from anywhere. Our use of open-source Python code makes the framework extensible by experts and allows for community-based development, and our integration with the BIDS standard facilitates pipeline sharing and data sharing. Taken together, we hope that these developments will expand the use of RT-fMRI to a much wider community. | 8,995 | 2022-01-26T00:00:00.000 | [
"Computer Science"
] |
Faint millimeter NIKA2 dusty star-forming galaxies: finding the high-redshift population
We develop a new framework to constrain the source redshift. The method jointly accounts for the detection/non-detection of spectral lines and the prior information from the photometric redshift and total infrared luminosity from spectral energy distribution analysis. The method uses the estimated total infrared luminosity to predict the line fluxes at given redshifts and generates model spectra. The redshift-dependent spectral models are then compared with the observed spectra to find the redshift. Results. We apply the aforementioned joint redshift analysis method to four high-z dusty star-forming galaxy candidates selected from the NIKA2 observations of the HLSJ091828.6+514223 (HLS) field, and further observed by NOEMA with blind spectral scans. These sources only have SPIRE/Herschel photometry as ancillary data. They were selected because of very faint or no SPIRE counterparts, so as to bias the sample towards the highest redshift candidates. The method yields spectroscopic redshifts for 4 of the 5 sources with detected NOEMA counterparts, all at z>3. Based on these measurements, we derive the CO/[CI] line and millimeter continuum fluxes from the NOEMA data and study their ISM and star-formation properties. We find cold dust temperatures in some of the HLS sources compared to the general population of sub-millimeter galaxies, which might be related to the bias introduced by the SPIRE-dropout selection. All but one of our sources have short gas depletion times of a few hundred Myr, which is typical among high-z sub-millimeter galaxies. The only exception shows a longer gas depletion time, up to a few Gyr, comparable to that of main-sequence galaxies at the same redshift. Furthermore, we identify a possible over-density of dusty star-forming galaxies at z=5.2, traced by two sources in our sample, as well as the lensed galaxy HLSJ091828.6+514223. (abridged)
Introduction
It is now clearly established that dusty star-forming galaxies (DSFGs) are critical players in the assembly of galaxy stellar mass and the evolution of massive galaxies at z < 3 (e.g. Madau & Dickinson 2014). At higher redshift, observing the dusty star formation and its spatial and redshift distribution undoubtedly requires (sub-)mm experiments and is still very challenging. For example, the limited existing estimates of the dust-obscured star formation rate density (SFRD) at z > 4 are still not consistently measured, as shown by the discrepancy between recent studies (e.g., Gruppioni et al. 2020; Dudzevičiūtė et al. 2020; Fudamoto et al. 2021; Zavala et al. 2021; Fujimoto et al. 2023). This is largely due to difficulties in uncovering a large unbiased sample of high-redshift DSFGs in relatively large cosmic volumes. Bright and faint DSFGs at high redshift have been uncovered by SPT (Reuter et al. 2020) and ALMA surveys (Franco et al. 2018; Zavala et al. 2021; Aravena et al. 2020).
However, statistical studies with these sources suffer from the fact that either strongly-lensed DSFG samples are not well statistically defined or covered areas are limited.
It is well known that in the (sub-)millimeter, larger-area and relatively deep surveys can efficiently find high-redshift DSFGs (Béthermin et al. 2015b), thanks to the negative k-correction (e.g., Casey et al. 2014) combined with the shape of the luminosity functions. Such large-area deep surveys are conducted with single-dish telescopes, as with the SCUBA2 instrument on the JCMT (Holland et al. 2013) or the NIKA2 instrument on the IRAM 30m (Perotto et al. 2020). The angular resolutions of such single-dish surveys are 13", 11.1" and 17.6", for SCUBA2 at 850 µm, and NIKA2 at 1.2 and 2 mm, respectively. This makes it difficult to unambiguously identify the multiwavelength counterparts of the DSFGs and to search for the high-redshift population. As already shown by the follow-ups of SCUBA2 sources with ALMA (e.g., Simpson et al. 2020), the combination of single-dish and interferometer surveys is by far the most efficient way of constraining the dusty star formation at 2 < z < 6. Indeed, the high resolution and sensitivity of (sub-)millimeter interferometers can provide accurate position measurements of DSFGs and thus the identification of their multiwavelength counterparts. However, getting photometric redshifts from the optical-IR is complicated by the lack of sufficiently deep homogeneous multi-wavelength data to analyze large samples. Moreover, DSFGs are subject to significant optical extinction (some of them are even optically dark, see Franco et al. 2018; Williams et al. 2019; Manning et al. 2022), which impacts the quality and reliability of photometric redshift estimates and prevents optical/near-infrared spectroscopic follow-up. Photometric redshifts from far-IR/mm to radio broad-band photometry have been used in studies of the cosmic evolution of high-z DSFGs since the discovery of DSFGs (Yun & Carilli 2002; Hughes et al. 2002; Negrello et al. 2010). However, these measurements are even more uncertain than optical-IR photometric redshifts, as the spectral energy distributions (SEDs) in the far-IR/mm do not show any spectral features (only a broad peak), and there are often only a few data points on the SEDs to constrain the model. In addition, there is a strong degeneracy between dust temperature and redshift in distant dusty galaxies, which limits the usefulness of simple photometric redshifts (e.g., Blain 1999). Finally, in the modelling of the FIR emission, optically thin or thick solutions are heavily degenerate. Indeed, the same SED could arise from either cold and optically thin or from warmer and optically thicker FIR dust emission, with no robust way to discriminate between the two using continuum observations (Cortzen et al. 2020). This often leads to an overestimate of FIR photometric redshifts because of an apparently colder dust temperature derived from optically-thin emission in high-redshift, starbursting DSFGs (Jin et al. 2019).
For such galaxies, spectral scans in the millimeter can be the only way of getting the spectroscopic redshift, as shown in e.g. Walter et al. (2012); Fudamoto et al. (2017); Strandet et al. (2017); Zavala et al. (2018). The success rate of measuring the redshift using millimeter spectral scans can be very high, being >70% (Weiß et al. 2013; Strandet et al. 2016) and even up to >90% (Neri et al. 2020). Such a success rate is obtained on large samples in a reasonable amount of telescope time, but only for bright DSFGs. For example, with a total time of 22.8 hours on 13 DSFGs with average 850 µm fluxes of 32 mJy, Neri et al. (2020) measured the redshift of 12/13 sources with NOEMA. Weiß et al. (2013) obtained a ∼90% detection rate for sources with S_1.4mm > 20 mJy. Obviously, for much fainter objects, obtaining redshifts may become much more difficult (e.g. Jin et al. 2019).
We are currently conducting a deep survey with NIKA2, the NIKA2 Cosmological Legacy Survey (N2CLS), a guaranteed time observation (GTO) large program searching for a large sample of high-z DSFGs (Bing et al. 2022, 2023). The observations cover two fields, GOODS-N and COSMOS, and most of the detected DSFGs are sub-mJy sources at 1.2 mm. One of the goals of N2CLS is to put new solid constraints on the obscured SFRD at z > 4. To reach that goal, we first need to obtain the redshifts of the N2CLS sources. While deep optical-IR data are available in the two fields and have been extensively used to obtain photometric redshifts, a large fraction of the sources currently lack a secure redshift. Given the wealth of ancillary data already available on these two fields, blind millimeter spectral scans are the only solution to measure their spectroscopic redshifts. As a pilot program to try to identify the high-redshift population, we selected 4 high-redshift candidates detected at 1.2 and 2 mm by NIKA2. They have been selected from their far-IR/mm SED photometric redshifts in the HLSJ091828.6+514223 field observed with NIKA2 during the Science Verification. This paper presents the redshift identification and source properties based on the spectral scans obtained with NOEMA on these sources. It is organised as follows. In Sect. 2, we present the sample and NIKA2 observations. Section 3 describes the NOEMA observations and data reduction, as well as the extraction of continuum fluxes and spectral scans. In Sect. 4, we extensively discuss the redshifts. In particular, we develop a new method that combines both far-IR to millimeter photometric data and spectral scans to measure the redshift. Source properties, such as their dust mass and temperature and the kinematics and excitation of their molecular gas, are given in Sect. 5. Section 6 presents the potential discovery of a DSFG over-density at z=5.2 in the HLS field. Conclusions on the main results and the possible implications of our findings for future high-z DSFG studies are given in Sect. 7. Finally, three appendices give more details on the method of redshift measurement and its validation. Throughout the paper, we adopt the standard flat ΛCDM model as our fiducial cosmology, with cosmological parameters H_0 = 67.7 km/s/Mpc, Ω_m = 0.31 and Ω_Λ = 0.69, as given by Planck Collaboration et al. (2020).
NIKA2 field around HLSJ091828.6+514223
As part of the NIKA2 Science Verification that took place in February 2017, we observed an area of 185 arcmin², centered on HLSJ091828.6+514223, a lensed dusty galaxy at z=5.24 (Combes et al. 2012), for an on-source time of about 3.5 hours at the center. This allowed us to reach 1σ sensitivities of about 0.3 mJy at 1.2 mm and 0.1 mJy at 2 mm on HLSJ091828.6+514223. This galaxy is close to the z=0.22 cluster Abell 773, but is likely lensed by a galaxy at z∼0.63. For our NIKA2 sources, the magnification by the galaxy cluster is <10%. Therefore, we do not expect the NIKA2 sources to be highly magnified (E. Jullo, private communication).
The NIKA2 field overlaps almost entirely with Herschel SPIRE observations at 250, 350, and 500 µm. On the contrary, the PACS, IRAC and HST images cover only a very small part of the field on the west side (where the NIKA2 observations have lower signal-to-noise ratios). Thus only SPIRE data were used to select high-z candidates. The SPIRE fluxes were measured using FASTPHOT (Béthermin et al. 2010) through simultaneous PSF fitting, using NIKA2 source positions as priors on the SPIRE maps.
We built a 1.2 and 2 mm catalog using the NIKA2 data reduced with the collaboration pipeline (Ponthieu et al., in prep). A total of 27 sources are detected with S/N>5 in at least one band (1.2 or 2 mm). From this catalog, we selected four sources detected at both 1.2 and 2 mm with high signal-to-noise ratios (between 5.7 and 9.7) and for which there is a faint (at the level of the confusion noise) or no SPIRE counterpart, so as to bias the sample towards the highest redshift candidates. Indeed, rough sub-millimeter photometric redshifts, obtained by fitting empirical IR SED templates from Béthermin et al. (2015a) to our SPIRE+NIKA2 data, were z_phot ∼ 5-7. These sources are named HLS-2, HLS-3, HLS-4, and HLS-22. Their fluxes are between 1.7 and 2.9 mJy at 1.2 mm and 0.28 and 0.60 mJy at 2 mm. The flux measurements and uncertainties are presented
NOEMA observations and data calibration
Follow-up observations were made using NOEMA from 2018 to 2020, within 4 different programs. The 4 sources in the HLS field were all observed by NOEMA with the PolyFiX correlator. They were initially targeted by projects W17EL (HLS-2/3/4) and W17FA (HLS-22) using the same setups, which continuously cover the spectrum from 71 GHz to 102 GHz with the D configuration in band1. HLS-22 was further observed in project W18FA with the A configuration in band1, and HLS-2/3 were further observed in project S20CL with the D/C configuration in band2. The total on-source time of all of the proposals is 44.9 hours. The details of the observations of each source are summarized in Table 2.
The NOEMA observations are first calibrated using CLIC and imaged with MAPPING under GILDAS. Radio sources 3C454.3, 0716+714, 1156+295, 1055+018, 0851+202 and 0355+508 are used for bandpass calibration during these observations, and the source fluxes are calibrated using LHKA+101 and MWC349. With the calibrated data, we further generate the uv tables at the original resolution of 2 MHz. We also produce the continuum uv table of each source by directly compressing all corresponding lower sideband (LSB) and upper sideband (USB) data with the uv_compress function in MAPPING.
NOEMA continuum flux measurement and source identification
We identify the counterparts of our sample in the NOEMA continuum data. We first generate the continuum dirty map and then clean the continuum image of each source with the Clark algorithm within MAPPING. The cleaned image of each source with the highest SNR and/or the best spatial resolution is shown in Fig. 1. We blindly search for candidate sources by identifying all of the peaks above 4×RMS within the NOEMA primary beam. Their accurate positions are then derived with the uv_fit function in MAPPING (with the peak positions as the initial prior and a point source as the model), and the continuum fluxes at the other frequencies are estimated with source models fixed to these reference images.
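As an illustration of the blind search step (not the GILDAS/MAPPING implementation used here), the following sketch finds all local maxima above 4×RMS within a primary-beam mask of a continuum image; the positions found this way would then be refined with uv_fit.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_candidate_peaks(image, primary_beam_mask, snr_threshold=4.0):
    """Return pixel coordinates of local maxima above snr_threshold * RMS."""
    rms = np.std(image[primary_beam_mask])          # noise estimate inside the primary beam
    is_local_max = image == maximum_filter(image, size=5)
    candidates = is_local_max & primary_beam_mask & (image > snr_threshold * rms)
    return np.argwhere(candidates)

# Toy example: one bright source injected into pure noise
rng = np.random.default_rng(1)
img = rng.normal(scale=1.0, size=(128, 128))
img[60, 70] += 10.0
mask = np.ones_like(img, dtype=bool)
print(find_candidate_peaks(img, mask))              # -> [[60 70]] (plus rare noise peaks)
```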
The continuum fluxes are measured using uv_fit and the same models as given in Table 3. The 4 sidebands in the 2 setups of W17EL and W17FA are combined together to generate continuum uv tables centered on 3.6 mm, given the low SNR of the continuum emission at such a long wavelength. When data are available, the continuum fluxes at higher frequencies are measured both sideband by sideband and on the combined LSB+USB uv table. The continuum fluxes are listed in Table 4.
We detect 5 reliable continuum sources within the primary beam of the NOEMA observations as counterparts of the 4 NIKA2 HLS sources. We further checked the residual RMS on the maps with source models more complex than point sources. This does not improve the level of the residuals for three of the NOEMA sources. For the remaining two sources, HLS-2-1 and HLS-3, the favored simplest models are a circular Gaussian and an elliptical Gaussian, respectively. The position and preferred model of each source are listed in Table 3, and we note that the positions of these sources do not change significantly depending on the model.
We show in Fig. 1 the cleaned images of the NOEMA observations. The NIKA2 source HLS-2 is resolved into two continuum sources in our high-resolution NOEMA observation with SNR∼10. The remaining NIKA2 sources are each associated with a single NOEMA source. For these sources (HLS-3, HLS-4 and HLS-22), we compare their positions in the NOEMA and NIKA2 observations. The maximum offset is found for HLS-3, with a value of 1.9 arcsec. The average offset is 0.9 arcsec among these three sources, which suggests a high positional accuracy of NIKA/NIKA2 for locating sources with relatively high SNR.
For HLS-2 and HLS-3, part of our NOEMA observations measure their continuum fluxes at a frequency close to the representative frequency of the NIKA2 2 mm band. The NOEMA and NIKA2 fluxes are consistent for HLS-3. The total NOEMA flux of the two components of HLS-2 is 50% higher than that measured by NIKA2, while still being consistent within the 3σ uncertainties. This first comparison is encouraging. A detailed study of NIKA2 and NOEMA fluxes is beyond the scope of this paper and will be conducted with more statistics (e.g. with the NOEMA follow-up of N2CLS sources).
Extraction of NOEMA millimeter spectra
We extract the millimeter spectra of the NOEMA continuum sources from the full uv tables. The uv tables are first compressed with the uv_compress function in MAPPING, which averages over several channels to enhance the efficiency of the line search, giving a higher SNR per channel and a smaller data load. For observations in band1 we set the number of channels to average to 15, while the observations in band2 and band3 are averaged every 25 channels, which corresponds to channel widths of 107 km/s, 100 km/s and 59 km/s at 84 GHz, 150 GHz and 255 GHz, respectively. Given the typical line widths (a few hundred to one thousand km/s) of sub-millimeter galaxies (Spilker et al. 2014), the compression of the uv tables still ensures Nyquist sampling by 2-3 channels across the emission line profiles and preserves the accuracy of the line center and redshift measurements.
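The quoted channel widths follow directly from the native 2 MHz resolution and the averaging factors, using Δv = c Δν/ν:

$$
\frac{15 \times 2\ \mathrm{MHz}}{84\ \mathrm{GHz}}\,c \simeq 107\ \mathrm{km\,s^{-1}},\qquad
\frac{25 \times 2\ \mathrm{MHz}}{150\ \mathrm{GHz}}\,c \simeq 100\ \mathrm{km\,s^{-1}},\qquad
\frac{25 \times 2\ \mathrm{MHz}}{255\ \mathrm{GHz}}\,c \simeq 59\ \mathrm{km\,s^{-1}}.
$$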
To extract the spectra, we perform uv_fit on the compressed spectral uv tables with the position and source model fixed to those given in Table 3. For the observations of the W17EL002 setup, we flagged the visibilities associated with one antenna significantly deviating from the others. Given the relatively low angular resolution of most of our data on the HLS sources (∼5" in band1 and ∼2" in band2), these galaxies are unlikely to be significantly resolved, thus uv_fit at fixed position on the uv tables should recover the majority of their line emission.
We further remove the continuum from the extracted spectra, assuming a fixed spectral index of 4. In the Rayleigh-Jeans regime, where the continuum scales as S_ν ∝ ν^(2+β), this is equivalent to a modified black-body spectrum with a fixed emissivity index β of 2, and it is generally consistent with the dust emissivity we derive in Sect. 5.2. We use these continuum-subtracted NOEMA spectra for the redshift search and the emission line flux measurements (see Sect. 4.2 and Sect. 5.1). The extracted spectra and the continuum models to be removed are shown in Fig. 2.
Source redshift from photometric-spectroscopic joint analysis
An accurate redshift is a prerequisite for an accurate estimate of the physical properties of high-z galaxies. However, the optical-IR SEDs of high-z DSFGs are often much more poorly constrained than those of other high-z galaxies, due to their faintness at these wavelengths, which poses challenges to the accurate measurement of their photometric redshifts. In the far-IR, the degeneracy between colors, dust temperature and redshift can also lead to a highly model-dependent estimate of the photometric redshift. In this section, we describe the different methods and summarize the results of the redshift estimates for our sample, using both the photometric and spectroscopic data described in Sect. 3. Specifically, we introduce a new joint analysis framework to determine the redshifts of NIKA2 sources, combining the probability distribution function of photometric redshifts with the corresponding IR luminosities and blind spectral scans, which helps us identify low-SNR spectral lines in the NOEMA spectra.
Notes. (*) Data sets and fluxes derived with free parameters on source position and shape in uv_fit. The fluxes at the other frequencies of a given source are fitted with positions and shapes fixed to those of the marked data set, as given in Table 3.
Fig. 2: Millimeter spectra of all HLS sources (HLS-2-1, HLS-2-2, HLS-3, HLS-22 and HLS-4) extracted from the uv tables obtained from the NOEMA observations. The continuum models to be subtracted are shown as grey solid lines. The lines used to determine the spectroscopic redshift of the sources (see Sect. 4 and Fig. 4 for details) are marked by dashed black vertical lines.
Photometric redshifts
The lack of deep optical and infrared data in the HLS field makes it impossible to conduct a full SED modeling of the NIKA2-detected sources. However, with the NIKA2 and SPIRE photometry, we fit the far-IR SEDs of the HLS sources with dust emission templates to estimate their redshifts and IR luminosities. Given the poor angular resolution of the FIR data, we are not able to obtain the fluxes of each individual component resolved by the NOEMA observations of HLS-2. We thus fit only the integrated fluxes, under the assumption that the two components blended within the beam of SPIRE and NIKA2 are located at the same redshift.
The B15 (Béthermin et al. 2015a) templates can be described as a series of empirical dust SEDs of galaxies at different redshifts. The dust SEDs are built on deep observational data from the infrared to the millimeter. They consider two populations of star-forming galaxies, starburst and main-sequence galaxies, and provide two corresponding sets of empirical SED templates. We fit our photometric data points with the templates of main-sequence galaxies, which consist of 13 SEDs at each redshift. These templates include the average SED and the SEDs within ±3σ uncertainties in steps of 0.5σ. The estimated redshifts, as well as the 1σ uncertainties, from the fits using the B15 main-sequence and starburst SED templates are listed in Table 5. In the following redshift search involving B15 templates (see Sect. 4.2 and Sect. 4.3), we will only use and present the output of the SED fitting based on the main-sequence templates. This is mainly because the results of the SED fitting based on the starburst and main-sequence templates, as shown in Table 5, are highly consistent within the uncertainties. Casey (2012) describes the intrinsic FIR dust emission of galaxies using a generalized modified black-body model in the far-IR plus a power-law component in the mid-IR. For the SED fitting with the Casey (2012) template, we work within the framework of the MMPZ algorithm (Casey 2020). It considers the intrinsic variation of the dust SED with IR luminosity, as well as the impact of the rising CMB temperature at high redshift. The default set of IR SEDs fixes the mid-infrared spectral slope to 3 and the dust emissivity β to 1.8. The template SED also accounts for the transition from optically thin to optically thick at shorter wavelengths, where the wavelength of unity opacity (τ(λ)=1) is fixed to 200 µm. The redshift, the total infrared luminosity and the wavelength of the IR SED peak are the main parameters of the fit. The empirical correlation between the latter two parameters is also taken into account during the fit.
From the analysis and results shown in Fig. 3, we find that the redshifts from MMPZ are systematically lower than those from the B15 template fitting, with a typical ∆z/(1+z) of around 20%. However, the two redshifts are still consistent within their uncertainties. The infrared luminosities returned by MMPZ are also systematically lower by ∼0.3 dex, especially at redshifts beyond 3.
The faintness and large flux uncertainties of our sources in the three SPIRE bands make the constraint on the peak of their IR SEDs much worse than for brighter/lensed high-z sources, which leads to large uncertainties on the estimated total IR luminosity. Compared to the template fitting with B15, MMPZ further takes the CMB heating and dimming (da Cunha et al. 2013) into account. Although this could affect the dust emissivity index β and, through the β-T degeneracy, the dust temperature and IR luminosity, the β values are all fixed to 2 in these two templates. Thus, we consider that the inclusion of the CMB effect is not the major contributor to the differences between the results of the two template fitting methods.
The difference in the estimated total IR luminosities propagates into the joint photometric and spectroscopic analysis of the source redshifts in Sect. 4.2.
Joint analysis of photometric redshifts and NOEMA spectra
Due to the lack of characteristic spectral features in the far-IR, the photometric redshifts of our sample derived in Sect. 4.1 still have large uncertainties. Searching for emission lines in the millimeter spectra provides a way to constrain the redshifts with significantly better accuracy. To identify possible emission lines in the spectra, we performed a blind search in the NOEMA spectra. The NOEMA spectra are first convolved with a box kernel of 500 km/s width, which corresponds to the typical molecular line width of bright (sub)millimeter-selected galaxies (e.g., Bothwell et al. 2013). To uncover the possible emission lines in these noisy spectra as completely as possible, we list in Table 6 up to five lines (if they exist) with the highest S/N in the convolved spectra, requiring S/N > 3. We do not detect any lines with S/N > 3 for HLS-2 and HLS-4 in these observations. For HLS-3 and HLS-22, we identify one and two detections, respectively. The "detection" at 100.628 GHz in the HLS-22 spectrum is likely to be a glitch or a noise spike with a wrongly estimated uncertainty (see Appendix A and Fig. A.2). With only one significant emission line detection, it is not possible to obtain an unambiguous redshift solution.
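A minimal sketch of such a blind search is shown below: smooth the continuum-subtracted spectrum with a ∼500 km/s box kernel and keep the local maxima with S/N > 3. The synthetic spectrum and all array names are placeholders for illustration only, not the paper's data products.

```python
# Illustrative sketch of the blind line search: convolve the spectrum with a
# ~500 km/s box kernel and keep S/N > 3 peaks. Synthetic data; array names are
# placeholders, not the paper's actual data products.
import numpy as np
from scipy.signal import find_peaks

C_KMS = 2.998e5
freq = np.arange(71.0, 102.0, 0.03)                 # GHz, synthetic frequency axis
rng = np.random.default_rng(0)
noise_rms = 0.3                                     # mJy per channel (arbitrary)
flux = rng.normal(0.0, noise_rms, freq.size)
flux += 1.0 * np.exp(-0.5 * ((freq - 90.0) / 0.08) ** 2)   # one fake emission line

# Box kernel of ~500 km/s width at the central frequency of the scan
dv_chan = C_KMS * 0.03 / np.median(freq)            # km/s per channel
n_box = max(1, int(round(500.0 / dv_chan)))
kernel = np.ones(n_box) / n_box
smoothed = np.convolve(flux, kernel, mode="same")
# Noise in the smoothed spectrum scales roughly as 1/sqrt(n_box)
snr = smoothed / (noise_rms / np.sqrt(n_box))

peaks, _ = find_peaks(snr, height=3.0)              # S/N > 3 candidates
for i in peaks:
    print(f"candidate line at {freq[i]:.3f} GHz, S/N = {snr[i]:.1f}")
```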
To find the redshift solutions, we need additional constraints from the broad-band photometry, in particular the total infrared luminosity at any sampled redshift in the SED fitting. From the output χ² and IR luminosities of all models at a given redshift, we derive the weighted average of the total infrared luminosity of the source at this redshift using Eq. (1), where σ(j) is a weighting term that accounts for the deviation of the 13 model SEDs from the median SED of star-forming galaxies at a given redshift in B15. Indeed, at a given redshift, the B15 template set includes one median SED and 12 SEDs within ±3σ uncertainties with a spacing of 0.5σ. So when deriving the source IR luminosities at given redshifts, the σ(j) terms should be included to account for the probability of the IR template SEDs deviating from the median of the B15 model. The values of σ(j) thus range between -3 and +3 in steps of 0.5. When using the output from MMPZ, σ(j) is set to 0.
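Eq. (1) is not reproduced in this extraction. A plausible reading, consistent with the description above, is a weighted mean over the 13 B15 templates with weights exp(-χ²/2)·exp(-σ(j)²/2); the sketch below implements that assumed form, and the paper's exact weighting may differ.

```python
# Sketch of the assumed Eq. (1): chi^2- and sigma(j)-weighted average of the
# template IR luminosities at one redshift. The Gaussian weighting in sigma(j)
# is an assumption consistent with the text, not the paper's verbatim formula.
import numpy as np

def weighted_lir(lir_templates, chi2_templates, sigma_j):
    """
    lir_templates : IR luminosities of the 13 B15 SEDs at redshift z
    chi2_templates: chi^2 of each template against the photometry
    sigma_j       : template offsets from the median SED, -3..+3 in steps of 0.5
                    (set to zeros when using the single MMPZ output)
    """
    w = np.exp(-0.5 * np.asarray(chi2_templates)) * np.exp(-0.5 * np.asarray(sigma_j) ** 2)
    return np.sum(w * np.asarray(lir_templates)) / np.sum(w)

sigma_j = np.arange(-3.0, 3.0 + 0.5, 0.5)           # 13 values
lir = 1e12 * (1.0 + 0.2 * sigma_j)                  # toy template luminosities [L_sun]
chi2 = 5.0 + sigma_j ** 2                           # toy goodness-of-fit values
print(f"<L_IR>(z) ~ {weighted_lir(lir, chi2, sigma_j):.2e} L_sun")
```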
With the average IR luminosities over the redshift grid from the SED fitting, we linearly interpolate the IR luminosity at any given redshift. We further use this IR luminosity to constrain the fluxes of the strong FIR-millimeter emission lines at any given redshift, based on the well-defined, almost redshift-invariant L FIR -L line relations in the form of Eq. (2). The luminosities and fluxes of the 12 CO lines from J(1-0) to J(12-11), the two transitions of [CI], and the [CII] line at 158 µm are predicted based on various scaling relations found in the literature. The detailed information is listed in Table 7 and the references therein. With the estimated fluxes of the different line species at a given redshift, we generate a model spectrum in the frequency range of the NOEMA spectral scans and compare this model with the observations. When generating the model spectra, we assume that the emission lines have Gaussian profiles with a fixed full width at half maximum (FWHM) of 500 km/s. We also linearly interpolate the L FIR,med -z relations from the IR template fitting onto a finer redshift grid to avoid missing any possible redshift solutions.
Fig. 3: Results of the IR template fitting of our 4 HLS sources with the B15 dust templates and the MMPZ method, using the SPIRE, NIKA2 and NOEMA photometry. The plots in the first column show the probability density distribution (normalized to the peak value) of each source. The second column shows the evolution of the weighted average infrared luminosity with redshift. The third column shows the best-fit SED models together with the observations. Sources from top to bottom are HLS-2, HLS-3, HLS-4 and HLS-22.
The spacing between adjacent redshifts in the resampled grid satisfies Eq. (3), which is equivalent to a fixed velocity spacing (∆v) between adjacent redshifts. We fix ∆v to 1/3 of the chosen FWHM, so that the emission line profile is Nyquist-sampled by the predicted line centers at the corresponding redshifts in the new grid. This ensures that emission lines in the spectra and their corresponding redshift solutions will not be missed in our analysis due to a too coarse redshift sampling. The goodness of the model prediction at a given redshift is evaluated by the log-likelihood ln(L spec (z)), computed from the χ² between the model spectra and the data, as given in Eq. (4) and Eq. (5). In addition to the goodness of match between spectra and models, we further account for the goodness of the SED fitting at a given redshift, χ²_SED(z), which is defined similarly to Eq. (5). The joint log-likelihood at each sampled redshift is given by Eq. (6). As already pointed out, we assume that the two counterparts of HLS-2 have a similar redshift and share the same FIR SED. Under this assumption, the total infrared luminosity of each of the two NOEMA sources is computed from its contribution to the total flux at 2 mm, and these luminosities are later used in deriving their final joint likelihoods of redshift.
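The display forms of Eqs. (2)-(6) are not reproduced in this extraction. The sketch below gives one plausible reading of the procedure described above: a redshift grid with fixed velocity spacing ∆v = FWHM/3, Gaussian model lines whose integrated fluxes are assumed to come from the interpolated L_IR and the L_FIR-L_line relations, and a Gaussian log-likelihood ln L = -χ²/2 for both the spectral and SED terms. Function names and the likelihood normalization are assumptions, not the paper's verbatim implementation.

```python
# Plausible sketch of the joint redshift scan (Eqs. 2-6 are paraphrased, not
# copied): fixed-velocity redshift grid, Gaussian model lines, and
# ln L_joint(z) = -(chi2_spec(z) + chi2_SED(z)) / 2.
import numpy as np

C_KMS = 2.998e5
FWHM = 500.0                       # assumed line FWHM [km/s]
DV = FWHM / 3.0                    # grid spacing in velocity [km/s]

def redshift_grid(z_min, z_max):
    """Grid with Delta_z = (1 + z) * DV / c between adjacent points (Eq. 3)."""
    grid = [z_min]
    while grid[-1] < z_max:
        grid.append(grid[-1] + (1.0 + grid[-1]) * DV / C_KMS)
    return np.array(grid)

def model_spectrum(freq_ghz, z, lines_rest_ghz, line_fluxes_jykms):
    """Sum of Gaussian lines (fixed FWHM) predicted at redshift z, in Jy."""
    freq_ghz = np.asarray(freq_ghz, dtype=float)
    model = np.zeros_like(freq_ghz)
    for nu_rest, s_int in zip(lines_rest_ghz, line_fluxes_jykms):
        nu_obs = nu_rest / (1.0 + z)
        sigma = nu_obs * (FWHM / 2.355) / C_KMS              # line sigma [GHz]
        peak = s_int / (np.sqrt(2.0 * np.pi) * sigma * C_KMS / nu_obs)
        model += peak * np.exp(-0.5 * ((freq_ghz - nu_obs) / sigma) ** 2)
    return model

def ln_like_joint(freq, flux, err, z, lines_rest, line_fluxes, chi2_sed_of_z):
    """Assumed joint log-likelihood at redshift z (spectral + SED chi^2 terms)."""
    chi2_spec = np.sum(((flux - model_spectrum(freq, z, lines_rest, line_fluxes)) / err) ** 2)
    return -0.5 * (chi2_spec + chi2_sed_of_z(z))
```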
Redshift measurements
The results of the joint log-likelihood of redshift from the photometry+spectral-scan analysis of the five HLS sources are shown in Fig. 4. For each source, we normalize L spec (z) to its peak value, which allows us to compare quantitatively the relative goodness of match between the model predictions and the observed spectra at different redshifts. We select all peaks in ln(L spec (z)) with an amplitude larger than -10 and a width larger than 3 samples in the redshift grid as possible redshift solutions of our sources, using the "find_peaks" algorithm in SciPy. Considering the large uncertainties on the total infrared luminosity of the HLS sources, we further cross-validate their possible redshift solutions by repeating the joint likelihood analysis using the output IR luminosity at different redshifts from the MMPZ fitting, and apply the same algorithm to record the possible redshift solutions. Compared to the log-likelihood of redshift with photometric constraints only (see Fig. 3), the joint analysis highlights significant isolated peaks in the redshift likelihoods of HLS-2-1, HLS-2-2, HLS-3 and HLS-22 at z=5.241, 5.128, 3.123 and 3.036, respectively. The redshift with the maximum log-likelihood value is also not sensitive to the choice of IR templates. As shown in Fig. 4, the B15 template and MMPZ find almost the same redshift at which the joint log-likelihood reaches its maximum, which further supports the redshift solutions listed above. For HLS-4, despite having the most accurate photometric redshift from the template fitting, the absence of an emission line detection in the band1 spectral-scan observations prevents our analysis from identifying significant peaks in the joint log-likelihood. Thus no reliable redshift solution is found for this source.
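The peak selection can be sketched as follows; the thresholds (normalized amplitude > -10, width > 3 grid samples) are those quoted above, while the function and array names are placeholders.

```python
# Sketch of the redshift-solution selection with scipy.signal.find_peaks.
# ln_like is assumed to be the joint log-likelihood normalized to its peak
# (so its maximum is 0), evaluated on the fine redshift grid z_grid.
import numpy as np
from scipy.signal import find_peaks

def redshift_solutions(z_grid, ln_like):
    peaks, props = find_peaks(ln_like, height=-10.0, width=3)
    order = np.argsort(ln_like[peaks])[::-1]        # sort by decreasing likelihood
    return z_grid[peaks][order], ln_like[peaks][order]
```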
For HLS-3 and HLS-22, with blindly detected candidate emission lines at S/N>3, our method successfully confirms the candidate lines, except for the one at 100 GHz in the HLS-22 spectrum. The extremely narrow profile of this candidate detection suggests that it is likely to be a glitch, as shown and discussed in Appendix A. For HLS-3, MMPZ also reveals a secondary redshift solution at z=2.299 with a slightly lower log-likelihood. This solution assigns the most significantly detected emission line, at 139.746 GHz, to CO(4-3), while the best solution at z=3.123 assigns it to CO(5-4). If HLS-3 is at z=3.123, we also expect the CO(3-2) line to be covered by the spectral scan. Although this line is not detected at 3σ (see Sect. 5.1), we cannot simply reject either of the two possible redshift solutions because of the high noise level around the expected observed frequency. Thus, we consider the redshift of HLS-3 to be less secure than those of the other sources, which have at least two lines with S/N>4 (HLS-2-1, HLS-2-2 and HLS-22, see Table 9), and we will provide the estimates of the HLS-3 properties for both redshift solutions in this paper. We also note that HLS-2-2 could have a secondary redshift solution at z=3.385. However, this redshift cannot account for the two most significant emission lines found in the spectrum, which correspond to CO(4-3) and [CI](2-1) at z=5.128. Thus, we only adopt the z=5.128 solution in the following analysis.
In Table 8, we summarize the redshifts from the joint analysis method (z_joint), as well as the far-IR photometric redshifts based on the two far-IR templates (z_B15,ms and z_MMPZ). For sources with ambiguous redshift solutions, we use the two different z_fix values in the analysis. The uncertainties of z_joint are conservative and correspond to 0.5×FWHM of the line (see Table 9). In the following sections, the source properties are estimated at the best redshift solution of the joint analysis, or at the most probable photometric redshift when no solution is found in the joint analysis. These adopted redshifts are listed as z_fix in Table 8.
Fig. 4: The first row shows the normalized log-likelihood from the SED fitting and the joint log-likelihood for each source after including the information obtained from the NOEMA spectral scans. The second and third rows show cutouts of the spectra around the candidate spectral lines at the best redshift solutions. The lines shown in the second row are detected in the earliest band1 spectral scans (W17EL and W17FA) and those in the third row are detected in the additional follow-up observations (W18FA and S20CL). The models generated from the fit with the B15 dust templates and MMPZ at the most probable redshift are plotted as solid and dashed red lines, respectively. We emphasize again that these models do not come from a parametric fit but are generated using an estimate of the total infrared luminosity and the L_CO (L_[CI])-L_IR scaling relations. Given the line profile, the spectra of HLS-3 will be analysed using a double-Gaussian model. The other sources will be analysed using a single-Gaussian model.
Robustness and self-consistency of the joint analysis method
The analysis of the source properties of our sample, such as dust mass, temperature and star formation rate, relies largely on the redshifts estimated with the joint analysis method, which is subject to assumptions on the line widths and line luminosities.
To test the robustness of the redshifts derived from the joint-analysis method, we run tests with model spectra of varying line widths, with NOEMA data of more limited spectral coverage, and with different far-IR templates for deriving the photometric redshift and predicting the IR luminosity. These tests show that the redshifts of our sources from the joint analysis are reliable. In addition, we check the self-consistency of our redshift solutions by comparing our L FIR -L CO measurements with the scaling relations and their scatter. These tests and discussions are presented in Appendices A, B, C and D.
Kinematics and excitation of molecular gas of HLS sources
We find redshift solutions for 4 NOEMA sources associated with 3 NIKA2 sources. Each of these sources has at least one emission line detected with SNR>4 or two lines with SNR of ∼3-4 in the NOEMA spectral scans. With these redshifts, we further measure the fluxes and line widths of the spectral lines covered by the observations, and derive the corresponding line luminosities. We start the fitting with a single Gaussian model on the continuum-subtracted spectra of each source. All CO/[CI] lines falling into the frequency coverage of NOEMA are considered, regardless of whether they are detected with high significance. To make the analysis of the line width more robust, we also force the kinematics of the CO and [CI] lines to be the same during the fit. For the emission line at 139.746 GHz of HLS-3, a double Gaussian model results in an Akaike information criterion (AIC) of -17.1, compared to an AIC of -11.3 for a single Gaussian model. This indicates an improved quality of the fit with the double-Gaussian model, and we thus adopt it for HLS-3. The line widths, fluxes and upper limits of the CO/[CI] lines of each source are listed in Table 9. We measure the total flux or the flux upper limit of each line by integrating the spectra within ±3σ_line around the best-fit line center for the single-peaked lines. For HLS-3, with a double-peaked line profile (denoted as the red and blue peaks), we estimate the line fluxes and upper limits by integrating the spectra in the range [f_center,red - 3σ_red, f_center,blue + 3σ_blue]. The corresponding CO/[CI] line luminosities or 3σ upper limits (L'_line and L_line) are calculated following Solomon et al. (1997), L'_line = 3.25×10^7 S_line∆V ν_obs^-2 D_L^2 (1+z)^-3 [K km s^-1 pc^2] and L_line = 1.04×10^-3 S_line∆V ν_rest (1+z)^-1 D_L^2 [L_⊙], where S_line∆V is the velocity-integrated flux in Jy km s^-1, ν_rest = ν_obs(1+z) is the rest frequency in GHz, and D_L is the luminosity distance in Mpc.
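For reference, the standard Solomon et al. (1997) conversions quoted above can be implemented as below. The luminosity distance is computed with astropy's Planck18 cosmology, which is an assumption, since the paper's adopted cosmological parameters are not restated here.

```python
# Standard Solomon et al. (1997) line luminosity conversions:
#   L'_line [K km/s pc^2] = 3.25e7 * S_dV * nu_obs^-2 * D_L^2 * (1+z)^-3
#   L_line  [L_sun]       = 1.04e-3 * S_dV * nu_rest * (1+z)^-1 * D_L^2
# with S_dV in Jy km/s, frequencies in GHz and D_L in Mpc.
from astropy.cosmology import Planck18   # assumed cosmology, not stated in the paper

def line_luminosities(s_dv_jykms, nu_rest_ghz, z):
    d_l = Planck18.luminosity_distance(z).value      # Mpc
    nu_obs = nu_rest_ghz / (1.0 + z)
    l_prime = 3.25e7 * s_dv_jykms * nu_obs ** -2 * d_l ** 2 * (1.0 + z) ** -3
    l_line = 1.04e-3 * s_dv_jykms * nu_rest_ghz * d_l ** 2 / (1.0 + z)
    return l_prime, l_line                           # [K km/s pc^2], [L_sun]

# Example: a 1 Jy km/s CO(5-4) line (rest frequency 576.268 GHz) at z = 3.123
print(line_luminosities(1.0, 576.268, 3.123))
```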
The [CI](1-0) lines of HLS-2-1 and HLS-2-2 are covered by the spectral scan but are located at the noisiest edges of the NOEMA sidebands. This makes their upper limits of little scientific value and thus we discard them from the table. The CO(7-6) line of HLS-2-2 is marginally detected but only partly covered by our observations. For this line, we use the output parameters from the spectral line fitting to constrain its flux using a complete Gaussian profile.
Figure 5 shows the best-fit models for each CO/[CI] line. The line identifications in each panel assume the best redshift solution of each source, as listed in Table 8. The spectra have the same channel width as those used for the joint analysis. From the best-fit parameters we find that the line widths are generally consistent with the assumption made during the redshift search in Sect. 4.2, with an average FWHM of 500 km/s. Although previous observations reveal that the integrated [CI] and CO lines from the same high-z galaxies may have different line widths and line profiles (Banerji et al. 2018), fixing or relaxing the velocities and widths of the different lines during our analysis does not significantly change the quality of the best-fit model.
The observations of the two sources associated with HLS-2 cover both mid-J (CO(4-3) or CO(5-4)) and high-J CO lines (CO(7-6) or CO(8-7)), allowing us to roughly estimate the conditions of their molecular gas and compare them with other DSFGs at similar redshifts using luminosity ratios (expressed in K km/s pc^2). For HLS-2-1, L'_CO(8-7)/L'_CO(5-4) is 0.31±0.11, which is consistent with the values found in typical high-z SMGs with low excitation (Bothwell et al. 2013) but lower than the values reported for some starburst galaxies and luminous quasars at similar redshifts (Rawle et al. 2014; Li et al. 2020). The L'_CO(7-6)/L'_CO(4-3) and L'_CO(8-7)/L'_CO(4-3) ratios of HLS-2-2 are <0.17 and <0.12, respectively. These values are even lower than the typical values of high-z SMGs but still consistent with the low-excitation ISM found in the "Cosmic Eyelash" (Danielson et al. 2011).
For the remaining two sources, with CO detections of less separated rotational quantum numbers J, our observations give L'_CO(5-4)/L'_CO(3-2) > 0.73 for HLS-3 and L'_CO(4-3)/L'_CO(3-2) = 0.78±0.22 for HLS-22. The value for HLS-22 is generally consistent with the average CO SLED of high-z SMGs in Bothwell et al. (2013), similar to the case of HLS-2-1. On the contrary, HLS-3 has a L'_CO(5-4)/L'_CO(3-2) ratio higher than typical SMGs in Bothwell et al. (2013) and resembles the average of the SPT sample (Spilker et al. 2014) or the local starburst galaxy M82 (Carilli & Walter 2013), with higher excitation. However, the observations of these two sources do not cover higher-J CO lines as for HLS-2, which trace the warmer and denser components of the molecular gas reservoir. Thus, with only these two line ratio measurements, it is more difficult to draw conclusions.
Dust mass and dust temperature
The far-IR continuum emission of star-forming galaxies can be well represented by a single-temperature modified black-body model, from which the dust temperature (T_dust), dust emissivity index (β) and total dust mass (M_dust) can be derived. At high redshift, the increasing temperature of the cosmic microwave background (CMB) reduces the contrast of star-forming galaxy emission in the (sub-)millimeter and changes the apparent shape of the spectrum at these frequencies. Taking the impact of the CMB into account, the observed modified black-body emission of a high-z SMG can be expressed as in Eq. (9), under the optically thin assumption (da Cunha et al. 2013). The dust emissivity κ(ν) in the far-IR can be described by a single power law, κ(ν) = κ_0 (ν/ν_0)^β, where κ_0 stands for the absorption cross section per unit dust mass at a given reference frequency ν_0. Here we take κ_0,850µm = 0.047 m^2/kg from Draine et al. (2014) (see also Berta et al. 2021). We perform an MCMC fit (using the PyMC3 package) of our far-IR to millimeter photometric data with the model given in Eq. (9). The two sources associated with HLS-2 are fitted using their integrated flux, as they have very similar redshifts and their individual far-IR fluxes cannot be separated in the low-resolution SPIRE data. We adopt uniform priors for T_dust and M_dust and a flat prior between 1 and 3 for the dust emissivity β. We constrain the temperature to lie between T_CMB at the given redshift and 80 K. The redshift values are fixed to the z_fix given in Table 8. Figure 6 shows, as an example, the best-fit modified black-body model for HLS-2, as well as the 1σ and 2σ confidence intervals. For HLS-22, the original fit with free parameters leads to an unphysically low dust temperature of 16 K and a high β larger than 3, which is due to the poor observational constraints at 3 mm and <500 µm. We therefore perform a constrained modified black-body fit with a fixed β of 1.8, consistent with the average of the other HLS sources. The estimated dust temperatures, masses and emissivity indices are listed in Table 10.
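Since Eq. (9) is not reproduced here, the sketch below uses the common optically thin form with the da Cunha et al. (2013) CMB correction, S_ν,obs = (1+z) M_dust κ(ν_rest) [B_ν(T_dust) - B_ν(T_CMB(z))] / D_L². This standard expression and the Planck18 cosmology are assumptions rather than the paper's exact normalization, and the PyMC3 sampling itself is not reproduced.

```python
# Optically thin modified black-body with the da Cunha et al. (2013) CMB
# correction, in the standard textbook form (assumed, not copied from the paper):
#   S_nu,obs = (1+z)/D_L^2 * M_dust * kappa(nu_rest)
#              * [B_nu(nu_rest, T_dust) - B_nu(nu_rest, T_CMB(z))]
# with kappa(nu) = kappa_0 (nu/nu_0)^beta and kappa_0 = 0.047 m^2/kg at 850 um.
from astropy import units as u, constants as const
from astropy.cosmology import Planck18
from astropy.modeling.physical_models import BlackBody

KAPPA0 = 0.047 * u.m ** 2 / u.kg
NU0 = (const.c / (850 * u.um)).to(u.GHz)

def mbb_flux(nu_obs_ghz, z, m_dust_msun, t_dust_k, beta):
    nu_rest = nu_obs_ghz * u.GHz * (1 + z)
    bb_dust = BlackBody(temperature=t_dust_k * u.K)(nu_rest)
    bb_cmb = BlackBody(temperature=2.725 * (1 + z) * u.K)(nu_rest)
    kappa = KAPPA0 * (nu_rest / NU0) ** beta
    d_l = Planck18.luminosity_distance(z)
    s_nu = (1 + z) / d_l ** 2 * m_dust_msun * u.Msun * kappa * (bb_dust - bb_cmb) * u.sr
    return s_nu.to(u.mJy)

# Example: a 1.2e9 M_sun, 35 K, beta=1.8 dust reservoir at z=5.2 observed at 150 GHz
print(mbb_flux(150.0, 5.2, 1.2e9, 35.0, 1.8))
```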
We derive dust masses of ∼10^9 M_⊙, consistent with the dust masses derived for bright SMGs selected from blind single-dish surveys (Santini et al. 2010; Miettinen et al. 2017). The abundant dust indicates that these high-z dusty star-forming galaxies have already experienced rapid metal enrichment within the first few billion years of the Universe. The dust emissivity β of our sample (excluding HLS-22) has a median value of 1.75, which is also consistent with the values found in a variety of galaxies across cosmic time.
We also measure the far-IR luminosities (L FIR ) by integrating the model SEDs between 50 and 300 µm in the rest frame of each source. The L FIR values are listed in Table 10. Our observations cannot properly constrain and model the mid-IR emission of the galaxies. Thus, we extrapolate our L FIR (50-300 µm) to the total infrared luminosity (L IR , 3-1000 µm) by multiplying L FIR by a factor of 1.3, based on the calibrations given in Graciá-Carpio et al. (2008). We further derive the star formation rates, SFR, based on the standard scaling relation from Kennicutt & Evans (2012). The corresponding results are also listed in Table 10.
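As a numerical illustration of these conversions: multiplying L_FIR by 1.3 and applying the Kennicutt & Evans (2012) total-IR calibration gives the SFR. The calibration constant below (log SFR = log L_IR[erg/s] - 43.41, Kroupa IMF) is the standard published value, quoted here from memory rather than from this paper, and the example luminosity is arbitrary.

```python
# Illustration of the L_FIR -> L_IR -> SFR conversion described above.
# Kennicutt & Evans (2012) TIR calibration: log SFR = log L_IR[erg/s] - 43.41
# (Kroupa IMF); check against the paper's tables before reuse.
L_SUN_ERG_S = 3.828e33

def sfr_from_lfir(l_fir_lsun):
    l_ir = 1.3 * l_fir_lsun                      # L_FIR(50-300um) -> L_IR(3-1000um)
    return l_ir * L_SUN_ERG_S / 10 ** 43.41      # M_sun / yr

print(f"SFR ~ {sfr_from_lfir(5e12):.0f} M_sun/yr for L_FIR = 5e12 L_sun")
```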
The dust temperature of our sample varies from 18 to 41 K. Fig. 7 shows the comparison between the dust temperatures of our sample and those of other DSFGs and star-forming galaxies (Roseboom et al. 2013; Riechers et al. 2014, 2017; Pavesi et al. 2018; Béthermin et al. 2020; Faisst et al. 2020; Neri et al. 2020; Bakx et al. 2021; Sugahara et al. 2021) at different redshifts. As in our analysis, the literature dust temperatures used for comparison are all derived under the optically thin assumption. The average dust temperatures of normal star-forming and starburst galaxies from Béthermin et al. (2015a) and Schreiber et al. (2018) are also shown as baselines for the comparison at different redshifts. We find a large scatter in the dust temperature of our sample with respect to these empirical T_dust(z) scaling relations of star-forming/starburst galaxies. Among our HLS and the literature samples, HLS-3 shows one of the lowest dust temperatures, 23 (18) K, while the other three sources at higher redshifts have higher dust temperatures, not distinct from those of literature DSFGs and from the average dust temperature of normal star-forming galaxies. DSFGs with apparently cold dust temperatures have been reported by several studies in recent years (Jin et al. 2019; Neri et al. 2020). Similarly to these galaxies, the redder/colder far-IR SED of HLS-3 can resemble that of normal DSFGs at much higher redshift in the SED modeling, which explains the significant deviation of its far-IR photometric redshift from its spectroscopic redshift based on the Béthermin et al. (2015a) SED template. At fixed L IR , we expect galaxies with colder dust temperatures to be brighter at 1.2 mm, so these galaxies are favored by the selection of candidate high-z DSFGs based on red far-IR to millimeter colors.
Molecular gas mass
Both the CO emission lines and the dust continuum in the Rayleigh-Jeans tail of the far-IR SED have been widely used to estimate the amount of molecular gas in galaxies (Carilli & Walter 2013; Hodge & da Cunha 2020). In this section we measure the molecular gas mass for our sample and cross-validate the results with various methods.
Fig. 7 (caption, continued): We also overlay the average T_dust-z relations of main-sequence galaxies derived by Schreiber et al. (2018) and Béthermin et al. (2015a) based on observational data. For HLS-4, with only a photometric redshift, we show with the dash-dotted grey line the degeneracy between dust temperature and redshift. The T_dust of HLS-3 at the two possible redshifts are both plotted and connected by the red solid line.
The detections of and constraints on the CO emission lines enable an estimate of the molecular gas mass using the CO luminosity-to-molecular gas mass conversion factor α_CO. Robust estimates of the conversion factor are mostly made on the lowest-J transition, CO(1-0), while the CO detections in our sample start from CO(3-2) to CO(5-4). Thus, we need to convert the luminosities of the lowest-J CO lines detected in our observations to CO(1-0), in addition to assuming the α_CO conversion factor between CO(1-0) luminosity and molecular gas mass. In our case, we take advantage of the multiple line detections/flux upper limits in the NOEMA spectrum of each source to find matching cases in the literature and roughly estimate the CO(1-0) luminosity and molecular gas mass. As described in Sect. 5.1, for each source we compare the CO luminosity ratios with literature results and identify the cases whose CO SLEDs can reproduce the observed values. For HLS-2-1 and HLS-22, we apply L'_CO(5-4)/L'_CO(1-0) = 0.32 and L'_CO(3-2)/L'_CO(1-0) = 0.52, respectively, from the average SLED of unlensed SMGs in Bothwell et al. (2013).
Having L'_CO(1-0) (see Table 11), we then estimate the total molecular gas mass with a fixed conversion factor α_CO. As the derived star formation rates do not reveal solid evidence of an ongoing starburst in our sample, we adopt the typical Milky Way value of α_CO = 4.36 M_⊙ (K km/s pc^2)^-1. The estimated molecular gas masses are also listed in Table 11.
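The arithmetic behind these two steps (and the gas depletion time used later) is sketched below; the SLED ratios are those quoted above from Bothwell et al. (2013), while the example line luminosity and SFR are arbitrary.

```python
# Sketch of the CO-based gas mass estimate described above.
# r_J1 is the L'_CO(J-(J-1)) / L'_CO(1-0) ratio; the text quotes 0.32 for
# CO(5-4) and 0.52 for CO(3-2) from the average SMG SLED of Bothwell et al. (2013).
ALPHA_CO_MW = 4.36            # M_sun (K km/s pc^2)^-1, Milky Way value adopted in the text
ALPHA_CO_SB = 0.8             # starburst-like value discussed as an alternative

def gas_mass(l_prime_co, r_j1, alpha_co=ALPHA_CO_MW):
    """Molecular gas mass [M_sun] from a mid-J CO line luminosity [K km/s pc^2]."""
    l_co10 = l_prime_co / r_j1
    return alpha_co * l_co10

def depletion_time_gyr(m_gas_msun, sfr_msun_yr):
    return m_gas_msun / sfr_msun_yr / 1e9

m_gas = gas_mass(3e10, 0.32)          # e.g. a CO(5-4) luminosity of 3e10 K km/s pc^2
print(f"M_gas ~ {m_gas:.1e} M_sun, t_dep ~ {depletion_time_gyr(m_gas, 500.0):.2f} Gyr")
```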
For all sources, we also provide an estimate of their molecular gas mass using their continuum emission in the Rayleigh-Jeans tail. Following the calibration described in Scoville et al. (2016), we derive the luminosity of the dust emission at rest-frame 850 µm, using the series of optically thin modified black-body models generated by the combinations of fitted parameters, and list the results in Table 11. We find that the differences between the molecular gas masses estimated by the two methods are within a factor of 2. Our galaxies have molecular gas masses of 1-3×10^11 M_⊙, suggesting a gas-rich nature. For HLS-4, we derive a molecular gas reservoir similarly massive to those of the 4 sources with CO detections.
One of the primary sources of uncertainty in the molecular gas mass measurement is the CO-to-H2 conversion factor (α_CO). We applied a typical Milky Way α_CO value of 4.36 to the CO(1-0) luminosities. The alternative estimate, based on Scoville et al. (2016), has an equivalent α_CO of 6.5. However, previous studies find that starburst galaxies can have a much lower α_CO compared to the Milky Way-like values typical of normal star-forming galaxies (e.g. Downes & Solomon 1998; Tacconi et al. 2008). The exact value of α_CO at high redshift is still highly uncertain. Although there is evidence for α_CO values as large as the Milky Way one in high-z SMGs, starburst-like values are also widely used in previous studies. This would introduce differences of a factor of 5-7 in the molecular gas mass measurements. The impact of α_CO is accounted for in the values of molecular gas mass and gas depletion time given in Table 11.
Combining the measurements from Tables 10 and 11, we derive the gas-to-dust ratios of the 4 sources (HLS-2-1, HLS-2-2, HLS-3 and HLS-22) with relatively secure spectroscopic redshifts and dust-independent molecular gas mass measurements from CO lines. Under the assumption of a Milky Way-like α_CO, our analysis yields an average gas-to-dust ratio of 113, which is in line with the values found in local and high-z massive galaxies (Santini et al. 2010; Rémy-Ruyer et al. 2014; De Vis et al. 2019; Rujopakarn et al. 2019) and consistent with the values expected at solar metallicity (Leroy et al. 2011; Magdis et al. 2012; Shapley et al. 2020). A lower, starburst-like α_CO would lead to an average gas-to-dust ratio 5 to 8 times lower, which is also consistent with the results of Rowlands et al. (2014) under similar assumptions, although at the extreme end of their values. Such abundant dust in the ISM would be difficult to explain unless the sources are already enriched to super-solar metallicity at z = 3-5 (Chen et al. 2013; Santini et al. 2014) or (and) they are undergoing vigorous merger+starburst events (Silverman et al. 2018).
We finally derive the depletion time of the molecular gas in each galaxy using the molecular gas mass and the SFR. For the molecular gas mass, we use M_gas,S16 to keep the measurement consistent among all galaxies with or without CO line detections. The results are given in the last column of Table 11. Fig. 8 shows the gas depletion times of the HLS sources compared to high-z main-sequence galaxies (Tacconi et al. 2020) and SMGs (Dunne et al. 2022). To account for the uncertainties on α_CO, the plot marks τ_dep with rectangles, whose upper and lower bounds correspond to τ_dep derived using α_CO values of 6.5 and 0.8, respectively. Fig. 8 shows that most of the HLS sources have short gas depletion times of a few hundred Myr, which is typical of high-z SMGs in Dunne et al. (2022). The only exception, HLS-3, possibly has a long gas depletion time of up to a few Gyr, comparable to that of main-sequence galaxies at the same redshift. Remarkably, HLS-3 also has the largest millimeter continuum size (2.8"×0.7", see Table 3) and the lowest dust temperature (23 +7 −6 K at z=3.123 or 18 +6 −5 K at z=2.299, see Table 10). These atypical properties among SMGs, in addition to its long gas depletion time, suggest that HLS-3 is more likely a massive main-sequence galaxy undergoing secular evolution. As reported in Table 3, HLS-2-1 and HLS-3 are already partially resolved in dust continuum with compact NOEMA configurations. This could further suggest extended distributions of their molecular gas reservoirs/disks.
A possible over-density of DSFGs at z=5.2
We find that HLS-2-1 and HLS-2-2, separated by 12 arcsec on the map, both have a redshift of ∼5.2. They are also located within 2 arcmin of HLSJ091828.6+514223, a bright lensed DSFG first found by the Herschel Lensing Survey at a similar redshift of z=5.243 (Egami et al. 2010; Combes et al. 2012; Rawle et al. 2014). At this redshift, their projected separation on the sky corresponds to a physical transverse distance of ∼800 kpc.
Given the close spectroscopic redshifts of HLS-2-1 and HLSJ091828.6+514223, the physical distance between these two sources is given by their transverse distance D_t = 796 kpc, computed as D_t = D_A × θ_sep. The physical distance between HLS-2-1 (or HLSJ091828.6+514223) and HLS-2-2, which are separated in both redshift and sky coordinates, is estimated approximately by combining their transverse and line-of-sight comoving separations. We derive a physical distance (D) of ∼9.4 Mpc between HLS-2-1/HLSJ091828.6+514223 and HLS-2-2. The transverse separation between HLSJ091828.6+514223 and HLS-2-1 corresponds to 5.0 comoving Mpc, which is comparable to the scale of the z∼5 overdensities found in COSMOS and GOODS-N associated with SMGs/DSFGs (Mitsuhashi et al. 2021; Herard-Demanche et al. 2023). If we assume that the core of the possible structure has the same redshift as HLS-2-1 and HLSJ091828.6+514223, the redshift offset of HLS-2-2 corresponds to a line-of-sight comoving distance of 58 Mpc. This is an order of magnitude larger than the scale of the SMG over-density in COSMOS, while still being comparable to the proto-clusters traced by Ly-α emitters at z=5-6 (Jiang et al. 2018; Calvi et al. 2021). Although the stochasticity of star formation makes SMGs an unreliable tracer of the most massive halos at intermediate redshift, the high SFRs of the 3 sources at such high redshift can only be produced by the most massive galaxies tracing the densest environments in the early Universe (Miller et al. 2015). A more complete redshift survey of the other NIKA2 sources in the HLS field, as well as deep optical-IR observations of the same region, could reveal more members of this possible galaxy over-density, confirm its nature, and clarify its cosmic evolution.
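The separations quoted above follow from standard cosmological relations; a sketch with astropy is given below. The Planck18 cosmology is an assumption (the adopted parameters are not restated here), so the numbers differ slightly from those in the text.

```python
# Sketch of the separation estimates: transverse distance from the angular
# separation, line-of-sight comoving separation from the redshift difference,
# and their quadrature combination. Planck18 is an assumed cosmology.
import numpy as np
from astropy import units as u
from astropy.cosmology import Planck18 as cosmo

z_ref, z_2 = 5.241, 5.128                 # HLS-2-1 and HLS-2-2 (Table 8)
theta = 2.0 * u.arcmin                    # HLS-2-1 to HLSJ091828.6+514223

d_a = cosmo.angular_diameter_distance(z_ref)
d_t_phys = (d_a * theta.to(u.rad).value).to(u.kpc)     # transverse, physical
d_t_com = (d_t_phys * (1 + z_ref)).to(u.Mpc)           # transverse, comoving

d_los_com = cosmo.comoving_distance(z_ref) - cosmo.comoving_distance(z_2)
d_3d_com = np.hypot(d_t_com.value, d_los_com.to(u.Mpc).value) * u.Mpc

print(f"transverse: {d_t_phys:.0f} physical, {d_t_com:.1f} comoving")
print(f"line of sight: {d_los_com:.0f} comoving")
print(f"3D separation: {d_3d_com:.0f} comoving, {d_3d_com / (1 + z_ref):.1f} physical")
```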
Summary and conclusions
We present a study of 4 DSFGs selected from the early science verification observations of NIKA2, the kinetic inductance detector (KID) camera installed on the IRAM 30m telescope.
We develop a new framework to determine the redshifts of sources from a joint analysis of multi-wavelength photometry and millimeter spectral scans. Using the additional constraints on the IR luminosity from the SED modeling, we predict the fluxes of the strongest emission lines of CO, [CI] and [CII], generate the corresponding model spectra at given redshifts, evaluate the goodness of match between the broad-band SEDs, the models and the observed millimeter spectra, and quantitatively identify the most probable redshift solutions based on all this information.
Based on a prior selection of red far-IR to millimeter colors, we identify a sample of 4 NIKA2 millimeter sources with mJy fluxes in the HLS field and possible high redshifts, at z = 3-7. We conducted deep NOEMA observations of these sources and resolve them into 5 individual sources. With the NOEMA spectral scans and the newly developed joint-analysis method, we obtain their redshifts and confirm that they all have z > 3. Our analysis reveals that most of their properties, such as star formation rate, dust temperature and gas depletion time, are typical of high-z DSFGs with very active star formation. However, we also find that one of our sources (HLS-3) shows a significantly lower dust temperature and a longer gas depletion time, resembling the properties of secularly evolving main-sequence star-forming galaxies. Furthermore, we find two sources at z=5.2 separated by only 5 comoving Mpc, possibly linked to a third source lying at a distance comparable to the size of proto-clusters traced by Ly-α emitters at z = 5-6. This could be the hint of an interesting high-z structure in this field.
We demonstrate that our method to constrain the redshift, applied to millimeter-selected DSFGs with only far-IR to millimeter photometry and blind spectral scans, can determine the true redshift accurately. Such accuracy of the redshift determination with multiple low-SNR emission lines shows promising potential for blind redshift searches on large samples of high-z millimeter-faint DSFGs, even in the absence of accurate optical-IR photometric redshifts. The method is especially expected to improve the design and efficiency of blind redshift searches on candidate high-z DSFGs detected by the NIKA2 Cosmological Legacy Survey (N2CLS). Indeed, most N2CLS sources are fainter (sub-mJy) than the 4 sources discussed here. The new tool we developed will allow us to mitigate the increase of NOEMA or ALMA time that will be needed for these faint DSFGs.
The joint analysis method also has implications for the strategy to obtain accurate redshifts and to trace the cosmic evolution of high-z DSFGs. The next-generation single-dish telescopes/instruments, such as CCAT-prime (CCAT-Prime Collaboration et al. 2023) and LMT TolTEC (Wilson et al. 2020), plan to devote a substantial fraction of their observing time to wide-area deep blind surveys. With thousands of DSFGs expected to be detected, these surveys aim to reveal the role of DSFGs in the formation and evolution of massive galaxies through their cosmic evolution and environment/clustering. However, compared to the existing deep millimeter surveys, the majority of these planned surveys are not expected to be completely covered by deep near-IR surveys at >2 µm (Wang et al. 2019; Williams et al. 2019; Fudamoto et al. 2021; Xiao et al. 2023). The lack of wide and deep near-IR surveys like COSMOS-Web (Casey et al. 2023) could make it difficult to identify the counterparts of high-z DSFGs, which further prevents the application of optical-IR SED modeling for efficient and accurate redshift measurements. Our work on the HLS sources under similar conditions, however, demonstrates that the joint constraints from photometric redshifts, IR luminosities and millimeter spectra, obtained from far-IR SEDs and blind spectral scans, can also provide promising accuracy and robustness for efficient redshift searches of high-z DSFGs. Further improvements following this strategy, including the application of this method to the redshift identification of a larger sample of DSFGs discovered by the NIKA2 Cosmological Legacy Survey (Bing et al. 2023), are expected to benefit the key scientific objectives of these future wide-area (sub)millimeter surveys.
Appendix A: Robustness of the joint-likelihood method using different line widths
One of the key assumptions is the width of the emission lines, which we fix to 500 km/s. Previous studies reveal a correlation between the total IR/line luminosity and the far-IR to millimeter line width, possibly originating from the regulation of the gaseous disk rotation by gravity or (and) feedback from star formation or AGN (Bothwell et al. 2013; Goto & Toft 2015). The assumed line width roughly matches the average of DSFGs with ULIRG-HyLIRG infrared luminosities, similar to the derived IR luminosities of our sample. However, observations also show a significant scatter among IR-luminous DSFGs. Our sample could be a typical example of this variety of line widths among luminous DSFGs, with line FWHMs ranging from ∼250 km/s (HLS-2-1) to ∼750 km/s (HLS-3). Moreover, the assumption of a Gaussian line profile generally holds for most of our sources, but HLS-3, as described in Sect. 5.1, shows a significant double-peaked feature in the emission line detected in band2.
The impact of a mismatch between the real and assumed line widths and profiles on the joint analysis result is thus uncertain a priori. We therefore perform the following tests to check if and how the results of the joint analysis change with the assumption on the line width. In addition to the default setting of a 500 km/s FWHM Gaussian line profile, we further perform the joint analysis with line-profile FWHMs of 300 km/s and 800 km/s, using the redshift and infrared luminosity derived from the fit with Béthermin et al. (2015a). We identify the redshift solutions of the 4 HLS sources with at least one detected line under these two different assumptions, using the same method and criteria as described in Sect. 4.2. The results are listed in Table A.1.
From the results in Table A.1, we conclude that the redshift solutions from the joint analysis method are generally not sensitive to the assumptions on the emission line widths. For all sources but HLS-22, we find little variation in the redshift solutions using different line width assumptions. The differences of ∆z ∼ 0.003, as shown in Fig. A.1, mostly originate from changes of the peak intensity of the emission lines, which lead to slight variations of χ²(z). However, such a small difference is still within the width of the emission lines, and will neither cause a false identification of an emission line, nor affect the analysis of line fluxes and kinematics in Sect. 5.1 when the corresponding central frequency is used as an initial guess. The only case of a significant inconsistency in redshift in this test is HLS-22, where the procedure using a 300 km/s line width strongly favors a redshift solution at z = 2.436. Given the frequencies of the two emission lines detected with high SNR, we are confident about the redshift solution at z = 3.036 from the analysis with the model spectra of 500 km/s line width. We therefore checked the 300 km/s model and the data at z = 2.436 and find that the mis-identification is caused by a strong noise spike at 100.64 GHz, as shown in Fig. A.2. In our calculation of χ²_spec(z) and, correspondingly, the joint log-likelihood, the variation with redshift is dominated by the goodness of match between model and data within the range of the model line profiles. With a narrower line width in the model, the number of data points that dominate the variation of χ²_spec(z) is smaller than in the cases with a wider line width. This makes the analysis with a narrow line width more sensitive to single spurious data points, like the noise spike in the HLS-22 spectrum, and leads to the mis-identification shown in Fig. A.2. A less aggressive spectral binning in frequency and a pre-processing with sigma clipping could probably reduce such false identifications in practice.
Appendix B: Robustness of the joint-likelihood method with narrower frequency coverage
Millimeter spectral scans made by interferometers are widely used to blindly search for emission lines from candidate high-redshift DSFGs, determine their spectroscopic redshift and study the conditions of their cold ISM (Strandet et al. 2016; Fudamoto et al. 2017; Jin et al. 2019; Neri et al. 2020; Reuter et al. 2020). These spectral scan observations are designed to cover a continuous frequency range with several spectral setups. For ALMA and NOEMA, the default setup of the blind spectral scans at their current lowest frequency band covers ∼31 GHz. The earliest observations of the HLS sources, in 2018, blindly and continuously cover their spectra between 71 GHz and 102 GHz in NOEMA band1 with 2 setups, following this basic strategy of blind redshift search. To test the joint analysis method under more realistic conditions for large DSFG redshift survey projects, we apply the method to these band1 spectra only and compare the resulting redshifts with the ones in Sect. 4.2. The results from this analysis are presented in Table B.1 and Fig. B.1. From the comparison between the redshift analyses using the early and full datasets, we find that the best redshift solutions of HLS-2-1 and HLS-2-2 remain stable, while the results for HLS-22 and HLS-3 are affected by the narrower spectral coverage. Such a difference in robustness under different spectral coverages can be explained as follows. The joint analysis method works equivalently to an automatic alignment and stacking of 2 or more lines in the spectra. At the correct redshift and with multiple lines covered, this method can numerically boost the stacked SNR of emission lines, even if none of the single lines is detected with high significance. At a fixed frequency coverage, sources at higher redshift have more CO lines covered. Taking our sample as an example, although the lines of HLS-2-1 and HLS-2-2 at z∼5.2 are only tentatively detected, their relatively high redshifts ensure that at least two CO/[CI] lines are covered by the spectral scan. On the contrary, the narrower spectral coverage leaves only one CO line in the spectral coverage of the early observations of HLS-3 and HLS-22, leading to ambiguous redshift solutions regardless of whether the line is detected at high significance. From the comparison of the redshift robustness of these two groups of sources under different spectral coverages, we also emphasize that a wide spectral coverage, including at least two strong molecular/atomic lines, can be even more crucial for the redshift identification of DSFGs than reaching a high sensitivity.
Appendix C: Cross-validation and tension between different SED modeling
The application of the joint-analysis framework to the HLS sources relies largely on the current knowledge of the far-IR SEDs of high-z galaxies. However, although Herschel provides estimates of the mean far-IR SEDs and their redshift evolution for the main population of star-forming galaxies, these results are also limited by significant source confusion, especially in the SPIRE data at longer wavelengths. Moreover, recent studies reveal some DSFGs with apparently low dust temperatures (Jin et al. 2019), as well as a significant warm-dust contribution in some starburst galaxies (Eisenhardt et al. 2012; Wu et al. 2012; Fan et al. 2016). These results suggest that a large variation in far-IR SEDs could exist among high-z DSFG populations.
Our choice of templates cannot be free from these issues, and this is the reason why we run our joint-analysis framework with the results of two different far-IR SED templates and modeling frameworks, and cross-validate the results of the two. The analyses with the Béthermin et al. (2015a) templates and MMPZ mostly yield consistent redshift solutions. This suggests the relative stability of the joint-analysis method with input information from different SED fitting results.
However, some discrepancies for HLS-22 when using typical blind spectral scan conditions are also found in Appendix B, which led us to check their origin further. From the comparison of the derived IR luminosities in Fig. 3, and the comparison between models and data in Fig. 4 and Fig. B.1, we notice that the estimated infrared luminosities and line fluxes from MMPZ are systematically lower than those from Béthermin et al. (2015a). At the correct redshift, the predicted line fluxes of Béthermin et al. (2015a) match the observed line fluxes better than those of MMPZ. On the contrary, MMPZ generally returns more accurate photometric redshifts, especially for HLS-3, where the photometric redshift from Béthermin et al. (2015a) deviates significantly from the spectroscopic redshift. However, as indicated by the low dust temperatures of the HLS sample, it is possible that the far-IR emission properties of these galaxies are not representative of high-redshift star-forming galaxies. Thus, we decide not to express a preference for either dust template or far-IR SED fitting approach in our framework, and we recommend a cross-validation between the redshift solutions from the various methods in applications.
Moreover, the faint emission of the HLS sources in the SPIRE bands introduces large uncertainties on the constraints on the source SEDs around the peak of the far-IR emission. This, in turn, can contribute to the difference in the IR luminosities derived from methods with different prior constraints (Casey 2020). These issues also further highlight the importance of matched observations at ALMA band 8-10 frequencies for properly reconstructing the far-IR SED, as well as for estimating the IR luminosity and star formation rate of high-z DSFGs selected by millimeter surveys.
Appendix D: Impact of the scatter of the L FIR -L CO scaling relations
The redshift from the joint analysis is derived based on the goodness of match between the emission line model and the observed spectra. As mentioned in Sect. 4.2, in this approach we predict the expected fluxes of the spectral lines based on the L FIR from the SED template fitting, using the best-fit L FIR -L line scaling relations from the literature (Greve et al. 2014; Liu et al. 2015; Valentino et al. 2018). However, these scaling relations are subject to substantial scatter, up to a factor of a few, in observations. To test the possible impact of this scatter on our analysis and the robustness of the joint analysis method against it, we first checked the output best redshift solutions after adding a systematic offset to all L FIR -L line scaling relations when generating the predicted model spectra. In this test, we shift the predicted CO/[CI] line fluxes by four different systematic offsets corresponding to ±0.5 and ±1.0 times the 1σ scatter of the scaling relations. The exact values of the 1σ scatter for the considered lines are given in Table 7. The best redshift solutions for the HLS sources (except HLS-4) after applying these four different offsets in the L FIR -L line conversion are listed in Table D.1. We find that all offsets in line flux, except +1.0σ, result in best redshift solutions similar to those of the analysis using the median (zero offset) scaling relations. This suggests a good robustness of our joint analysis method against the existing scatter of the L FIR -L line scaling relations in observations. For the test with the +1.0σ offset, we further checked the reason for the discrepancy in the best redshift solutions. For HLS-22 and HLS-2-2, the application of the +1.0σ offset leads to mismatches between the model and the data due to glitches or noise spikes in the spectra (see Fig. A.2 as an example, where the noise spike is matched with a CO line at the best redshift solution of HLS-22 in this test). For HLS-3, we find this unlikely low redshift solution after applying the +1.0σ offset because the code assigns the only strong emission line in the spectral scan, at 139.746 GHz, to a lower-J CO transition. This further emphasizes the need for spectral scans wide enough to cover more than one strong spectral line for the redshift confirmation of DSFGs at moderate redshifts (i.e. z∼2-3). To test the self-consistency of the joint analysis method, we place the measured CO line fluxes and far-IR luminosities of the HLS sources at their best redshift solutions (z_best,med in Table D.1) in the corresponding L FIR -L CO diagrams, to check whether they follow the scaling relations used for the line flux predictions. The results are shown in Fig. D.1. The plotted CO and far-IR luminosities are derived using the Gaussian-fitted line fluxes and the modified black-body fitting (see Tables 9 and 10). In addition to the average L FIR -L CO correlations, we also show the sample of Cañameras et al. (2018), with multiple line transitions for the same sources, as a comparison to our sources.
From Fig. D.1, we find that most of our sources fall within ±2σ of the scaling relations used in our analysis, which is also consistent with the regions occupied by the bright submillimetre galaxies of Cañameras et al. (2018). The only exception is HLS-3, which is also highlighted in Fig. D.1. In both the L FIR -L' CO(5−4) and L FIR -L' CO(4−3) diagrams, HLS-3 falls well below the scaling relation even when we consider the scatter of these relations. However, we also note that it has one of the poorest SPIRE photometry among the four HLS sources. As the SPIRE bands close to the peak wavelength of the SED predominantly constrain the IR luminosity, it is likely that the IR luminosity of HLS-3 is much less well constrained than those of the other HLS sources, especially compared to HLS-2 and HLS-4.
Fig. 1: Cleaned images of the NOEMA observations of our four NIKA2 sources. The effective beam size and shape of each map is shown in the bottom right of each panel. The contour levels from orange to dark red correspond to -4, 4, 8 and 12× the RMS of each map, respectively. The red crosses mark the positions of the detected NOEMA sources from the uv_fit. The two resolved sources associated with HLS-2 are also labeled separately (HLS-2-1 and HLS-2-2). The scale bars in the maps (upper left) correspond to 5 arcseconds on the sky. The frequency of the continuum data is given in the lower left corner of each panel.
Fig. 5: Observations and best fits of the spectral lines, including both detections and upper limits. The best-fit model of each line is shown with the red solid line. For HLS-3, we show the two lines assuming z=3.123. The luminosity of the CO(3-2) line has a best-fit value consistent with zero.
Table 10: Dust properties of the HLS sources from the optically thin modified black-body fitting. Columns: Source, z_fix, log(M_dust,MBB/M_⊙), T_dust,MBB [K], β, L_FIR(50-300µm) [10^12 L_⊙], SFR [M_⊙/yr].
Fig. 6: Example of a modified black-body fit of the far-IR to millimeter photometric data for one of our galaxies. (a) Photometric data and best-fit model. The ±1σ and ±2σ uncertainties of the model are shown with blue shades of different transparency. (b) Corner plot of the posterior distribution of the 3 parameters. The contours correspond to 1, 1.5 and 2σ in the 2D histogram.
Fig. 8: The gas depletion time of the HLS sample based on the molecular gas mass from the Rayleigh-Jeans dust emission and the star formation rate from the far-IR luminosity. The red dashed line shows the redshift evolution of the gas depletion time of main-sequence galaxies from Tacconi et al. (2020). The grey dots show the gas depletion times of z > 2 SMGs based on the data summarized in Dunne et al. (2022).
Fig. A.2: The false identification suggests an increased sensitivity to narrow spikes in the spectrum as the model line width decreases.
Fig. A.1: Upper row: Joint log-likelihood of the 4 HLS sources with model spectra of line widths of 300 km/s, 500 km/s and 800 km/s. Lower row: Comparison between the models with these 3 line widths at the corresponding redshift solution and the observed source spectra.
Fig. B.1: Results of the joint analysis of the 4 HLS sources with spectroscopic redshifts derived in Sect. 4.2, using only the 31 GHz NOEMA spectral scans observed in 2018. The first row shows the likelihood from the SED fittings and the joint log-likelihood of the photometric and spectroscopic data, using the SED fitting outputs with the Béthermin et al. (2015a) templates and MMPZ. The comparison between the observed spectra and the model spectra predicted by the IR luminosities from the 2 SED fitting results is shown in the second and third rows.
Fig. D.1: The comparison between the L FIR -L CO correlations of different transitions (Greve et al. 2014; Liu et al. 2015) and our sample, based on measurements from our observations at the best redshift solutions. The sources with upper limits on line luminosities are shown as leftward triangles.
Table 2: Information on NOEMA follow-up observations.
Table 3: NOEMA continuum source positions and best-fit sizes.
Table 6: S/N>3 lines blindly detected in the NOEMA spectra.
Table 7: Parameters of the log-linear L FIR -L line relations used in our analysis.
Table 8: Summary of the joint-analysis redshifts of the NOEMA sources.
Table B.1: Redshift of HLS sources from the joint analysis with only the 31 GHz continuous spectra observed in 2018.
Table D .
1: Redshift of HLS sources from the joint-analysis when using different amount of offsets for all L FIR -L line scaling relations. | 16,793 | 2024-01-26T00:00:00.000 | [
"Physics"
] |
Accurate nucleon electromagnetic form factors from dispersively improved chiral effective field theory
We present a theoretical parametrization of the nucleon electromagnetic form factors (FFs) based on a combination of chiral effective field theory and dispersion analysis. The isovector spectral functions on the two-pion cut are computed using elastic unitarity, chiral pion-nucleon amplitudes, and timelike pion FF data. Higher-mass isovector and isoscalar t-channel states are described by effective poles, whose strength is fixed by sum rules (charges, radii). Excellent agreement with the spacelike proton and neutron FF data is achieved up to Q^2 \sim 1 GeV^2. Our parametrization provides proper analyticity and theoretical uncertainty estimates and can be used for low-Q^2 FF studies and proton radius extraction.
Introduction
The electromagnetic form factors (EM FFs) parametrize the transition matrix element of the EM current between nucleon states and represent basic characteristics of nucleon structure. The FFs at spacelike momentum transfers Q^2 ≲ 1 GeV^2 have been measured in a series of elastic electron scattering experiments [1,2,3], most recently at the Mainz Microtron (MAMI) [4,5,6] and at Jefferson Lab [7,8,9]. The derivative of the proton electric FF at Q^2 = 0 (charge radius) is also determined with high precision in atomic physics experiments. Discrepancies between results obtained with different methods have raised interesting questions concerning the precise value of the proton charge radius and the Q^2 → 0 extrapolation of the elastic scattering data [10,11,12]. Besides their importance for nucleon structure, the EM FFs are needed as an input in other areas of study, such as precision measurements of quantities used to test the Standard Model.
The experiments and applications require a theoretical description of the FFs that covers a broad range Q^2 ∼ few GeV^2 and controls the behavior in the Q^2 → 0 limit (higher derivatives). This can be accomplished using the framework of dispersion theory, which incorporates the analytic properties of the FF in the momentum transfer. Dispersive parametrizations of the nucleon FFs have been constructed using empirical spectral functions, determined by amplitude analysis techniques and fits to the FF data [13,14,15,16]. It would be desirable to have a dispersive parametrization that is based on first-principles dynamical calculations and permits theoretical uncertainty estimates.
In recent work we developed a method for computing the spectral functions of nucleon FFs on the two-pion cut using a combination of χEFT and amplitude analysis (dispersively improved χEFT, or DIχEFT) [17,18]. The spectral functions are constructed using the elastic unitarity condition. The N/D method is used to separate the ππ rescattering effects (contained in the pion timelike FF) from the coupling of the ππ system to the nucleon (calculable in χEFT with good convergence). The method permits computation of the two-pion spectral functions up to masses ∼1 GeV^2 with controlled accuracy. In Ref. [18] the spectral functions computed at LO, NLO, and partial N2LO accuracy were used to study the FFs at low Q^2 (< 0.5 GeV^2 for G_E, < 0.2 GeV^2 for G_M) and their derivatives.
In this letter we use DIχEFT to calculate the nucleon FFs up to Q^2 ∼ 1 GeV^2 (and higher) and construct a dispersive parametrization of the FFs with theoretical uncertainty estimates. This is achieved by extending our previous calculations in two aspects: (a) We partially include N2LO chiral loop corrections in the isovector magnetic spectral function, by parametrizing them in a form similar to the N2LO corrections in the electric case. This brings the calculation of the electric and magnetic isovector FFs up to the same order. (b) We account for higher-mass t-channel states in the spectral functions (isovector and isoscalar) by parametrizing them through effective poles, whose strength is determined by sum rules (charges, magnetic moments, radii). This allows us to extend the dispersion integrals to higher masses and compute the spacelike FFs up to higher Q^2. We obtain an excellent description of G_E and G_M up to Q^2 ∼ 2 GeV^2 with controlled theoretical accuracy. Our results represent genuine theoretical predictions, as no fits are performed and no spacelike FF data are used in determining the parameters. In the following we describe the calculation and results and discuss potential applications of our FF parametrization.
Method
The FFs are analytic functions of the invariant momentum transfer t ≡ −Q^2 and satisfy dispersion relations, Eq. (1). They allow one to reconstruct the spacelike FFs from the spectral functions Im G_i^{p,n}(t′) on the cut at t′ > t_thr. For theoretical analysis one uses the isovector and isoscalar combinations. In the isovector FF the lowest singularity is the two-pion cut with t_thr = 4M_π^2. The spectral functions on the two-pion cut can be obtained from the elastic unitarity conditions, which in the N/D representation take the form of Eqs. (2) and (3) [13,19,20], where k_cm = √(t′/4 − M_π^2) is the center-of-mass momentum of the ππ system in the t-channel.
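The displayed formulas themselves are not reproduced in this excerpt. For orientation, an unsubtracted dispersion relation of the kind referred to as Eq. (1), together with the conventional isovector/isoscalar decomposition, has the standard form (our transcription; the precise conventions of the original are an assumption):

```latex
G_i^{p,n}(t) \;=\; \frac{1}{\pi}\int_{t_{\rm thr}}^{\infty}\! dt'\,
\frac{\operatorname{Im}\,G_i^{p,n}(t')}{t'-t-i0}\,, \qquad i=E,M, \qquad
G_i^{V,S} \;=\; \tfrac{1}{2}\!\left(G_i^{p}\mp G_i^{n}\right).
```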
The functions J^1_± are the ratios of the ππ → NN partial-wave amplitudes and the timelike pion FF, which are real for t > 4M_π^2 and free of ππ rescattering effects. These functions can be computed in χEFT with good convergence [17,18]. |F_π(t′)|^2 is the squared modulus of the timelike pion FF, which contains the ππ rescattering effects and the ρ meson resonance. This function is measured in e+e− → π+π− exclusive annihilation experiments with high precision and can be taken from a parametrization of the data; see Ref. [21] for a review. Because the ππ state practically exhausts the e+e− annihilation cross section at t ≲ 1 GeV^2, the elastic unitarity relations Eqs. (2) and (3) are assumed to be valid up to t = 1 GeV^2.
The calculation of the J^1_± functions in relativistic χEFT is described in Ref. [18]. At LO they are given by the N and Δ Born terms in the ππ → NN amplitudes and the Weinberg-Tomozawa term. At NLO corrections arise at tree level from an NLO ππNN contact term in the chiral Lagrangian. At N2LO pion loop corrections appear, and the structure becomes considerably more complex. In Ref. [18] we estimated the N2LO corrections to J^1_+ by assuming that the full N2LO result has the same structure as the tree-level N2LO result, in which the dominant contribution is the term proportional to d_1 + d_2. No such estimate was performed for J^1_-, since its N2LO corrections arise entirely from loops. In order to extend the reach of our calculation we now want to estimate J^1_+ and J^1_- at the same level. This becomes possible with a generalization of our previous arguments. Inspecting the structure of the N2LO loop corrections in the πN → πN amplitude, we find that the dominant t-channel correction can be parametrized in terms of the invariant amplitudes A and B [22], with a single strength parameter λ. In this form the N2LO loop result in J^1_- has the same structure as a tree-level correction arising from contact terms, and the parameter λ can be determined in the same way as in our previous estimate for J^1_+.

In order to extend the isovector spectral integrals to masses t′ > 1 GeV^2 we need to parametrize the isovector spectral function beyond the two-pion cut. The e+e− exclusive annihilation data show that the isovector cross section above t ∼ 1 GeV^2 is overwhelmingly in the 4π channel and peaks at t ≈ 2.3 GeV^2 [21]. (Incidentally, this value coincides with the squared mass of the ρ′ resonance observed in the ππ channel.) It is reasonable to assume that the strength distribution in the nucleon spectral function follows a similar pattern. The simplest way to parametrize the high-mass contribution to the isovector spectral function is by a single effective pole, Eq. (5), where we choose M_1^2 = M_ρ′^2 = 2.1 GeV^2. The total isovector spectral function is then given by the sum of the ππ cut (calculated in DIχEFT) and the high-mass part (parametrized by the effective pole).

We then determine the parameters of the N2LO contributions in G^V_{E,M}[ππ] and the strength of the effective pole in G^V_{E,M}[high-mass] by imposing the sum rules for the isovector charge and magnetic moment, and for the electric and magnetic radii, Eqs. (7)-(10) (here t_thr = 4M_π^2). Since the charge and magnetic moment are known precisely, the unknown parameters are essentially determined in terms of the isovector charge and magnetic radii, which can be allowed to vary over a reasonable range (see below). This makes our parametrization particularly convenient for applications where the nucleon radii are regarded as basic parameters or extracted from data.

In the isoscalar FF the lowest singularity is the 3-pion cut (t_thr = 9M_π^2). The strength at t < 1 GeV^2 is overwhelmingly concentrated in the ω resonance, which we describe by a zero-width pole. At t ≳ 1 GeV^2 the KK and other channels open up. The exclusive e+e− annihilation data show that the strength at t ∼ 1 GeV^2 is concentrated in the φ resonance [21]. We therefore parametrize the high-mass isoscalar strength by an effective pole at the φ mass. Altogether, our parametrization of the isoscalar spectral function is given by Eq. (11).

Table 1: Parameters of the effective poles describing the high-mass isovector spectral function, Eq. (5), and the isoscalar spectral function, Eq. (11), as determined by the sum rule Eqs. (7)-(10) and the corresponding isoscalar sum rule.
The strengths of the ω and high-mass (φ) poles are fixed by imposing the sum rules for the isoscalar charges and radii, i.e., the analog of Eqs. (7)-(10) with V → S and (p − n) → (p + n).
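Eqs. (7)-(10) themselves are not displayed in this excerpt. Generic charge and radius sum rules of this type follow from expanding the dispersion relation around t = 0; in our transcription (the normalization conventions are an assumption) they read:

```latex
\frac{1}{\pi}\int_{t_{\rm thr}}^{\infty}\frac{dt'}{t'}\,\operatorname{Im}\,G_i^{V}(t') \;=\; G_i^{V}(0),
\qquad
\frac{1}{\pi}\int_{t_{\rm thr}}^{\infty}\frac{dt'}{t'^{\,2}}\,\operatorname{Im}\,G_i^{V}(t')
\;=\; \left.\frac{dG_i^{V}}{dt}\right|_{t=0}
\;=\; \frac{\langle r_i^{2}\rangle^{V}}{6}\,G_i^{V}(0), \qquad i=E,M,
```

with G_E^V(0) = 1/2 and G_M^V(0) = (μ_p − μ_n)/2 in this convention; the isoscalar versions follow with V → S and (p − n) → (p + n), as stated above.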
In fixing the isovector and isoscalar spectral function parameters through the sum rules Eqs. (7)-(10) and their isoscalar analog, we use the Particle Data Group (PDG) values of the proton and neutron charge radii [23], together with a recent dispersive calculation of the isovector charge radius [24]. For the proton and neutron magnetic radii we use the results of Refs. [16,25], which are compatible with the PDG values in the neutron case. The empirical variation of the radii generates a range of the parameters, which then produces the uncertainty bands in our predictions. The resulting parameters are summarized in Table 1. The uncertainty induced by the empirical pion timelike FF in the isovector calculation using Eqs. (2) and (3) is small and can be neglected.
In the present calculation we parametrize the high-mass states in the spectral functions by a single effective pole, whose strength can be fixed by the sum rules. The approximation is justified as long as we restrict ourselves to the spacelike FFs at moderate momentum transfers |t| ∼ 1 GeV^2. We can demonstrate this explicitly for the isovector FF, using a technique described in Ref. [13]. We take the difference Δ_E(t) of the empirical spacelike FF and the finite dispersive integral over the ππ cut up to t_max = 1 GeV^2. This quantity represents the high-mass part of the dispersive integral, which is to be approximated by the dispersive integral with the effective pole, a_E^{(1)}/(t − M_1^2). Plotting 1/Δ_E(t) at t < 0 (see Fig. 1) one sees that the dependence on t is approximately linear, and that the single-pole form provides an adequate description up to |t| < 2 GeV^2. Note that this is achieved with the pole parameters fixed by the sum rules Eqs. (7)-(10), and that we do not perform a fit of the spacelike FF data in Fig. 1.
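A minimal numerical sketch of this single-pole check is given below. The empirical form factor and the two-pion spectral function are replaced by toy placeholder functions (a dipole and a Lorentzian), so all numbers are illustrative only; the procedure, not the values, is the point.

```python
# Sketch of the single-pole check: Delta_E(t) = G_E,emp(t) minus the pi-pi
# dispersion integral truncated at t_max; if a single pole a/(t - M1^2)
# saturates the remainder, 1/Delta_E(t) is approximately linear in t.
import numpy as np
from scipy.integrate import quad

T_THR, T_MAX = 4 * 0.0195, 1.0          # two-pion threshold and cutoff (GeV^2)

def G_E_emp(t):                          # toy stand-in for the empirical FF
    return 1.0 / (1.0 - t / 0.71) ** 2   # dipole form, spacelike t < 0

def im_GE_twopion(tp):                   # toy stand-in for Im G_E^V on the cut
    return 0.01 / ((tp - 0.60) ** 2 + 0.01)

def delta_E(t):
    integral, _ = quad(lambda tp: im_GE_twopion(tp) / (tp - t), T_THR, T_MAX)
    return G_E_emp(t) - integral / np.pi

ts = np.linspace(-2.0, -0.1, 20)
inv_delta = np.array([1.0 / delta_E(t) for t in ts])
slope, intercept = np.polyfit(ts, inv_delta, 1)     # 1/Delta ~ (t - M1^2)/a
print("effective pole mass squared ~", round(-intercept / slope, 2), "GeV^2")
```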
The nucleon FFs obey superconvergence relations which guarantee the absence of a t^{-1} power in the asymptotic behavior for |t| → ∞. In the present calculation we focus on the FFs at limited spacelike momenta |t| ≲ 1 GeV^2 and are not concerned with the asymptotic behavior. The relation Eq. (13) could easily be implemented in our approach by parametrizing the high-mass spectral density in a more flexible form; however, this would require fitting the spacelike FF data in order to determine the parameters, which is not our intention here.
Results
The spectral functions are the primary quantities calculated in our approach. The results for the isovector spectral function on the two-pion cut, Eqs. (2) and (3), are shown in Fig. 2. The bands show the total uncertainty of our calculation, resulting from the uncertainty of the low-energy constants in the χEFT calculation and the empirical uncertainty of the nucleon radii used to fix the parameters (see above). Compared to Ref. [18] the electric and magnetic spectral functions are now calculated at the same order (LO + NLO + partial N2LO). Both spectral functions now show a trend to negative values above the ρ peak. Our results agree overall very well with those obtained in an analysis of πN scattering data using Roy-Steiner equations [24]; only at the ρ peak our Im G_E^V is ∼15% larger. Our uncertainties are comparable to those of the Roy-Steiner analysis. Also shown in Fig. 2 are the empirical spectral functions of Ref. [26].
The spacelike EM FFs calculated with the dispersion integrals Eq. (1) are shown in Fig. 3. The proton and neutron FFs were obtained as G^{p,n} = G^S ± G^V. Contrary to Ref. [18] we now do not perform any subtractions and calculate the dispersive integral without a cutoff in t′, as the high-mass parts of the spectral functions are now parametrized consistently through the effective poles. Our results show excellent agreement with the recent FF parametrization of Ref. [3] for all momentum transfers Q^2 ≲ 1 GeV^2, and even up to ∼2 GeV^2, which is remarkable in view of our simple parametrization of the high-mass spectral functions. Note that G_E^n involves substantial cancellations between the isovector and isoscalar components, so that its relative uncertainties are larger than those of the other FFs.
The higher derivatives of the FFs (moments) are needed in the extraction of the proton radius from experimental data. In our dispersive approach they are evaluated via Eq. (14); see Ref. [18] for details. The moments obtained with our spectral functions are summarized in Table 2. Compared to the results quoted in Ref. [18] the isovector LO and NLO parts are exactly the same; the only changes are the estimated partial N2LO contributions and the added isovector high-mass contribution. The isoscalar part is the same as in Ref. [18]; only the couplings have now been determined through the charge and radius sum rules. Our new moments have smaller uncertainty than those of Ref. [18]. They confirm the "unnatural size" of the higher moments (compared to the dipole expectation) observed in Ref. [18].

Table 2: FF moments obtained from the dispersive integral Eq. (14) with the DIχEFT spectral functions (LO + NLO + partial N2LO). *The r^2 moments are input values (see text).
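To illustrate how moments of this kind follow from a spectral function, a small numerical sketch is given below; it uses the Taylor expansion of an unsubtracted dispersion relation around t = 0 and a toy ρ-like spectral function, and is not necessarily identical to the paper's Eq. (14).

```python
# Taylor coefficients of the FF from a spectral function:
# expanding G(t) = (1/pi) Int dt' Im G(t')/(t' - t) around t = 0 gives
# G(t) = sum_n c_n t^n  with  c_n = (1/pi) Int dt' Im G(t') / t'^(n+1).
import numpy as np
from scipy.integrate import quad

T_THR = 4 * 0.0195                       # two-pion threshold (GeV^2)

def im_G(tp):                            # toy rho-like spectral function
    return 0.01 / ((tp - 0.60) ** 2 + 0.01)

def taylor_coefficient(n, t_max=50.0):
    val, _ = quad(lambda tp: im_G(tp) / tp ** (n + 1), T_THR, t_max)
    return val / np.pi

c = [taylor_coefficient(n) for n in range(4)]
r2 = 6.0 * c[1] / c[0]                   # <r^2> = 6 G'(0)/G(0) for this toy case
print("c_n =", np.round(c, 4), "  <r^2> ~", round(r2, 3), "GeV^-2")
```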
Discussion
DIχEFT enables first-principles dynamical calculations of the isovector two-pion spectral functions with controlled uncertainties and results in good agreement with empirical amplitude analysis. Together with a minimal effective-pole parametrization of the high-mass isovector and isoscalar states, the method provides an accurate dispersive description of the nucleon FFs up to momentum transfers |t| ∼ 1 GeV^2 and above. The method is predictive in the sense that the dynamical input is provided by chiral dynamics and e+e− annihilation data, and no fitting of nucleon FFs is performed. This represents major progress in the theory of nucleon FFs at low momentum transfers.
Our results provide a FF parametrization with exact analyticity in t and can be used for theoretical or empirical studies in which this property is essential: (a) Determination of the peripheral charge and magnetization densities in the nucleon [28]; (b) extraction of the proton charge radius from ep elastic scattering data; (c) calculation of two-photon-exchange corrections in ep elastic scattering.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177. This work was also supported by the Spanish Ministerio de Economía y Competitividad and European FEDER funds under Contract No. FPA2016-77313-P.
[Figure caption, partial] ... [3]. Black dots: Data of the MAMI A1 experiment [4,5]. Green dots: Lattice QCD results from Ref. [27]. | 3,855.6 | 2018-03-26T00:00:00.000 | [
"Physics"
] |
Analysis of Stadium Operation Risk Warning Model Based on Deep Confidence Neural Network Algorithm
In this paper, a deep confidence neural network algorithm is used to design and deeply analyze a risk warning model for stadium operation. Many factors, such as video shooting angle, background brightness, diversity of features, and the relationships between human behaviors, make feature attribute-based behavior detection a focus of researchers' attention. To address these factors, researchers have proposed methods to extract human behavior skeleton and optical flow feature information from videos. The key to the deep confidence neural network-based recognition method is the extraction of the human skeleton: the skeleton sequence of human behavior is extracted from a surveillance video, where each frame contains the 18 joints of the human skeleton and a confidence value estimated for that frame, and a deep confidence neural network model is built to classify dangerous behavior based on the obtained skeleton feature information combined with the time vector in the skeleton sequence; the danger level of the behavior is determined by setting corresponding threshold values. Compared with the spatiotemporal graph convolutional network, the deep confidence neural network uses different feature information: it establishes its model based on human optical flow information, combined with temporal relational inference over video frames. The key to the temporal relationship network-based recognition method is to extract some frames from the video, in an orderly or random way, and feed them into the temporal relationship network. In this paper, we use several methods for comparison experiments, and the results show that the recognition method based on skeleton and optical flow features is significantly better than algorithms based on manual feature extraction.
Introduction
People's fitness and health needs and the requirements of implementing the national fitness strategy for all are interdependent. The implementation of a health strategy requires a shift from being disease-centered to health-centered [1]. To achieve this goal, we must actively promote the in-depth integration of national fitness and national health, so that health knowledge and active participation in physical activity become the general quality and ability of the people, and give full play to the unique advantages of sports in health promotion, disease prevention, and rehabilitation [2]. The rapid development of modern information technology is bringing new opportunities for the development of sports: using the Internet, Internet of Things, big data, cloud computing, artificial intelligence, and other modern information technologies combined with national fitness, an integrated online and offline public service system for national fitness can be created to provide more convenient, efficient, and accurate sports services for community residents. The material and cultural aspects have greatly improved, which provides the necessary material basis and broad developmental space for the comprehensive development of national fitness [3]. The number of people who regularly participate in physical exercise, the level of residents' sports consumption, and the healthy life expectancy of the population increase substantially year by year.
Over the years, many deep-rooted problems and contradictions have remained in the development of grassroots national fitness. The problem of unbalanced and inadequate development between urban and rural areas is relatively prominent, and there is still a big gap between the continuously improving basic public sports service system and expanding supply of quality sports resources, and the expectations and requirements of the people [4]. Awareness of innovation in fitness for all is still not strong, with insufficient motivation and a lack of service capacity. The total number of public sports facilities is insufficient and their utilization rate is not high, and the problem of "where to go for fitness" remains difficult to solve. In general, the construction of community smart sports systems has not been fast, and their actual application is still at the exploratory pilot stage [5]. For Anhui Province, although a large number of modern information technologies have been brought into community smart sports, there is still a big gap between the requirements that smart cities and smart communities place on the community platform and the goal of coupling modern information technology with community sports management services. Together with inherent problems such as imperfect institutional mechanisms, lagging construction of venues and facilities, and a shortage of talent, the overall level of community sports services is in urgent need of improvement [6].
In surveillance systems in different settings, the detection and identification of dangerous behavior need to be done quickly so that the behavior can be intervened upon and resolved. In addition to poor video quality, dangerous behaviors can be present in public places at any time of the day. Therefore, a fast and accurate recognition algorithm is needed to detect and identify dangerous behaviors across scenarios. Video data can be captured by video surveillance devices, including cameras inside and outside buildings, electronic filming devices for traffic violations, and police law-enforcement recorders, among others. As one of the special categories of human behavior recognition, dangerous behavior detection requires special optimization and processing because of the characteristics of the settings it is applied in. First, the places where dangerous behavior detection is applied are usually schools, prisons, hospitals, and other places with strict public order. Dangerous behavior detection therefore has high real-time requirements: the behavior must be handled in as short a time as possible regardless of the scenario, which places higher demands on the complexity of the algorithm. Second, dangerous behaviors are associated with multiple human objects, and their activity patterns are very complex because of the strong interactions between the human objects. It is usually difficult to identify, track, and recognize these behaviors one by one, which is the focus and difficulty of the research.
Related Work
Wang et al. sorted out the key contract terms and possible risks that need to be focused on, according to many EPC contracts as well as some problems that project managers often encounter during project progress, to provide a basis for managers to perform contract risk management [7]. Martini et al. proposed a case-based evaluation method. This method makes the quantitative analysis of risk assessment more accurate and objective, and the model has achieved good results in five cases. The bidding cost of EPC projects is high, accounting for about 5% of the total contract cost, and good risk management in the bidding stage can help the general contractor save costs and reduce the pressure of risk management in the later contract negotiation and performance stages [8]. Zhang et al. pointed out that the study of organizational structure mainly refers to the composition of the organization and the relationships between its members [9]. The study of clubs mainly concerns their morphological types and business models. Sports associations usually refer to community sports as life-circle sports: sports activities that are performed through people's interconnection in their daily lives within a common geographical area and through the common use of the living environment and facilities, thereby generating and consolidating community feelings [10]. In addition, there are two main types of sports organizations in society: administrative and public-service. From the experience of community sports construction in developed countries, the funding for community sports mainly comes from government input, but families, enterprises, lotteries, and other channels are also gradually becoming important sources [11][12][13][14].
Human behavior recognition plays an important role in human-person and human-object interaction. Human behavior is related to a person's identity, personality, and psychological state, so it is difficult to extract the relevant information directly [15,16]. Recognition of human behavior is one of the main topics of research in the fields of computer vision and machine learning. The recognition of human behavior and the applications of this research (including video surveillance systems, human-computer interaction, and robotics for human behavior recognition) require multiple systems to support each other. Although much of the current research recognizes human behavior based on frame sequences in video, recognizing human behavior from still images remains a daunting task [17]. Currently, most of the research in human behavior recognition is related to facial expression recognition or pose estimation techniques. Dorie et al. summarized methods for recognizing human behavior from still images and classified them into two main categories based on the level of abstraction and the type of features used [18]. Huan et al. first introduced a spatiotemporal map representing human behavior. Then, a clustering algorithm is used to construct the input video as a spatiotemporal map [19]. Due to the nonrigidity of the constructed 3D spatiotemporal maps and the inherent differences between the temporal and spatial dimensions, traditional 3D feature analysis methods cannot be applied to spatiotemporal maps. Therefore, Poisson's equation is used to enhance the significance of the local spatiotemporal features and derive their directional features. The input video is segmented into spatiotemporal maps using clustering [20][21][22][23].
These segmented regions are then classified using a classifier and compared with traditional contour classification methods [24][25][26]. Unlike previous contour-based methods, the proposed shape-based classification method does not require removing the background or specifying a particular scene.
In the first layer, a clustering feature vector based on joints is introduced to form the initial classes by clustering according to the joints with the highest relevance. Since different sequences of the same action are grouped into different clusters, this facilitates handling high intraclass variance. In the second layer, only the relevant joints in a specific cluster are used for feature extraction, thus enhancing the effectiveness of the features and reducing the computational effort. In this paper, we propose a target detection algorithm applied to high-resolution aerial images. We first describe the motivation of the algorithm, which is to use a segmentation-based rather than a regression-based approach to better predict the rotated rectangular boxes of targets in aerial images, and to use a training data enhancement method based on subimage block synthesis to deal with the need for cropping and the data imbalance in aerial images. Then, the structure and flow of the network are described, and each part of the algorithm is described in detail. Finally, the algorithm is fully compared and analyzed in the experimental section to prove its effectiveness.
Improved Deep Confidence Neural Network
Algorithm. The non-maximum suppression (NMS) step sorts all the results according to their confidence scores from largest to smallest and takes out the result with the highest confidence. Then, the next detection result with the highest confidence level is taken, and the previously mentioned operation is continued until all detection results are processed. Positive and negative samples are judged based on the size of the IoU between the recommended boxes and the true value. If the IoU is greater than 0.5, the box is considered a positive sample, and if it is less than 0.5, it is considered a negative sample. However, an IoU threshold of 0.5 only means that there will be many target frames of lower quality. If the IoU threshold is simply increased, it will lead to a significant reduction in the number of positive samples, which does not improve the detection performance. To improve the quality of target frames, Cascade Mask RCNN (region-based CNN) proposes a cascaded network based on the Mask RCNN algorithm to continuously improve the quality of target frames. As shown in Figure 1, the network F for extracting features, the RPN (Faster RCNN) network, and the network of the first step (branch M1 for the instance segmentation network, branch C1 for the classification network, and branch B1 for the target frame regression network) are the same as in the Mask RCNN algorithm. Therefore, the Cascade Mask RCNN is equivalent to cascading two additional output networks (M2, C2, B2, M3, C3, and B3) on top of the Mask RCNN algorithm. The recommended frames in each later step of the Cascade Mask RCNN use the results of the regression optimization of the previous step, so the quality of the target frames is gradually improved.
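A minimal sketch of the confidence-sorted NMS step described above is shown below; the axis-aligned box format and the 0.5 IoU threshold are assumptions for illustration.

```python
# Greedy NMS: keep the highest-scoring box, drop boxes that overlap it too
# much (IoU above the threshold), and repeat on the remainder.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    order = np.argsort(scores)[::-1].tolist()   # sort by confidence, descending
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))                       # -> [0, 2]
```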
Under a given sample size, the confidence level and accuracy are mutually constrained. The higher the confidence level, the lower the accuracy; conversely, the higher the accuracy, the lower the confidence level. The confidence level determines the size of the confidence interval. If the confidence level is very high (e.g., close to 1), the confidence interval will be very wide. In this case, no matter how the sampling is done, the interval estimate obtained will almost always contain the true value to be estimated. However, because the range is too large, the estimated interval loses its meaning (the accuracy is too low).
On an image, the convolution operation uses fixed-size convolution kernels to scan the input image. As shown in Figure 1, a weight matrix of the same size extracts the pixel matrix of the centroid and neighborhood points of each pixel, and the convolution output values are obtained by combining the feature vectors of the pixels with the parameter vectors of the convolution kernel via an inner product. When extending the convolution operation from 2D images to a graph of arbitrary structure, neighborhood and weight matrices can be defined for arbitrary nodes. In the process of constructing the deep confidence neural network, the temporal edges of the graph connect the same key points in two adjacent frames. Therefore, the confidence neural network can be extended to a deep confidence neural network. The average coordinates of all joints in the skeleton are taken as its center of gravity, as shown in (1). This strategy is inspired by the fact that the motion of human body parts can be broadly categorized as concentric and eccentric motions. The deep confidence neural network first builds a confidence neural network on a single frame of an image. On a single frame at time T, there exist N joint nodes V and a set of skeleton edges E_S(T) connecting pairs of joints (i, j) ∈ H within that frame. According to the definition of the convolution operation on 2D natural images or feature maps, both the node set V and the edge set E can be considered as 2D meshes. The output feature map of the convolution operation is likewise 2D. Based on the deep confidence neural network, a behavioral model is built for the spatial-temporal structure of the skeleton sequence. By detecting the behavior in the input video, a spatiotemporal map is constructed on the skeleton sequence. Multilevel spatiotemporal graph convolution (ST-GCN) is performed on each frame to generate higher-level feature maps. Then, a standard SoftMax classifier assigns them to the corresponding hazard classes. The whole model is trained by backpropagation.
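A small sketch of the single-frame graph convolution discussed here (an ST-GCN-style operation) is given below; the 3-joint toy skeleton, random features, and weight matrix are placeholder assumptions.

```python
# Single-frame graph convolution: features of each joint are aggregated
# over its skeleton neighbours using the symmetrically normalized adjacency
# matrix (with self-loops) and then projected by a learnable weight matrix.
import numpy as np

def graph_conv(X, A, W):
    """X: (V, C_in) joint features, A: (V, V) adjacency, W: (C_in, C_out)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-connections
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt        # Lambda^{-1/2}(A+I)Lambda^{-1/2}
    return A_norm @ X @ W

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # 3-joint chain skeleton
X = rng.normal(size=(3, 4))                               # 4 input channels per joint
W = rng.normal(size=(4, 8))                               # project 4 -> 8 channels
print(graph_conv(X, A, W).shape)                          # (3, 8)
```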
The intra-body connections of the joint nodes in a single frame are represented by the adjacency matrix A, and the connections between video frames are represented by the identity matrix I. In the single-frame case, the output of ST-GCN is given by (2), according to the spatial configuration partitioning strategy. The input feature map is represented as a (C, V, T)-dimensional tensor in the spatiotemporal case. The graph convolution is achieved by performing a standard 2D convolution and multiplying the resulting tensor, along the second (node) dimension, with the normalized adjacency matrix Λ^{-1/2}(A + I)Λ^{-1/2}. With this adjacency matrix decomposed according to the partitioning strategy, (2) can be transformed accordingly.

We train the network to predict the IoU between the target frame and the matching annotation to improve the confidence level, so that high-quality target frames receive higher confidence scores. Like the other network branches in the detector, the IoU estimation branch consists of three fully connected layers. The middle fully connected layer has a dimension of 1024, and the final output dimension of the IoU estimation network is 1. When we obtain the recommended frames from the RPN (or from the previous stage in the cascade structure), we extract the features of these frames by RoI Align. Then, the IoU between the recommendation frame and the labeled truth frame is calculated and taken as the truth value for the IoU estimation network. Then, we feed the features into the IoU estimation branch and train it with the IoU truth values. However, the IoU between most of the recommendation frames and the true value frames is low. This uneven distribution of training data would lead to the IoU estimation network being dominated by a large number of low-IoU samples. In this chapter, an IoU focal loss function is therefore proposed for training the IoU estimation network.

Both the foreground/background classification branch and the category classification branch consist of three fully connected layers. The middle fully connected layer has a dimension of 1024. The final output dimension of the foreground/background classification branch is 1, and the output dimension of the category classification branch is 80, i.e., the same number of categories of interest as in the MS COCO dataset. To train the foreground/background classification, we sample positive and negative recommendation samples in the ratio of 1 : 3 based on the IoU of the recommendation frame and the truth frame. The features of the recommendation box region are then extracted using RoI Align and fed into the foreground/background classification branch, which we train with BCELoss (the binary cross-entropy loss function). For category classification training, we use only positive samples as training samples. Since the IoU thresholds of the training samples for category classification do not need to be as strict as those for foreground/background classification, we resample the positive samples with a slightly smaller IoU threshold. Although their IoU is slightly lower, they still contain enough features for classification.
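For completeness, a minimal sketch of the binary cross-entropy loss used for the foreground/background branch and of the categorical cross-entropy used for the class branch is given below; the toy probabilities and labels are placeholders.

```python
# BCE for the 1-output foreground/background branch and categorical
# cross-entropy for the 80-way category branch (here with 3 toy classes).
import numpy as np

def bce_loss(p, y, eps=1e-7):
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

def cross_entropy_loss(probs, labels, eps=1e-7):
    probs = np.clip(probs, eps, 1.0)
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

print(bce_loss(np.array([0.9, 0.2, 0.7]), np.array([1, 0, 1])))
print(cross_entropy_loss(np.array([[0.7, 0.2, 0.1],
                                   [0.1, 0.8, 0.1]]), np.array([0, 1])))
```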
This results in more training samples and greater robustness. We use the cross-entropy loss function to train this network branch. The confidence level can be improved in three ways. Separating the classification task frees each category of interest from competing with a background region that is approximately 240 times larger than the training sample. The estimated IoU helps meet the desired confidence property, i.e., higher-quality target frames (higher IoU) should have higher confidence scores (Figure 2). The cross-execution makes the confidence-related network branches use the features of the corresponding regressed target frames rather than the target frames before regression. With these three improvement strategies, we obtain three confidence scores, namely the estimated IoU, the foreground/background confidence score, and the category confidence score. Then, we use the geometric mean of the three confidence scores as the final improved confidence score, defined as follows: f(fg) is the confidence score of the foreground probability for each target frame, f(cat) is the confidence score of each category probability, M_fg/bg is the prediction of the foreground/background classification branch, M_fg/cat is the prediction of the category classification branch, δ(M_fg/bg) is the sigmoid function applied to it, and S(M_fg/cat) is the SoftMax function applied to it. A_prep is the predicted IoU, whose range of values is not explicitly limited, so we set its minimum value to 0.01 by the maximum function [20]. With these methods, the improved confidence is more reliable than the original one and achieves better results in both target detection and instance segmentation. The reasonableness of the selection of the research object is directly related to the correctness of the research conclusions.
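The defining equation of the improved score is not displayed in this excerpt; a sketch consistent with the description (geometric mean of the floored predicted IoU, the sigmoid foreground score, and the softmax category score) is given below, with placeholder branch outputs.

```python
# Improved confidence: geometric mean of (i) predicted IoU floored at 0.01,
# (ii) sigmoid foreground probability, (iii) softmax category probability.
import numpy as np

def improved_confidence(iou_pred, fg_logit, cat_logits, cat_index):
    iou_score = max(iou_pred, 0.01)
    fg_score = 1.0 / (1.0 + np.exp(-fg_logit))          # delta(M_fg/bg)
    cat_probs = np.exp(cat_logits - np.max(cat_logits))
    cat_probs /= cat_probs.sum()                        # S(M_fg/cat)
    return (iou_score * fg_score * cat_probs[cat_index]) ** (1.0 / 3.0)

print(improved_confidence(0.82, 2.0, np.array([0.3, 2.5, -1.0]), cat_index=1))
```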
The selection of the research object should not only be feasible (data can be collected) but also consider the typicality and representativeness of the field in which it is located. The questionnaire was based on a 5-point Likert scale; the respondents were asked to judge the degree of influence of the indexes on the risk to the runners during the race according to the given evaluation scores and criteria and to select the corresponding scores.
Risk Warning Design for Stadium Operation.
The bidding cost of Engineering Procurement Construction (EPC) projects can generally reach about 5% of the total project cost, and the bidding cost of EPC projects for stadiums may reach 8%-10% of the total project cost. This is mainly determined by the design difficulty of stadium projects. The design difficulty of stadium projects is reflected not only in the high requirements of appearance design but also in the high requirements of safety and practicality. First, the number of spectators in stadiums is usually tens of thousands of people, so there are strict requirements for fire prevention and firefighting, admission of facilities, weight-bearing of stands, evacuation of personnel, etc. Second, the view of the audience should be considered when setting up spectator seats, and the requirements of the opening ceremony should be considered when setting up the rostrum and torch stand. Finally, stadium projects not only need to meet the design standards of civil construction but also have their own design standards for the various competition venues and competition appliances, with high precision requirements, which eventually need to be accepted by the relevant department of the project country or even the organizing committee of the upcoming competition. Therefore, if the general contractor fails to meet the design requirements for any reason, they will face large losses.
Large stadiums are usually local landmarks, so the owner usually requires the general contractor's design to be both innovative and recognizable. Such a design usually leads to excessive construction difficulty and the need to choose new construction techniques, which leaves the general contractor unable to effectively identify and respond to possible problems in construction due to lack of experience. In addition, general contractors currently tend to lack experience in the construction of overseas stadiums, so poorly considered design and construction organization plans occur, and owners are often slow to make decisions due to their own lack of experience in the construction of large-scale projects, as shown in Figure 3.
Contract risk refers to the risk directly related to contract bidding, negotiation, conclusion, and performance, which mainly consists of bidding risk, contract condition risk, contract price risk, and contract management risk. The bidding risk mainly comes from the general contractor's inadequate preparation for and lack of seriousness about the bidding work. Generally, the bidding cost of overseas EPC projects can reach about 5% of the total contract cost, and the bidding cost of sports stadium projects is higher due to their difficult design and high design requirements. If the general contractor is found to be unable to perform the contract, the general contractor will face huge losses and compensation. Although the owner and the general contractor share the same goal, they represent different interests. When the contract is not perfect and clear, both parties will inevitably interpret it in their own favor, which easily leads to disputes. The owner will use their advantageous position in drafting the contract to add a large number of clauses unfavorable to the contractor, which destroys the balance of the contract and may even affect the recovery of claims and retainage, bringing risks to the general contractor. Contract price risk is mainly generated by the fixed lump-sum contract of the EPC mode. The FIDIC Silver Book only provides for the adjustment of the contract price under force majeure, changes in law, the owner's exercise of the right to change, etc. If no additional price adjustment clause is added, the general contractor will bear greater risk. The contract management risk is mainly generated by an imperfect contract management system, an insufficient level of contract management personnel, and a lack of attention to contract management by project managers. An insufficient level of contract management will lead to confusion and loss of contract files, which will affect the normal conduct of the contract process and the claim work at a later stage. Legal risk is caused by the lack of a sound legal system in the host country, inconsistent law enforcement standards, and government interference in the judiciary, which can disrupt the performance of the contract. While some countries have strict labor access and environmental protection systems or have strict requirements for overtime and legal holidays, other labor systems may also increase the performance costs of the general contractor. Social risks are mainly caused by differences in culture, language, work habits, and local social security. Different cultural and religious backgrounds and work habits may easily lead to friction and conflicts between Chinese and foreign employees and may even cause local xenophobia or excessive reactions from local governments. Unstable social security will not only damage employees' personal and property safety and reduce their sense of security but may also lead to theft of project equipment and damage to materials. Natural risks cover the severe climate, geological factors, hydrological conditions, and sanitary conditions during the contract life cycle, as shown in Figure 4.
Deep Confidence Neural Network Algorithm Model Performance Results.
Training directly with the positive and negative sample sets described previously will result in severe data imbalance, as the targets are very sparse and scattered in the aerial images. Performing data resampling can slightly mitigate this problem but is not very efficient. This chapter therefore proposes a data enhancement method using synthetic images as an alternative solution that not only achieves a balance between positive and negative samples but also increases the diversity of the training data, resulting in more efficient network training. We randomly place a subimage block from the positive sample set on top of a randomly chosen larger subimage block from the negative sample set at an arbitrary position to obtain an augmented new training sample. In more detail, for a positive sample subimage block of a training image whose side length is less than 800 pixels (i.e., 400 or 200 pixels), we randomly select a negative sample subimage block of larger size from that image and then place the positive sample at a random position on top of the negative sample subimage block, thus synthesizing a new training sample with a different background, as shown in Figure 5. In this paper, we determine whether the previously mentioned image synthesis data enhancement operation is performed on each positive sample subimage block with a random probability p.
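A minimal sketch of this image-synthesis augmentation is shown below; the block sizes, the probability p, and the uint8 image arrays are illustrative assumptions (in practice the annotation boxes would have to be shifted along with the pasted block).

```python
# With probability p, paste a positive-sample sub-image block at a random
# position inside a larger negative-sample block to form a new training image.
import numpy as np

def synthesize(pos_block, neg_block, p=0.5, rng=None):
    rng = rng or np.random.default_rng()
    out = neg_block.copy()
    if rng.random() > p:                      # skip augmentation with prob. 1-p
        return out, False
    ph, pw = pos_block.shape[:2]
    nh, nw = neg_block.shape[:2]
    y = rng.integers(0, nh - ph + 1)          # random top-left corner
    x = rng.integers(0, nw - pw + 1)
    out[y:y + ph, x:x + pw] = pos_block
    return out, True

pos = np.full((200, 200, 3), 255, dtype=np.uint8)   # toy positive block
neg = np.zeros((800, 800, 3), dtype=np.uint8)       # toy negative block
img, pasted = synthesize(pos, neg, p=1.0)
print(img.shape, pasted)
```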
The positive and negative sample ratios can be further adjusted. In the experiments, the deep confidence neural network algorithm in this paper achieves better performance.
As shown in Figure 6, the MAP of the deep confidence neural network algorithm without AC and BBC is 73.95, the MAP with AC is 74.08, and the MAP with BBC is 74.14. When AC and BBC are combined, the MAP of the detection algorithm is 74.15. Thus, regional connectivity improves performance by 0.13%, target frame consistency by 0.19%, and the combination of both by 0.2%. Both methods improve performance in almost all categories, but the improvement is not very significant. This is because both methods improve performance by reducing the confidence scores of unreliable rotating target frames, but most rotating target frames are reliable and the confidence scores of these frames do not change, so there is some but not significant performance improvement. Since the impact of both methods is the same, that is, reducing the confidence level of unreliable detection results, the improvement from combining the two methods is smaller than the sum of the individual improvements.
To evaluate the effectiveness of the proposed mechanism of ignoring incomplete targets (IPIO) and the image synthesis-based training data enhancement (IS), we performed ablation experiments.
These results are tested on the DOTA validation set only, at a doubled single-scale scaling. It is worth noting that we only trained the network on the DOTA training set. The baseline detection algorithm without IPIO and IS has a MAP of 70.56. When we train with IPIO, the detection algorithm has a MAP of 72.5, which is a 1.94% improvement. We believe that this significant improvement is mainly because the network is not confused by incomplete targets with unclear semantic information during the training process, and therefore, the network can be trained better. When we also use IPIO in the test, the MAP improves slightly, from 72.5 to 72.63. This is because the number of incomplete targets in the test set is much smaller than the number of complete targets, so the impact is not significant and only a small improvement is observed. When we add the IS mechanism, the MAP can be further improved significantly, by 1.52% to 74.15. The IS mechanism is effective because the enhancement of the training data based on image synthesis improves the diversity of the training data and results in fewer false positives in the background region.
Risk Warning Results for Stadium
Operations. The risk response system usually consists of an expert consultation system and a general case base. The risk response system is connected to the expert consultation, which enables the efficient and convenient use of the case base and expert consultation, and provides timely risk response strategies for amateur runners or race managers, etc. The system stores routine cases related to marathon risks in the computer system to form a general expert database, and when a risk problem arises, relevant cases can be called up according to the nature, category, and relevance of the event to provide coping strategies and ideas for solving the risk. In addition, when an unconventional risk arises, the experts in the system are called upon and timely consultation is completed through the Internet to find risk response strategies and solve risk problems, while the consultation results are saved in the expert database for future use, forming a risk response cycle system.
Information openness, timeliness, and reliability are prerequisites for the prevention and disposal of risk events and play a key role in the process of solving and resolving risks. The information release system mainly reports the nature, scale, and degree of impact of risks to the management promptly, so that the relevant departments can address them promptly and make preliminary judgments based on the factual situation of the risks to take timely and effective countermeasures. In addition, the risk issues the marathon may face and the risk treatment results should be announced to the participants and related personnel promptly to reduce unnecessary panic and public opinion pressure, as shown in Figure 7.
Because of the high pressure and high risk of the logistics and transportation of this project, the project team made the following responses: first, purchase full insurance for all goods and vehicles and transfer the risks during transportation to the insurance company; second, plan a material storage area near the construction site, where some materials in high demand are stored on-site to prevent material shortages due to poor logistics; third, to avoid the risks of low customs clearance efficiency and high taxes and fees in Laos, the project team strengthened its cooperation with the Lao Prime Minister's Office. The connection with the government brought administrative support from the government, accelerated the efficiency of entry and exit of people and goods, opened a green channel for logistics, and exempted some taxes and fees. The logistics risk results of the stadium project are shown in Figure 8. The project site lacked municipal facilities, and new power supply and water supply and drainage facilities were required, which would take a long time to provide due to the level of municipal construction in Laos and might affect the start of the project on schedule. The project team communicated with the owner after the inspection and finally reached an agreement that the owner would provide the materials for the facilities and the approval of all procedures, and Y Group would be responsible for the construction. Although this move caused some losses to the general contractor, it ensured the normal start of the project and gained the trust of the owner.
Conclusion
This paper identifies the contract risks of overseas stadium EPC projects comprehensively by conducting in-depth interviews with relevant experts and constructing a risk factor model using grounded theory, to assist general contractors involved in overseas projects. Combining the constructed risk factor model and the risk evaluation model to identify and evaluate the case contract risks, first, the overall contract risks were evaluated using the AHP-fuzzy comprehensive evaluation method; second, eleven key risk factors were screened out using hierarchical total ranking and analyzed using the ISM method; finally, the project's successful experience in dealing with risks was summarized. The training efficiency was improved by including all targets with as few subimage blocks as possible. Synthetic data enhancement using foreground-background subimage blocks solves the problem of many regions being background and is less likely to produce false positives for complex backgrounds. The detection effect is also further improved by ignoring incomplete targets with unclear semantic information. Simultaneously, considering that previous approaches to predicting the target frame orientation in aerial images using regression are plagued by angular periodicity, i.e., small target rotations may lead to large changes in the network output, this paper uses a segmentation-based approach to predict the target frame angle to avoid this problem and obtain a better rotated target frame, which makes the network easier to train and the segmentation-based results more accurate. Extensive experiments show that the algorithm achieves better detection performance on the DOTA database, which proves the effectiveness of the deep confidence neural network algorithm.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Computer Science"
] |
Robust modeling and planning of radio-frequency identification network in logistics under uncertainties
To realize higher coverage rate, lower reading interference, and cost efficiency of radio-frequency identification network in logistics under uncertainties, a novel robust radio-frequency identification network planning model is built and a robust particle swarm optimization is proposed. In radio-frequency identification network planning model, coverage is established by referring the probabilistic sensing model of sensor with uncertain sensing range; reading interference is calculated by concentric map–based Monte Carlo method; cost efficiency is described with the quantity of readers. In robust particle swarm optimization, a sampling method, the sampling size of which varies with iterations, is put forward to improve the robustness of robust particle swarm optimization within limited sampling size. In particular, the exploitation speed in the prophase of robust particle swarm optimization is quickened by smaller expected sampling size; the exploitation precision in the anaphase of robust particle swarm optimization is ensured by larger expected sampling size. Simulation results show that, compared with the other three methods, the planning solution obtained by this work is more conducive to enhance the coverage rate and reduce interference and cost.
Introduction
With the development of information technology, there is an increasing usage of radio-frequency identification (RFID) networks in logistics. 1,2 How to deploy the minimum number of readers for covering all tags in the entire space is known as the radio-frequency identification network planning (RNP) problem, 3 which is one of the fundamental problems in large-scale RFID networks. 4 Coverage, interference, and cost, which are key elements of the RNP problem, are largely influenced by the number and positions of the RFID devices. 5,6 However, uncertainties, such as the radio channel, the antenna read range, and the influence of different materials and objects around tags on readers' identification ability, may exist in the actual RFID network system. These uncertainties bring great influence to the coverage, interference, and cost of the RFID network. Therefore, how to deploy the readers in an intensive way to obtain a higher coverage rate and less interference within a low budget becomes a key issue for the popularization and application of RFID networks in logistics. 1 In the past decades, extensive efforts have been made to model and plan RFID networks under certain environments, [5][6][7][8][9][10] and the coverage problem is one of the fundamental issues in RNP. Gong et al. 5 formulated a model of RNP, in which tag coverage, number of readers, interference, and the sum of transmitted power were considered. Liu and Ji 6 proposed an optimization model of the RFID network system to solve the problem of how to place readers, taking the coverage rate and load balance into consideration. Di Giampaolo et al. 8 developed a simple and effective model, in which the performance indicators consisted of coverage efficiency, overall overlapping, total power, and cost of the network. Tao et al. 10 treated the reader deployment in large-scale RFID systems as a problem of multiobjective combinatorial optimization by taking the coverage, signal interference, and load balance as the optimization objectives and deducing the objective ranges.
The RFID network planning problem has been proved to be NP-hard because of its nonlinearity and complexity. In recent years, evolutionary computation (EC) and swarm intelligence (SI) have become effective tools for solving the RFID network planning problem under certain environments, such as the genetic algorithm (GA), 3,7 the plant growth simulation algorithm (PGSA), 4 particle swarm optimization (PSO), [8][9][10] and the artificial colony algorithm. 11 Yang et al. 7 proposed a GA-based RNP method, which included mapping the RFID network, presenting the problem states using genes and chromosomes, and implementing the mechanisms of individual selection and genetic operation. Simple and effective models of the electromagnetic elements involved in RNP are developed and included in the frame of the PSO algorithm. 8 An improved PSO algorithm based on the genetic algorithm (GA-PSO) was proposed to solve the problem of how to place readers so that the readers can effectively obtain the information of multiple tags. 9 Tao et al. 10 proposed an improved particle swarm algorithm, which can restrict the position change of original and new particles in the iteration process and accelerate the convergence speed of the algorithm, to solve the reader deployment in large-scale RFID systems. A k-coverage model, which is formulated as a multidimensional optimization problem with constraint conditions, is developed to evaluate the network performance, and the PGSA is used to optimize the RFID networks by determining the optimal adjustable parameters in the model. 4 Ma et al. 11 proposed a cooperative multi-objective artificial colony algorithm to find all the Pareto optimal solutions and to achieve the optimal planning solutions by simultaneously optimizing four conflicting objectives (tag coverage, reader interference, economic efficiency, and load balance) in multiobjective RNP. Lin and Tsai 3 proposed a micro-genetic algorithm (mGA) with novel spatial crossover and correction schemes to cope with this constrained three-dimensional reader network planning problem. The mGA was computationally efficient, which allowed a frequent replacement of RFID readers in the network to account for the short turnaround time of cargo storage, and it guaranteed a 100% tag coverage rate to avoid missing the cargo records.
Although the modeling and planning of RFID networks have drawn extensive attention, few studies have considered multiple objectives under uncertainties when making decisions for the RNP problem. Meanwhile, there have been some studies on sensor deployment in wireless sensor networks (WSNs) under uncertainties. Li et al. 12 and Ozturk et al. 13 proposed a probability sensing model for the case in which the detection region of a sensor is uncertain. Vu and Zheng 14 presented a systematic study of the impact of location uncertainty on the coverage properties of WSNs and devised an efficient polynomial algorithm. Vu and Zheng 15 carried out a rigorous study of the impacts of location uncertainty on the accuracy of target localization and tracking and proposed an effective algorithm based on order-k max and min Voronoi diagrams. However, the different properties of WSNs and RFID networks make these approaches, although useful in WSNs, inapplicable to RFID networks. 4 Robust optimization is very important in complicated RFID network planning under uncertainties. To deal with complex uncertainty problems, it is simple and effective to combine intelligent optimization algorithms with robust optimization. [16][17][18][19] However, robust optimization is generally based on Monte Carlo integration, while intelligent optimization algorithms usually rely on cyclic iteration. This straightforward combination requires a larger number of fitness evaluations, which leads to a heavy computational load. Therefore, how to keep the search performance while reducing the computational complexity, in other words, how to improve the search performance with a limited sampling size, is a problem that deserves study.
Uncertainties, such as the radio channel, the antenna read range, and the influence of different materials and objects around tags on readers' identification ability, can be converted to uncertain positions of tags with respect to readers' identification. Therefore, to enhance reliability and stability in logistics, this work focuses on tags' uncertain positions and studies the robust modeling and planning of RFID networks under uncertainties. First, a robust planning model of an RFID network in logistics is built, in which the coverage rate is analyzed on the basis of the probability sensing model of WSNs under uncertainty. Second, a concentric map-based Monte Carlo method is applied to calculate the interference. Third, to enhance the search capability, robust particle swarm optimization (RPSO), which trades off exploration speed against exploitation precision, is put forward to solve the RFID planning problem under uncertainties. Finally, the simulation results indicate that the proposed method possesses a better robust optimization capability.
RFID network planning problem under uncertainties
The key to the RFID network planning problem is the deployment of readers that satisfies multi-objective requirements given the limited range of reader-tag communication. First, as many tags as possible should be identified. Second, reading interference is closely related to the reader collision problem, 20 which may occur when tags are located in the overlapping area of any two readers' interrogation zones and both readers read the tags simultaneously. 3 Consequently, the number of tags located in the overlapping area of interrogation zones should be as small as possible. Third, the smaller the number of placed readers is, the lower the cost is. Therefore, the proposed planning model of the RFID network aims to optimize a set of objectives (tag coverage, reading interference, and economic efficiency) simultaneously by adjusting the control variables of the system (the coordinates of the readers, the number of readers, etc.). In view of uncertainties, which can be converted to uncertain positions of tags with respect to readers' identification, a novel study of the coverage and interference analysis is carried out in this work.
The deployment region of the RFID network system is assumed to be a two-dimensional (2D) square domain containing several tags. The uncertainties are assumed to be arbitrary, so the uncertain area of each tag's position with respect to readers' identification is a circle with radius R_T. A conceptual view of the tags' uncertain positions is shown in Figure 1, where coarse dots indicate tags and small circles are the ranges of the tags' uncertain positions.
Coverage rate
Coverage is the main task of an RFID network. By referring to the probability sensing model for WSNs, 12,15 reader_ij, which denotes the capability of the ith reader to identify the jth tag, is described by equation (1), where R_R is the sensing radius of each reader, R_b is the radius of each tag's uncertainty region, and (x_i, y_i) and (x_j^O, y_j^O) are the coordinates of the ith reader and the jth tag, respectively. Then c_j^O, which denotes the coverage of the jth tag by the deployed readers, is described by equation (2), in which N_R denotes the number of deployed readers. To make the coverage rate comparable across different numbers of tags, f_1 denotes the coverage rate of the RFID network, which is obtained by equation (3) by referring to Gong et al., 5 Liu and Ji, 6 and Di Giampaolo et al. 8 In equation (3), N_b denotes the number of tags.

Interference

Interference mainly occurs in an environment with dense readers, where several readers try to interrogate tags simultaneously. Interference results in unacceptable misreading 21 and failure of information collection. 10 Owing to the uncertainty of the tags' positions, it is complicated to deal with the interference problem by geometric analysis, and it is hard to build an approximate mathematical coverage model. Therefore, the Monte Carlo sampling method is employed here.
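Equations (1)-(3) themselves are not reproduced above, so the following Python sketch only illustrates the idea: it assumes a standard WSN-style probabilistic sensing model (certain identification inside R_R - R_b, no identification beyond R_R + R_b, an exponential decay in between, with illustrative parameters lam and beta) and aggregates per-tag coverage as the probability that at least one reader identifies the tag. All coordinates and parameter values are made up.

```python
import math

def reader_tag_prob(reader_xy, tag_xy, R_R, R_b, lam=0.5, beta=1.0):
    """Assumed probabilistic sensing model: capability of a reader to identify a
    tag whose true position is uncertain within a disc of radius R_b."""
    d = math.dist(reader_xy, tag_xy)
    if d <= R_R - R_b:      # the whole uncertainty disc lies inside the read range
        return 1.0
    if d >= R_R + R_b:      # the disc lies entirely outside the read range
        return 0.0
    # transition zone: identification probability decays with the overrun distance
    return math.exp(-lam * (d - (R_R - R_b)) ** beta)

def coverage_rate(readers, tags, R_R, R_b):
    """f1: average probability that a tag is identified by at least one of the
    deployed readers, normalised by the number of tags."""
    covered = 0.0
    for tag in tags:
        p_missed = 1.0
        for reader in readers:
            p_missed *= 1.0 - reader_tag_prob(reader, tag, R_R, R_b)
        covered += 1.0 - p_missed          # coverage of this tag
    return covered / len(tags)

# tiny usage example with made-up coordinates (metres)
readers = [(10.0, 10.0), (20.0, 20.0)]
tags = [(9.0, 11.0), (25.0, 25.0), (15.0, 15.0)]
print(coverage_rate(readers, tags, R_R=4.0, R_b=0.5))
```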
If the ith reader can identify the sampling site o_j^k, then reader_ik^j = 1; otherwise, reader_ik^j = 0, where k = 1, 2, . . . , K, K is the number of sampling sites, o_j^k, whose coordinate is (x'_j^k, y'_j^k), denotes the kth sampling site within the uncertainty range of the jth tag, and reader_ik^j is the capability of the ith reader to identify that site. To generate the sampling sites, the concentric map method 22 is applied. For a unit circle centered at the origin of the coordinate system, the polar coordinates (r, u) of the sampling sites are given piecewise by equations (4)-(8) according to the signs and relative magnitudes of e_1 and e_2 (for example, one case applies if e_1 > |e_2|, another if e_1 < -e_2 and e_1 < e_2, and the final case applies when e_2 ≠ 0), where e_1 and e_2 are random real numbers in the interval [-1, 1]. The Cartesian coordinate of a sampling site is (x, y) = (r cos u, r sin u). In this work, the Cartesian coordinate of a sampling site of the jth tag is (x, y) = (x_j^O + R_b r cos u, y_j^O + R_b r sin u), because the jth tag is located at (x_j^O, y_j^O) and the uncertainty area of a tag's position is a circle with radius R_b.
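Equations (4)-(8) are not fully legible in this text; as a stand-in, the sketch below uses the well-known Shirley-Chiu concentric mapping, which matches the description of mapping two numbers e_1, e_2 in [-1, 1] case-by-case onto the unit disc. It is an illustration, not necessarily the exact mapping of the cited method.

```python
import math, random

def concentric_map(e1, e2):
    """Map (e1, e2) in [-1, 1]^2 onto the unit disc (Shirley-Chiu concentric
    mapping, used here as a stand-in for equations (4)-(8))."""
    if e1 == 0.0 and e2 == 0.0:
        return 0.0, 0.0
    if abs(e1) > abs(e2):
        r, u = e1, (math.pi / 4.0) * (e2 / e1)
    else:
        r, u = e2, (math.pi / 2.0) - (math.pi / 4.0) * (e1 / e2)
    return r * math.cos(u), r * math.sin(u)   # Cartesian offset inside the unit disc

def sample_tag_position(tag_xy, R_b, e1, e2):
    """Cartesian coordinates of one sampling site inside the uncertainty disc of
    a tag located at tag_xy with uncertainty radius R_b."""
    dx, dy = concentric_map(e1, e2)
    return tag_xy[0] + R_b * dx, tag_xy[1] + R_b * dy

# usage: one random sampling site around a tag at (15, 15) with R_b = 0.5
print(sample_tag_position((15.0, 15.0), 0.5, random.uniform(-1, 1), random.uniform(-1, 1)))
```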
The precision of purely random sampling is low, because e_1 and e_2 are random real numbers in the interval [-1, 1]. A low-discrepancy sampling method, the Korobov lattice, 23 is therefore adopted to enhance the precision of the calculation. Over_j^k, which denotes the reading overlap of the kth sampling site within the uncertainty area of the jth tag, is described by equation (10). Referring to Di Giampaolo et al., 8 f_2, which denotes the average interference, is described by equation (11).
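Since equations (9)-(11) are not reproduced, the following sketch shows one plausible reading of the interference estimate: Korobov lattice points (a rank-1 lattice built from a single generator) replace the random pairs (e_1, e_2), each point is mapped into a tag's uncertainty disc, and a sampling site counts as overlapping when it falls inside at least two readers' interrogation zones. The generator value, the sampling size and all coordinates are illustrative.

```python
import math

def disc_offset(e1, e2):
    # concentric mapping of (e1, e2) in [-1, 1]^2 onto the unit disc (see previous sketch)
    if e1 == 0.0 and e2 == 0.0:
        return 0.0, 0.0
    if abs(e1) > abs(e2):
        r, u = e1, (math.pi / 4.0) * (e2 / e1)
    else:
        r, u = e2, (math.pi / 2.0) - (math.pi / 4.0) * (e1 / e2)
    return r * math.cos(u), r * math.sin(u)

def korobov_points(K, a=3):
    """2-D Korobov lattice: K low-discrepancy points built from a single generator a
    (a and K coprime), rescaled from [0, 1)^2 to [-1, 1)^2 for the concentric map."""
    return [(2.0 * (k / K) - 1.0, 2.0 * ((a * k) % K) / K - 1.0) for k in range(K)]

def interference(readers, tags, R_R, R_b, K=16, a=3):
    """Fraction of sampling sites (over all tags) lying inside the interrogation
    zones of two or more readers (assumed reading of equations (10)-(11))."""
    overlaps = 0
    for (tx, ty) in tags:
        for (e1, e2) in korobov_points(K, a):
            dx, dy = disc_offset(e1, e2)
            site = (tx + R_b * dx, ty + R_b * dy)
            hits = sum(1 for r in readers if math.dist(r, site) <= R_R)
            overlaps += 1 if hits >= 2 else 0
    return overlaps / (len(tags) * K)

# usage with made-up coordinates: the first tag sits in two overlapping read zones
readers = [(10.0, 10.0), (12.0, 11.0)]
tags = [(11.0, 10.5), (25.0, 25.0)]
print(interference(readers, tags, R_R=4.0, R_b=0.5))   # prints 0.5
```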
Cost
The number of readers is a key factor determining the cost of the RFID network. f_3 in equation (12) denotes the RFID network cost, where N_max is the maximum number of readers that can be deployed.
RFID network planning model under uncertainties
The robust planning model of the RFID network is built with the coverage rate, interference, and cost taken into account. Equation (13) is taken as the objective function, where g_1, g_2, and g_3 are the weight coefficients. Any deployed reader must lie in the region of the RFID network; let A be the region of the RFID network, then the feasible region is given by equation (14). The constraints of the RFID network planning model are equations (1)-(3), (10)-(12), and (14).
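Equation (13) is not reproduced above, so the exact weighting is unknown; the sketch below assumes one common convention (reward coverage f_1, penalise average interference f_2 and the normalised reader cost f_3 = N_R/N_max with weights g_1-g_3). The sign convention, the form of f_3 and the weight values are assumptions.

```python
def fitness(f1, f2, N_R, N_max, g=(0.6, 0.25, 0.15)):
    """Assumed form of objective (13): coverage rewarded, interference and cost penalised."""
    g1, g2, g3 = g
    f3 = N_R / N_max          # equation (12), assumed to be the reader-count ratio
    return g1 * f1 - g2 * f2 - g3 * f3

# usage: 92% coverage, 0.08 average overlap, 12 of at most 20 readers deployed
print(fitness(0.92, 0.08, 12, 20))
```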
Robust particle swarm optimization algorithm
From section ''RFID network planning model under uncertainties,'' it can be seen that the RFID network planning problem under uncertainties is essentially a continuous problem. In this work, PSO is applied to the RFID network planning problem, because PSO offers ease of implementation, high solution quality, computational efficiency, and fast convergence 6 and also exhibits a good ability to solve continuous optimization problems. 5 From section ''Interference,'' it can be seen that there is a trade-off between accuracy and speed in the interference calculation: a larger number of sampling sites improves the calculation accuracy but lowers the speed, and vice versa. To balance accuracy and speed, a novel robust particle swarm optimization algorithm is proposed.
Particle swarm optimization (PSO) algorithm
PSO is a population-based heuristic search technique in which a group of particles searches for the best solution by iteration. The iteration formulations 24 are

V_ld(t + 1) = v·V_ld(t) + c_1·R_1·(X^pb_ld - X_ld(t)) + c_2·R_2·(X^gb_d - X_ld(t))   (15)
X_ld(t + 1) = X_ld(t) + V_ld(t + 1)   (16)

where d = 1, 2, . . . , D, with D being the number of particle dimensions; t is the iteration number, t = 1, 2, . . . , T, with T being the maximum iteration; v is the inertia weight; c_1 and c_2 are the acceleration constants; R_1 and R_2 are random numbers between 0 and 1; X^pb_l is the best solution found by the lth particle; X^gb is the best solution of the swarm; and X^pb_ld and X^gb_d are the dth dimensions of X^pb_l and X^gb, respectively. In addition, the velocity V_ld is limited by the maximum velocity V_max,d.
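A minimal sketch of one velocity/position update per equations (15)-(16); the inertia weight and acceleration constants match the values quoted later for RPSO_MC and RPSO_SC, while the velocity limit v_max is illustrative.

```python
import random

def pso_step(X, V, X_pb, X_gb, w=0.729, c1=1.49445, c2=1.49445, v_max=2.0):
    """Update one particle: X, V and X_pb are length-D lists, X_gb is the swarm best."""
    for d in range(len(X)):
        r1, r2 = random.random(), random.random()
        V[d] = w * V[d] + c1 * r1 * (X_pb[d] - X[d]) + c2 * r2 * (X_gb[d] - X[d])
        V[d] = max(-v_max, min(v_max, V[d]))   # clamp to the maximum velocity V_max,d
        X[d] += V[d]
    return X, V

# usage on a 3-dimensional particle
X, V = pso_step([1.0, 2.0, 3.0], [0.0, 0.0, 0.0],
                X_pb=[1.5, 1.5, 2.5], X_gb=[2.0, 1.0, 2.0])
print(X, V)
```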
Basic idea of RPSO
In the early stage of the iterations, the PSO algorithm should explore the search space to find the optimum region quickly. In the late stage of the iterations, PSO should exploit the optimum region to locate the optimal solution with higher precision. The sampling size is closely related to the exploration speed and the exploitation precision: the smaller the sampling size is, the faster the exploration; the larger the sampling size is, the higher the exploitation precision.
Following this line of thinking, a robust particle swarm optimization is proposed. To be specific, a set of sampling sizes S_N = {n_1, n_2, . . . , n_N} is formed. Within S_N, the smaller sampling sizes form a subset S^s_N, while the larger sampling sizes form a subset S^l_N. In the early stage of the iterations, the selection probabilities of the sampling sizes in S^s_N are larger than those in S^l_N, and the opposite holds in the late stage of the iterations. In this way, the exploration speed in the early stage and the exploitation precision in the late stage can both be ensured. In the early stage of the iterations, the sampling sizes in S^l_N are still given small selection probabilities so that some high-reliability solutions are obtained, which reduces blind exploration; in the late stage, the sampling sizes in S^s_N are given small selection probabilities so that some less reliable solutions are obtained, which helps RPSO escape local extrema.
In this work, the sampling sizes in S_N are symmetrical about n_av = (n_1 + n_N)/2.
Method of sampling size selection
Here, an asymmetric 2D sigmoid function is designed to set the selection probability of each sampling size in S_N. u(n, t), described in equation (17), denotes the probability of selecting the sampling size n in the tth iteration; u_1(n) and u_2(t) in equation (17) are given by equations (18) and (19):

u(n, t) = B[u_1(n)·u_2(t) + 1]/2   (17)
u_1(n) = 2/(1 + exp[-A_1(n - n_av)]) - 1   (18)
u_2(t) = 2/(1 + exp[-A_2(t - t_a)]) - 1   (19)

where B ∈ (0, 1), A_1, A_2 ∈ (0, +∞), t_a is an integer, t_a ∈ ((1 + T)/2, T), and T is the maximum iteration of the RPSO algorithm. It can be seen that u_1(n) and u_2(t) are both derived from the sigmoid function. Here u_1(n) ∈ (-1, 1), u_2(t) ∈ (-1, 1), and u(n, t) ∈ (0, 1). u_1(n) and u_2(t) are symmetrical about (n_av, 0) and (t_a, 0), respectively. The changes of u(n, t) with n and t are adjusted by tuning the parameters A_1 and A_2. In addition, the sum of the selection probabilities of the sampling sizes in S_N should be 1, which fixes the value of B.

Expected sampling size

The expected sampling size, E(t), which is relevant to the computational complexity of RPSO, is analyzed here.
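The following sketch implements the selection probabilities of equations (17)-(19) and checks that small sampling sizes dominate in the early iterations and large ones in the late iterations. The steepness parameters A_1 and A_2 are illustrative, and the coefficient B is replaced here by an explicit normalisation so that the probabilities sum to 1.

```python
import math

def selection_probs(sizes, t, t_a, A1=0.5, A2=0.5):
    """Selection probability of each sampling size n at iteration t (eqs (17)-(19))."""
    n_av = (sizes[0] + sizes[-1]) / 2.0
    u1 = lambda n: 2.0 / (1.0 + math.exp(-A1 * (n - n_av))) - 1.0      # eq (18)
    u2 = 2.0 / (1.0 + math.exp(-A2 * (t - t_a))) - 1.0                 # eq (19)
    raw = [(u1(n) * u2 + 1.0) / 2.0 for n in sizes]                    # eq (17) up to B
    total = sum(raw)
    return [x / total for x in raw]                                    # normalisation

# early vs. late iterations: small sizes dominate first, large sizes later
sizes = [6, 12, 18, 24, 30]
print(selection_probs(sizes, t=5, t_a=80))
print(selection_probs(sizes, t=95, t_a=80))
```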
Calculation of expected sampling size. E(t) can be derived as follows. The sampling sizes in S_N are symmetrical about n_av; therefore, equation (21) holds. From equation (21), it can be seen that the change of E(t) with the iteration t depends entirely on u_2(t); that is, E(t) is a monotonically increasing function of t, just like u_2(t).
Change of expected sampling size. Let n^s_av and n^l_av denote the average sampling sizes in S^s_N and S^l_N, respectively, and let n^s_m denote the maximum sampling size in S^s_N. If A_2 ≫ ln 2/(T - t_a), then exp[-A_2(T - t_a)] ≪ 1/2, u_2(T) ≈ 1, and equation (22) holds because t_a ∈ ((1 + T)/2, T). Similarly, when t ∈ [1, 2t_a - T], u_2(t) ≈ -1 and equation (23) holds. Therefore, E(t) increases monotonically with t from approximately its early-stage value, and E(t) is almost unchanged when t ∈ [1, 2t_a - T].
RPSO for the RFID network planning problem
Here, the proposed RPSO is applied to the RFID network planning problem. The objective function, equation (13), acts directly as the fitness function of RPSO. In the objective function, the coverage of the RFID network is calculated as described in section ''Coverage rate''; the interference is determined by the Monte Carlo sampling method introduced in section ''Interference''; and the cost is counted by equation (12) in section ''Cost.'' The pseudo-code of RPSO is as follows.
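The original pseudo-code listing is not reproduced in this text, so the following Python skeleton only illustrates how the loop described above might be organised: at every iteration a sampling size is drawn from the iteration-dependent probabilities of the previous sketch and passed to a user-supplied fitness evaluator evaluate(position, n_samples). The population size, bounds and choice of t_a are illustrative, not the authors' exact settings.

```python
import random

def rpso(evaluate, D, bounds, sizes, T, M=20, t_a=None,
         w=0.729, c1=1.49445, c2=1.49445):
    """Skeleton RPSO loop; selection_probs() is the helper from the previous sketch."""
    if t_a is None:
        t_a = int(0.8 * T)                      # t_a must lie in ((1 + T)/2, T)
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(M)]
    V = [[0.0] * D for _ in range(M)]
    pbest, pbest_f = [x[:] for x in X], [float("-inf")] * M
    gbest, gbest_f = X[0][:], float("-inf")
    for t in range(1, T + 1):
        n = random.choices(sizes, weights=selection_probs(sizes, t, t_a))[0]
        for i in range(M):
            f = evaluate(X[i], n)               # fitness with the drawn sampling size
            if f > pbest_f[i]:
                pbest_f[i], pbest[i] = f, X[i][:]
            if f > gbest_f:
                gbest_f, gbest = f, X[i][:]
        for i in range(M):                      # standard PSO update, eqs (15)-(16)
            for d in range(D):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return gbest, gbest_f

# usage with a toy fitness that ignores the sampling size
best, best_f = rpso(lambda x, n: -sum(v * v for v in x), D=3, bounds=(-5.0, 5.0),
                    sizes=[6, 12, 18, 24, 30], T=50)
print(best, best_f)
```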
Simulations
In an ultra-high-frequency (UHF) RFID network area of 30 m × 30 m, 100 RFID tags (JY-T9662) are randomly distributed. The JY-T9662 tag, which is made of copperplate paper, is a UHF passive tag operating at 860-960 MHz. The parameters are set according to Table 1, where R_R is set in accordance with Gong et al.; 5 l_1, l_2, b_1, and b_2 are set in accordance with Ozturk et al.; 13 and M and T are the population size and the number of iterations of the optimization algorithms detailed in the following paragraphs, respectively.
To verify the performance of the proposed method, four algorithms are implemented and compared: RGA_MC is a real-coded GA based on the traditional Monte Carlo method, RPSO_MC is PSO based on the traditional Monte Carlo method, RGA_SC is a real-coded GA based on the sampling method introduced in this work, and RPSO_SC is the RPSO described in section ''Robust particle swarm optimization algorithm.'' In all four algorithms, the coverage of the RFID network is calculated as described in section ''Coverage rate,'' and the cost is calculated using equation (12); the interference in RGA_MC and RPSO_MC is determined by the traditional Monte Carlo method, while the interference in RGA_SC and RPSO_SC is calculated by the Monte Carlo method based on the sampling approach presented in this work. M and T of all algorithms are set according to Table 1. In RGA_MC and RGA_SC, uniform crossover and uniform mutation are implemented, with crossover and mutation rates of 0.7 and 0.01, respectively. v = 0.729 and c_1 = c_2 = 1.49445 in RPSO_MC and RPSO_SC. Every chromosome or particle is coded as a 3 × N_max matrix: each reader has three codes, namely, its abscissa, its ordinate, and whether it is deployed. The third code is a real number in the interval (0, 1); if it is greater than 0.5, the reader is deployed, and otherwise it is not. The parameters of the sampling size selection in RGA_SC and RPSO_SC are set according to section ''Case of parameter setting.'' To compare the four algorithms fairly, the computational effort of the sampling method introduced in this work is kept the same as that of the traditional Monte Carlo method: in view of the additional selection-probability calculation for the sampling size in RGA_SC and RPSO_SC, the sampling sizes of RGA_MC and RPSO_MC are both set to 18, which is slightly higher than the expected sampling size in RGA_SC and RPSO_SC.
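As an illustration of the particle/chromosome encoding described above, the sketch below decodes a 3 × N_max matrix into the list of deployed reader coordinates; the candidate positions and deployment codes are made up.

```python
def decode_particle(particle, threshold=0.5):
    """Row 0: abscissas, row 1: ordinates, row 2: deployment codes in (0, 1);
    a reader is deployed when its code exceeds the threshold."""
    xs, ys, flags = particle
    return [(x, y) for x, y, f in zip(xs, ys, flags) if f > threshold]

# usage: 4 candidate readers, of which the 1st and 4th are deployed
particle = [
    [5.0, 12.0, 20.0, 27.0],   # abscissa of each candidate reader
    [6.0, 18.0,  9.0, 25.0],   # ordinate of each candidate reader
    [0.9,  0.2,  0.4, 0.7],    # deployment code: > 0.5 means "deploy"
]
print(decode_particle(particle))   # [(5.0, 6.0), (27.0, 25.0)]
```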
These four algorithms are each executed 50 times. The best and average results are shown in Tables 2 and 3, respectively, where f_exp is obtained by the traditional Monte Carlo method with a sampling size of 1000. Table 4 shows the error Error(f_exp) of the interference and fitness values (Error(f_exp) = |f_exp - f̂_exp|, where f̂_exp is obtained by RGA_MC, RPSO_MC, RGA_SC, and RPSO_SC). The average CPU times spent by these four methods are shown in the last row of Table 3; the unit of CPU time is seconds.
The average fitness value comparisons among these four methods are shown in Figure 4(a), and the average fitness error comparisons are shown in Figure 4(b). The best results obtained by RPSO_SC are shown in Figure 5, where * represents a reader, coarse dots indicate the tags, large circles represent the identification ranges of the readers, and small circles are the uncertain positions of the tags.
From Tables 2 and 3, the best and average results achieved by RGA_SC and RPSO_SC are better than those achieved by RGA_MC and RPSO_MC. RPSO_SC shows better interference and fitness values than RGA_SC, and the CPU time spent by RPSO_SC is clearly less than those spent by the other three methods.
From Table 4, the individual evaluation errors in the best and average results obtained by RGA_SC and RPSO_SC are significantly smaller than those obtained by RGA_MC and RPSO_MC. This can be explained by the expected sampling sizes in the last few iterations: for example, in the 77th-100th iterations, the expected sampling size of RGA_SC and RPSO_SC is between 30 and 31, which is drastically larger than the fixed sampling size of 18 used by RGA_MC and RPSO_MC. In addition, the average fitness error obtained by RPSO_SC is smaller than that obtained by RGA_SC.
From Figure 4(a), the convergence rate of RPSO_SC is faster than those of the other three methods. From Figure 4(b), the fitness errors are clearly influenced by the sampling size: the average fitness error of RPSO_SC is obviously larger than those of RGA_MC and RPSO_MC in the 1st-65th iterations, but it becomes smaller than those of the other three methods after the 70th iteration. From Figure 5, most tags are identified by the readers, and little reading interference occurs among the readers.
To verify the model and method proposed in this work more comprehensively, RFID network planning is also performed with different numbers of tags. In areas of 50 m × 50 m, 50 m × 100 m, and 100 m × 100 m, the values of N_max are 40, 75, and 140, respectively; 250, 500, and 1000 tags, respectively, are randomly distributed; the population sizes are 20, 40, and 50, respectively; the numbers of iterations are 300, 500, and 1000, respectively; the t_a values are 210, 350, and 700, respectively; and the other parameters of the sampling size selection in RGA_SC and RPSO_SC are set according to section ''Case of parameter setting.'' Table 5 shows the average optimization results and average CPU times of these three examples for the four algorithms. Figures 6(a), 7(a), and 8(a) show the average fitness value comparisons among these four methods, while Figures 6(b), 7(b), and 8(b) show the average fitness error comparisons.
From Table 5, it can be seen that, for the different tag quantities, RPSO_SC, which spent the shortest CPU time, is the best among the four algorithms with respect to interference and fitness. From Figures 6-8, in the robust planning of RFID networks with different numbers of tags, the convergence rate of RPSO_SC is faster than those of the other three methods, and the average fitness error of RPSO_SC gradually becomes smaller than those of the other three methods. The average fitness value obtained by RPSO_SC is obviously larger than those obtained by the other three methods from the 44th, 25th, and 119th iterations in Figures 6(a), 7(a), and 8(a), respectively. The average fitness error obtained by RPSO_SC is obviously larger than those obtained by RGA_MC and RPSO_MC in the early iterations.
Conclusion
To sum up, this work builds an RFID network planning model for logistics under uncertainties that can be converted into uncertain positions of tags with respect to readers' identification. The probability sensing model is employed to analyze the coverage rate, and the Monte Carlo method is applied to calculate the interference among readers. To enhance the planning efficiency of an RFID network under uncertainties, a robust particle swarm optimization algorithm is proposed. To reduce the computational complexity and improve the search performance simultaneously, smaller sampling sizes are used in the early iterations and larger sampling sizes in the late iterations. The expected sampling size is analyzed to allow performance comparisons between RPSO_SC and the other algorithms based on the traditional sampling method. Several simulations are executed at different network scales. With respect to coverage, interference, network cost, fitness, CPU time, convergence rate, and fitness error, the proposed approach provides a better planning scheme for an RFID network system in logistics under uncertainties.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural Science Foundation of China (No. 51579143), The Ministry of education of Humanities and Social Science project (Nos 15YJC630145 and 15YJC630059), and the Natural Science | 6,448.4 | 2018-04-11T00:00:00.000 | [
"Computer Science"
] |
Investigating the biomechanical function of the plate-type external fixator in the treatment of tibial fractures: a biomechanical study
Background The design of an external fixator with the optimal biomechanical function and the lowest profile has been highly pursued, as fracture healing depends on the stability and durability of fixation, and a low profile is more desirable for patients. The plate-type external fixator, a novel prototype of an external tibial fixation device, is a low-profile construct. However, its biomechanical properties remain unclear. The objective of this study was to investigate the stiffness and strength of the plate-type external fixator and the unilateral external fixator. We hypothesized that the plate-type external fixator could provide higher stiffness while retaining sufficient strength. Methods Fifty-four cadaver tibias underwent a standardized midshaft osteotomy to create a fracture gap model simulating a comminuted diaphyseal fracture. All specimens were randomly divided into three groups of eighteen specimens each and stabilized with either a unilateral external fixator or one of two configurations of the plate-type external fixator. Six specimens of each configuration were tested to determine fixation stiffness in axial compression, four-point bending, and torsion, respectively. Afterwards, dynamic loading until failure was performed in each loading mode to determine the construct strength and failure mode. Results The plate-type external fixator provided higher stiffness and strength than the traditional unilateral external fixator. The highest stiffness and strength values were observed for the classical plate-type external fixator, closely followed by the extended plate-type external fixator. Conclusions The plate-type external fixator is stiffer and stronger than the traditional unilateral external fixator under axial compression, four-point bending and torsion loading conditions.
Background
Traditionally, external fixators have been selected as osteosynthesis devices for the treatment of open tibial fractures and certain closed tibial fractures with severe injury to soft tissue [1,2]. External fixation devices provide a promising and satisfactory alternative for better soft tissue care and for preserving periosteal perfusion to the regions of fracture [3,4]. They can also be selected for use as interim or definitive devices of fracture fixation [4]. However, previous studies have demonstrated that the stiffness of a fixation device is a principal determinant of interfragmentary movement, which has a significant effect on the mechanism and progression of fracture healing [5,6]. Excessive interfragmentary movement, due to insufficient stiffness of external fixators, can result in deficient callus formation, eventually leading to delayed union or even nonunion with ultimate implant failure [7][8][9]. Meanwhile, an external fixator with high strength can contribute to durable fixation that allows progressive functional training [10]. In addition, the large profile of the implants, which tends to cause inconvenience to patients during dressing and ambulation, has led to low acceptance of the implants among patients [11,12].
External fixators used in current clinical practice have various limitations, including insufficient fixation stiffness and strength, leading to poor healing, or high construct profiles, resulting in a physical burden that is not patient-friendly [13][14][15][16][17]. It is therefore essential to design a novel prototype of an external fixator for tibial fractures that increases rigidity and strength while reducing the profile of the fixation construct. Our research group designed the plate-type external fixator, a novel prototype of an external tibial fixation device with a lower profile than traditional unilateral external fixators, for greater construct stability and durability in the treatment of open tibial fractures and certain closed tibial fractures with severe soft tissue injury; we are the first group worldwide to describe such a prototype. In addition, the length of the novel fixator can be adjusted for people of different heights.
Since the biomechanical function of the plate-type external fixator remains unclear, this study was performed to investigate the biomechanical parameters of the novel external fixator by comparing it with the unilateral external fixator [4,6,18]. We hypothesized that the plate-type external fixator would provide higher fixation stability and durability than traditional external tibial fixation devices.
Fracture model
A total of fifty-four fresh, unembalmed tibias were obtained from fifty-four voluntarily donated adult male cadavers (Department of Anatomy, Air Force Medical University, Xi'an, China) between the ages of 18 and 50. The average length of the selected tibias was 340 mm (range 310 mm to 375 mm). All selected tibia specimens were examined for bone mineral density, and osteoporosis was ruled out by means of dual-energy X-ray absorptiometry (LUNAR IDAX, GE Inc., Boston, Massachusetts, USA). The tibias were then cleaned of all soft tissue for use in this study. T-scores were used to represent the bone mineral density values.
The novel external tibial fixation device prototype, namely, the plate-type external fixator, consists of a proximal tibial fixation lath with a transverse slat at the proximal end and a distal tibial fixation lath with a transverse slat at the distal end. The distal end of the proximal fixation lath is equipped with a slot, and the proximal end of the distal fixation lath can be inserted into the slot and can slide along the lath to adjust the length of the fixator to accommodate various lengths of the human lower limb. In addition, the tibial fixation laths and the transverse slats are both equipped with locking screw holes, and all the screws used are fully threaded self-tapping locking screws (Fig. 1). With a lower profile than the traditional unilateral external fixator, the novel external tibial fixator, designed to match the shape of the crus, is expected to make it easier to adjust the plate close to the bone surface. For this study, we also lengthened the plate-type external fixator by 30 mm, namely, twice the hole spacing, to clarify whether the extended plate-type external fixator could also provide sufficient stiffness and strength.
The fifty-four tibias were randomly divided into three groups of eighteen specimens each for fixation with a classical plate-type external fixator (CPF), an extended plate-type external fixator (EPF) or the unilateral external fixator (UEF). Subsequently, the eighteen specimens of each configuration group were randomly divided into three groups of six specimens each for axial compression, four-point bending, and torsion testing, respectively.
A standardized midshaft osteotomy by means of an oscillating saw was performed in all tibias to create a 20 mm fracture gap, measured with the aid of a Vernier caliper, to simulate a comminuted tibial shaft fracture and to ensure no contact between both ends of the fracture. Eighteen specimens were stabilized with a 13-hole stainless steel CPF (300 mm in length, 21 mm in width, 10 mm in thickness, Kangding Medical Alliance Co., Ltd., Shanghai, China), with three 5 mm diameter stainless steel locking screws placed proximally in the first, third and fifth locking holes and three 5 mm diameter stainless steel locking screws placed distally in the ninth, eleventh and thirteenth locking holes. Another eighteen specimens were stabilized with a 15-hole stainless steel extended plate-type external fixator (330 mm in length, 21 mm in width proximally and 16 mm in width distally, 10 mm in thickness proximally and 5 mm in thickness distally, Kangding Medical Alliance Co., Ltd., Shanghai, China), with three 5 mm diameter stainless steel locking screws placed proximally in the second, fourth and sixth locking holes and three 5 mm diameter stainless steel locking screws placed distally in the tenth, twelfth and fourteenth locking holes. Both plate-type external fixators have a hole spacing of 15 mm.
The final eighteen specimens were stabilized with a stainless steel UEF (Kangding Medical Alliance Co., Ltd., Shanghai, China) as the control group. Three stainless steel half-pins (5 mm in diameter) were fixed per fragment and linked with pin clamps to a stainless steel rod (300 mm in length, 11 mm in diameter). The positions of the half-pins corresponded to the proximal locking screws of the CPF in the first, third and fifth holes and to the distal locking screws in the ninth, eleventh and thirteenth holes.
The choice of three locking screws/half-pins per fracture fragment in our study adhered to the AO principle of external fixation that a minimum of three screws is needed to achieve stable fixation of either fragment of the fracture. The AO recommends placing one screw near and one screw far from the fracture end in both fragments; however, for the sake of comparison, the most distant screws were inserted into the second and fourteenth locking holes in the extended plate-type external fixator group instead of into the first and fifteenth locking holes, so that the same three locking screw/half-pin positions were used in both fragments of the fracture among the three fracture fixation configuration groups. We acknowledge that this represents a limitation of our study, as the adjustment of the locking screws may influence the fixation stiffness of the extended plate-type external fixator.
The offset distance was restricted to 15 mm between the bone surface and the external plates/rods to allow the swelling of soft tissue without disturbance of the configuration and to provide sufficient space for postoperative care. We chose an offset of 15 mm instead of 20 mm or 30 mm for the purpose of increasing the fixation stability of the configuration to prevent excessive interfragmentary movements [4,19,20]. The inner locking screws/half-pins were inserted at a distance of 20 mm from the fracture end. The locking screws/half-pins used were long enough to ensure adequate purchase of the bilateral cortex.
Mechanical testing
The proximal and distal ends of all the fracture fixation configurations were potted in polymethylmethacrylate for mechanical testing (Fig. 2) [6]. Subsequently, the bone-implant constructs were mounted in the testing machine with a customized clamp. The classical plate-type external fixation constructs, the extended plate-type external fixation constructs and the unilateral external fixation constructs were tested to determine the fixation stiffness under three loading conditions (axial compression, four-point bending and torsion) (Fig. 3) [6,21]. The relative displacements at the fracture site were recorded on a computer to calculate the stiffness of the configuration. Subsequently, the three constructs underwent dynamic loading until failure under each loading mode to determine the construct strength and the failure modes. Construct strength was defined as the peak load at the moment of construct failure during progressive dynamic loading under each loading mode. Configuration failure was defined as either catastrophic fracture or nonrecoverable deformation in the region of the fracture, whichever occurred first [5,[22][23][24].
Axial compression test
Both ends of the constructs were mounted with a customized axial compression clamp in the Zwick/Roell Z005 electronic materials testing machine (Ulm, Germany) (Fig. 3a). The applied loading was gradually increased from 0 N to a maximum load of 700 N, corresponding to the weight of a 70 kg person during a one-legged stance [25], at a rate of 0.1 mm/s for six cycles. The interfragmentary displacements at the fracture site were determined by means of laser displacement sensors (LK-G10, KEYENCE Inc., JAPAN). Axial compression stiffness was determined by dividing the axial load values by the vertical displacement values and was expressed in N/mm.
After the static test, sinusoidal loading with a constant load amplitude was applied to each construct. Every 100 loading cycles, the load amplitude was increased stepwise by 100 N until configuration failure occurred; a preload of 50 N was applied, and the stepwise load increase ensured that construct failure occurred within a reasonable number of loading cycles (< 10,000) [5,21].
Four-point bending test
The constructs were placed in turn by means of a customized bending clamp on a Zwick/Roell-Z005 electronic materials testing machine (Ulm, Germany) (Fig. 3b). The bending moment was calculated by multiplying the bending force by the bending length. The distance between the lower supports was set to 200 mm, while the upper supports were separated by 100 mm. The bending length, defined as the distance between the upper and lower supports on either side of the fracture, was set to 50 mm. The bending force applied was constantly increased up to 400 N, corresponding to a bending moment of 20 Nm, at a rate of 1 mm/min. The bending stiffness was calculated by dividing the bending moment by the bending angle and was expressed in Nm/deg [26,27]. Afterwards, sinusoidal loading with a constant amplitude was applied for each configuration. The load amplitude was increased gradually every 100 loading cycles by 1 Nm until configuration failure occurred, while the preload was applied to 1 Nm [5].
Torsion test
The torsional testing was performed by using a CTS-500 microcomputer controlled torsion test machine (Hualong Testing Instrument Co., Ltd., Shanghai, China) equipped with a custom-made torsional clamp, with the proximal and distal ends of the constructs being rigidly clamped by means of the clamp (Fig. 3c). The implemented torque was constantly increased from 0 Nm to 10 Nm at a rate of 0.1 deg/s for six cycles. Torsional stiffness was obtained by dividing the torque value by the relative rotation value and was expressed in Nm/deg. Subsequently, sinusoidal loading with a constant amplitude was applied for each configuration. The load amplitude was increased every 100 loading cycles in steps of 1 Nm until construct failure occurred, while the preload was adjusted to 1 Nm [5].
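As a rough illustration of how the three stiffness values defined above can be obtained from the recorded data, the sketch below fits a least-squares slope to load-displacement, moment-angle and torque-rotation pairs; all readings are made up and are not taken from the study.

```python
def slope(x, y):
    """Least-squares slope of y against x (stiffness = slope of the linear region)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# axial stiffness in N/mm: load (N) against interfragmentary displacement (mm)
loads_N = [100, 200, 300, 400, 500, 600, 700]
disp_mm = [0.05, 0.11, 0.16, 0.21, 0.26, 0.32, 0.37]
print("axial stiffness [N/mm]:", slope(disp_mm, loads_N))

# bending stiffness in Nm/deg: moment = bending force x bending length (0.05 m here)
forces_N = [80, 160, 240, 320, 400]
angle_deg = [0.15, 0.30, 0.44, 0.60, 0.75]
moments_Nm = [f * 0.05 for f in forces_N]
print("bending stiffness [Nm/deg]:", slope(angle_deg, moments_Nm))

# torsional stiffness in Nm/deg: torque (Nm) against relative rotation (deg)
torque_Nm = [2, 4, 6, 8, 10]
rot_deg = [0.7, 1.4, 2.1, 2.8, 3.5]
print("torsional stiffness [Nm/deg]:", slope(rot_deg, torque_Nm))
```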
Statistical analysis
The collected data were statistically analyzed with SPSS 23.0 software (SPSS, Chicago, Illinois, USA). First, the results were tested for normality and homogeneity of variance. When a normal distribution and homogeneity of variance were found, the data were analyzed by means of one-way analysis of variance to determine the significance of differences in the means among the three groups. The LSD test was used for post hoc testing, if necessary. A p < 0.05 was considered statistically significant.
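The following sketch shows how the described analysis (one-way ANOVA followed by an LSD post hoc test, here implemented as pairwise t-tests on the pooled within-group variance) could be reproduced in Python; the stiffness readings are invented for illustration, and SPSS, not this code, was used in the study.

```python
import numpy as np
from scipy import stats
from itertools import combinations

# made-up axial stiffness readings (N/mm) for the three fixation groups
groups = {
    "CPF": np.array([1850.0, 1920.0, 1880.0, 1900.0, 1940.0, 1902.8]),
    "EPF": np.array([1700.0, 1730.0, 1690.0, 1720.0, 1750.0, 1705.0]),
    "UEF": np.array([1140.0, 1170.0, 1150.0, 1160.0, 1180.0, 1146.8]),
}

# one-way ANOVA across the three groups
F, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {F:.3f}, p = {p:.4g}")

# Fisher's LSD post hoc: pairwise t statistics using the pooled within-group variance
k = len(groups)
N = sum(len(g) for g in groups.values())
mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (N - k)
for a, b in combinations(groups, 2):
    ga, gb = groups[a], groups[b]
    t = (ga.mean() - gb.mean()) / np.sqrt(mse * (1 / len(ga) + 1 / len(gb)))
    p_pair = 2 * stats.t.sf(abs(t), N - k)
    print(f"LSD {a} vs {b}: t = {t:.2f}, p = {p_pair:.4g}")
```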
Age and bone mineral density
The mean age was similar among the CPF group (33.0 years), the EPF group (31.0 years) and the UEF group (Table 1).
There was also no significant difference (F = 0.100, p = 0.905) in the mean T-score among the CPF group (-0.81), the EPF group (-0.84), and the UEF group (-0.83); Table 1 displays the results. Since the mean T-score values of the three fixation groups were all greater than -1, we can conclude that all the specimens had normal bone quality and that osteoporosis was excluded.
Construct stiffness
One-way analysis of variance demonstrated that the mean axial stiffness was significantly (F = 24.642, p < 0.0001) different among the CPF group (1898.8 N/mm), the EPF group (1715.8 N/mm), and the UEF group (1157.8 N/mm). The axial stiffness parameters of the three groups are displayed in the chart in Fig. 4a. The LSD test revealed that the axial stiffness of the CPF group was significantly (p < 0.0001) higher than that of the UEF group, by 64%. The axial stiffness of the EPF group was also significantly (p < 0.0001) higher than that of the UEF group, by 48%.
There was a significant (F = 17.365, p < 0.0001) difference in the mean four-point bending stiffness among the CPF group (26.7 Nm/deg), the EPF group (24.1 Nm/deg), and the UEF group (15.0 Nm/deg). The chart in Fig. 4b shows these values. The post hoc LSD tests were performed for pairwise comparisons, which revealed that the bending stiffness of the CPF group was significantly (p < 0.0001) greater than that of the UEF group by 78%. The stiffness value of the EPF group was also significantly (p = 0.001) greater than that of the UEF group by 61%.
One-way ANOVA revealed significant (F = 130.824, p < 0.0001) differences in mean torsional stiffness among the CPF group (3.0 Nm/deg), the EPF group (2.6 Nm/deg), and the UEF group (1.3 Nm/deg). The results are displayed in Fig. 4c. The subsequent LSD test demonstrated that the torsional stiffness of the CPF group was significantly (p < 0.0001) greater than that of the UEF group, by 131%. The stiffness value of the EPF group was also significantly (p < 0.0001) greater than that of the UEF group, by 100%.
Construct strength
For axial compression, the strength of the CPF group (2792.2 N) was significantly (p < 0.0001) higher than that of the UEF group (1769.0 N), by 58%. The axial strength of the EPF group (2560.5 N) was also significantly (p < 0.0001) greater than that of the UEF group, by 45%. The results are displayed in Fig. 5a. Both CPF and EPF constructs failed by catastrophic fracture of the diaphysis through the screw hole (Fig. 6a). After fracture, the CPF constructs in four specimens displayed no implant hardware failure, screw breakage occurred in one specimen, and screw bending occurred in one specimen. Among the EPF constructs, no implant hardware failure was found in one specimen, screw and plate bending were observed in three specimens, and screw breakage occurred in two specimens. The UEF constructs failed as a result of nonrecoverable fracture gap closure due to half-pin and rod bending in five specimens and due to fracture of the diaphysis in one specimen.
In four-point bending, the construct strength of the CPF group (58.2 Nm) was significantly (p < 0.0001) greater than that of the UEF group (47.9 Nm) by 22%. The strength value of the EPF group (56.4 Nm) was also significantly (p < 0.0001) higher than that of the UEF group by 18%. A chart of these results is shown in Fig. 5b. All constructs failed by catastrophic fracture of the diaphysis (Fig. 6b). After fracture, none of the CPF constructs displayed implant hardware failure; the EPF constructs showed no implant hardware failure in three specimens and screw bending in three specimens; and the UEF constructs showed half-pin and rod bending in four specimens and half-pin breakage and rod bending fracture in two specimens.
For torsion, the strength of the CPF group (34.2 Nm) was significantly (p < 0.0001) greater than that of the UEF group (24.2 Nm), by 41%. The strength value of the EPF group (30.0 Nm) was also significantly (p < 0.0001) greater than that of the UEF group, by 24%. The results are displayed in Fig. 5c. The CPF constructs failed by screw and plate bending in four specimens, screw breakage in one specimen and spiral fracture in one specimen. The EPF constructs failed as a result of screw and plate bending in two specimens, screw breakage and plate bending in two specimens and spiral fracture in two specimens. The UEF constructs exhibited oblique fracture in four specimens (Fig. 6c) and half-pin and rod bending in two specimens, resulting in nonrecoverable deformation in the region of the fracture.
Discussion
The results of this study support the hypothesis that the plate-type external fixator can remarkably increase the stiffness of the fracture fixation construct while retaining sufficient strength. In this experiment, the two configurations employing the plate-type external fixator exhibited higher stiffness and strength than the traditional UEF in axial compression, four-point bending and torsion. Furthermore, our study showed that the stiffness and strength of the construct decreased with a thinner plate; nevertheless, the extended plate-type external fixator was still stiffer and stronger than the traditional UEF.
Construct stiffness
The stiffness of external fixators reported in the previous literature ranges from 50 N/mm to 2500 N/mm in axial compression, 10 Nm/deg to 100 Nm/deg in four-point bending, and 1 Nm/deg to 4 Nm/deg in torsion [4,6,[28][29][30][31]. The stiffness values of all of the bone-implant constructs in our study were within these ranges, and the plate-type external fixator provided remarkably higher torsional stiffness than the UEF. In those reports, the highest stiffness was achieved in fracture models with an offset distance of 5 mm, while the lowest stiffness was achieved with offset distances of up to 30 mm, suggesting that stiffness may decrease with increasing offset.
Construct strength and failure mode
According to our results, the plate-type external fixator is stronger than the UEF, thus contributing to more durable fixation that allows progressive functional training of greater intensity and duration. Moreover, under progressive dynamic loading to failure, the UEF group displayed the most implant hardware failures and the lowest peak loads.
Factors affecting the stiffness and strength
Previous studies investigating how fixation stiffness and strength can be influenced have shown that several factors affect the stability and durability of bone-implant constructs [19,29,[32][33][34][35][36][37]. The working length, defined as the distance between the first two screws on both sides of the fracture gap, has a marked impact on biomechanical fixation. Meanwhile, altering the offset distance between the plate and the bone surface can significantly change the stability and durability of the bone-implant construct. Moreover, the number and positions of the screws, the fracture gap size, and the material properties of the external fixator all influence the biomechanical parameters. In our study, the working length of the three fixation groups was set to 60 mm, namely, fourfold the hole spacing. A 15 mm offset was maintained between the plate/rod and the bone surface in the three fixation groups. In addition, the number of screws, the fracture gap size, and the material properties were the same in the three external fixation groups. Therefore, it can be concluded that the plate-type external fixator provides greater stability and durability than the UEF, because its stiffness and strength were higher in axial compression, four-point bending and torsion.
Implications of the novel fixator
The plate-type external fixator described herein by our research group is the first such prototype to be described worldwide. These tests represent the first biomechanical study comparing the stiffness and strength of the novel fixator with the UEF. In addition to the higher stiffness and strength and the lower profile, the plate-type external fixator, which is designed to match perfectly with the crus, provides sufficient skin distance by allowing the distance between the bone surface and the external plate to be adjusted. In addition, the length of the novel fixator can be adjusted for people of different heights. When applying the novel fixator to treat tibial fractures without an open approach, we need to make closed reduction a priority; if unsuccessful, limited exposure is needed.
The angle-stabilizing property

The locking plate, which is based on an angle-stabilizing property, has been used as an external fixator and can be considered a promising and satisfactory option, as has already been reported in many studies [1,2,38,39]. The plate-type external fixator, which also depends on the angle-stabilizing property, is expected to yield excellent clinical results when used to treat tibial fractures.
Limitations
There remain several limitations of this study. First, the investigation of stiffness and strength was performed in vitro, and all specimens were cleaned of soft tissue; therefore, the load applied in this fixation model may not completely simulate the multifaceted load pattern in vivo. Second, the fixation parameters were investigated only for nonosteoporotic specimens. Ideally, the biomechanical parameters for osteoporotic specimens should also be investigated, since stiffness and strength are highly affected by bone quality.
Feasibility and practicality
Despite the aforementioned limitations, we believe that the model used in this study is appropriate for comparing stiffness and strength between the plate-type external fixator and the UEF. The biomechanical parameters were investigated individually for the main loads that a fracture-fixator configuration might sustain, namely, axial compression loading, bending loading and torsional loading. These parameters are extremely useful for developing a comprehensive understanding of the relative benefits of plate-type external constructs under in vivo multifaceted loading modes, which comprise combinations of the forces investigated herein. Therefore, we believe that the results of this study are appropriate for extrapolation to human applications and can be applied to make clinical judgments.

Fig. 6 Photographs of the failure modes of the configurations for (a) axial compression: catastrophic fracture of the diaphysis through the screw hole in the classical plate-type external fixator group (black arrow); (b) four-point bending: catastrophic fracture of the diaphysis in the extended plate-type external fixator group (black arrow); and (c) torsion: oblique fracture in the unilateral external fixator group (black arrow)
Conclusions
The design of an external fixator with optimal biomechanical function and a low profile has been a research priority. In this study, we found that the plate-type external fixator was significantly stiffer and stronger than the traditional UEF, and its stiffness was closer to the optimal value. Moreover, the low profile of the plate-type external fixator reduces inconvenience during dressing and ambulation, thus increasing comfort and improving its acceptance among patients. In conclusion, the plate-type external fixator provides a promising and satisfactory alternative to the traditional UEF, given its sufficient stiffness and strength and its lower profile. | 5,693.8 | 2020-02-04T00:00:00.000 | [
"Medicine",
"Engineering"
] |
An Integrated Classification Scheme for Efficient Retrieval of Components
Reuse is the key paradigm for increasing productivity and quality in software development. To be able to reuse software components, whether code or designs, it is necessary to locate the component that can be reused. Locating components, or even realizing they exist, can be quite difficult in a large collection of components. These components need to be suitably classified and stored in a repository to enable efficient retrieval. Four schemes have previously been employed: free text, enumerated, attribute-value and faceted classification. Experience has revealed that individual classification schemes are unable to solve the problems associated with component classification; a combination of classification techniques is required to overcome the problems with individual schemes and to improve retrieval efficiency. This research examines each of the classification techniques above and proposes a new method for classifying component details within a repository.
INTRODUCTION
One of the major impediments to realizing software reusability in many organizations is the inability to locate and retrieve existing software components. There is often a large body of software available for use in a new application, but the difficulty of locating the software, or even being aware that it exists, results in the same or similar components being re-invented over and over again. To overcome this impediment, a necessary first step is the ability to organize and catalog collections of software components and provide the means for developers to quickly search a collection to identify candidates for potential reuse [2,16].
Software reuse is an important area of software engineering research that promises significant improvements in software productivity and quality [4] . Software reuse is the use of existing software or software knowledge to construct new software [11] . Effective software reuse requires that the users of the system have access to appropriate components. The user must access these components accurately and quickly, and be able to modify them if necessary.
A component is a well-defined unit of software that has a published interface and can be used in conjunction with other components to form larger units [3]. Reuse deals with the ability to combine separate, independent software components to form a larger unit of software. To incorporate reusable components into systems, programmers must be able to find and understand them. Classifying software allows reusers to organize collections of components into structures that they can search easily. Most retrieval methods require some kind of classification of the components. How to classify and which classifications to use must be decided, and all components must be put into the relevant classes. The classification system will become outdated with time and new technology; thus, it must be updated from time to time, and some or all of the components will be affected by the change and need to be reclassified.
Component classification:
The generic term for a passive reusable software item is a component. Components can consist of, but are not restricted to, ideas, designs, source code, linkable libraries and testing strategies. The developer needs to specify what components, or type of components, they require. These components then need to be retrieved from a library, assessed for their suitability, and modified if required. Once the developer is satisfied that they have retrieved a suitable component, it can be added to the current project under development. The aim of a good component retrieval system [13] is to locate either the exact component required, or the closest match, in the shortest amount of time, using a suitable query. The retrieved component(s) should then be available for examination and possible selection.
Classification is the process of assigning a class to an item of interest. The classification of components is more complicated than, say, classifying books in a library: a book library cataloguing system typically uses structured data for its classification (e.g., the Dewey Decimal number). Current attempts to classify software components fall into the following categories: free text, enumerated, attribute-value, and faceted. The suitability of each method is assessed according to how well it performs against the previously described criteria for a good retrieval system, including how well it manages 'best effort retrieval'.
Existing techniques:
Free text classification: Free text retrieval performs searches using the text contained within documents. The retrieval system is typically based upon a keyword search [16]. All of the document indexes are searched to try to find an appropriate entry for the required keyword. An obvious flaw with this method is the ambiguous nature of the keywords used. Another disadvantage is that a search may return many irrelevant components. A typical example of free text retrieval is the grep utility used by the UNIX manual system. This type of classification generates large overheads in the time taken to index the material and the time taken to make a query: all the relevant text (usually file headers) in each of the documents relating to the components is indexed, and the index must then be searched from beginning to end when a query is made. One approach to reducing the size of the indexed data is to use a signature matching technique; however, the space saving is only 10-15%.
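As a toy illustration of keyword-based free text retrieval (and of why ambiguous keywords can return irrelevant hits), the sketch below builds an inverted index over a few invented component header comments and intersects the postings of the query words.

```python
import re
from collections import defaultdict

# invented component header comments
docs = {
    "queue.c": "FIFO queue implementation with enqueue and dequeue operations",
    "stack.c": "LIFO stack with push and pop, backed by a dynamic array",
    "sort.c":  "quick sort and merge sort routines for integer arrays",
}

index = defaultdict(set)
for name, text in docs.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        index[word].add(name)

def keyword_search(query):
    """Return the components whose indexed text contains every query keyword."""
    postings = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(keyword_search("sort arrays"))   # {'sort.c'}
print(keyword_search("queue"))         # {'queue.c'}
```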
Enumerated classification: Enumerated classification uses a set of mutually exclusive classes, which are all within a hierarchy of a single dimension [6]. A prime illustration of this is the Dewey Decimal system used to classify books in a library. Each subject area, e.g., biology, chemistry, etc., has its own classifying code. A sub-code of this identifies a specialist subject area within the main subject, and these codes can again be sub-coded by author. This classification method has advantages and disadvantages pivoting around the concept of a unique classification for each item. The classification scheme will allow a user to find more than one item classified within the same section/subsection, assuming that more than one exists. For example, there may be more than one book concerning a given subject, each written by a different author.
This type of classification scheme is one-dimensional and will not allow flexible classification of components into more than one place. As such, enumerated classification by itself does not provide a good classification scheme for reusable software components.
Attribute value: The attribute-value classification scheme uses a set of attributes to classify a component [6]. For example, a book has many attributes, such as the author, the publisher, its ISBN number and its classification code in the Dewey Decimal system. These are only examples of the possible attributes; depending upon who wants information about a book, the attributes could concern the number of pages, the size of the paper used, the typeface, the publishing date, etc. Clearly, the attributes relating to a book can be:
• Multidimensional. The book can be classified in different places using different attributes.
• Bulky. All possible variations of attributes could run into many tens, which may not be known at the time of classification.
Faceted: Faceted classification schemes are attracting the most attention within the software reuse community. As in the attribute-value method, various facets classify components; however, there are usually far fewer facets than there are potential attributes (at most seven). Ruben Prieto-Diaz [2,8,12,17] has proposed a faceted scheme that uses six facets.
• The functional facets are: Function, Objects and Medium.
• The environmental facets are: System type, Functional area, Setting.
Each of the facets has to have a value assigned at the time the component is classified. An individual component can then be uniquely identified by a tuple, for example <add, arrays, buffer, database manager, billing, book store>. Clearly, the facets are ordered within the system: the facets furthest to the left of the tuple have the highest significance, whilst those to the right have a lower significance for the intended component. When a query is made for a suitable component, the query consists of a tuple similar to the classification one, although certain fields may be omitted if desired, for example <add, arrays, buffer, database manager, *, *>. The most appropriate component can be selected from those returned, since the more facets from the left that match the original query, the better the match.
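The left-to-right matching rule can be made concrete with a small sketch: components are stored as six-facet tuples, a query may contain '*' wildcards, and candidates are ranked by how many facets match from the left. The component names and facet values are invented.

```python
components = {
    "comp-17": ("add", "arrays", "buffer", "database manager", "billing", "book store"),
    "comp-42": ("add", "arrays", "file", "database manager", "inventory", "warehouse"),
    "comp-08": ("delete", "records", "buffer", "database manager", "billing", "book store"),
}

def faceted_match(query, classification):
    """Count matching facets from the left, treating '*' as a wildcard;
    a higher score means a closer match."""
    score = 0
    for q, c in zip(query, classification):
        if q != "*" and q != c:
            break
        score += 1
    return score

query = ("add", "arrays", "buffer", "database manager", "*", "*")
ranked = sorted(components, key=lambda k: faceted_match(query, components[k]), reverse=True)
print(ranked)   # comp-17 first: it matches all six facets of the query
```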
Frakes and Pole conducted an investigation into which of the above classification methods is most favorable [9]. The investigation found no statistical evidence of any differences between the four classification schemes; however, a number of observations were noted about each classification method.
RESULTS AND DISCUSSION
Proposed classification: Whilst it is obvious that some kind of classification and retrieval is required, one problem is how to actually implement it. Most other systems follow the same principle: once a new component has been identified, a librarian is responsible for its classification. The librarian must be highly proficient with the classification system employed, for two reasons. Firstly, the librarian must know how to classify the components according to the schema. Secondly, lexicographical consistency is required across the whole of the system. The classification system is separate from the retrieval system, which is for all of the users.
Most established systems tend to stick rigidly to one classification and retrieval scheme, such as free text or faceted. Others use one type of retrieval system with a separate classification system, such as the Reusable Software Library, which uses an enumerated classification scheme with free text search.
In this research, we propose a new classification scheme that incorporates the features of existing classification schemes. In this system (Fig. 1) the administrator or librarian sets up the classification scheme. The developers develop components and put them into the library. The users, who are also developers, can retrieve components from the library. A query tracking system can be maintained to improve the classification scheme. The proposed system will provide the following functionality to the users.
• Storing components
• Searching components
• Browsing components
The librarian's task is to establish the classification scheme. Each of the four main classification schemes has both advantages and disadvantages. The free text classification scheme does not provide the flexibility required for a classification system and has too many problems with synonyms and search spaces. The faceted system of classification provides the most flexible method of classifying components; however, it can cause problems when trying to classify very similar components for use within different domains. The enumerated system provides a quick way to drill down into a library, but does not provide the flexibility within the library to classify components for use in more than one way. The attribute value system allows multidimensional classification of the same component, but will not allow any ordering of the different attributes.
Our solution to these problems is to use an attribute value scheme combined with a faceted classification scheme to classify the component details. The attribute value scheme is initially used to narrow down the search space: among the available components in the repository, only components matching the given attributes are considered for faceted retrieval. Restrictions can be placed upon the hardware, vendor, operating system, type and language attributes. This is the restrictive layer of the classification architecture, reducing the size of the search space. All, some or none of the attributes can be used, depending upon the specificity or generality required. Then a faceted classification scheme is employed to access the component details within the repository. The proposed facets are similar to those used by Prieto-Diaz, but are not the same. Figure 2 shows the architecture of the proposed system. The repository is a library of component information, with the components stored on the underlying platforms. Users are provided with an interface through which they can upload, retrieve and browse components. To upload a component, the user provides the details of the component. When retrieving components, the user can give a query or supply details so that the system finds the matching components and returns the results; the user can then select from the list of components, including from different versions of a component. In addition, the descriptions of the components are stored along with the components to facilitate text searches as well. Figures 3 and 4 show the experimental prototype screens. The user can browse components by developer, language or platform. In the search screen the user can give his requirement along with fields that are optional.
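A minimal sketch of the proposed two-stage retrieval, again in Python with invented field names and values (the actual attribute set and facet vocabulary would be fixed by the librarian), first filters the repository by attribute values and then ranks the survivors by their faceted match:

```python
# Hypothetical component records combining attribute values with an ordered facet tuple.
library = [
    {"vendor": "acme", "os": "linux", "language": "C",
     "facets": ("add", "arrays", "buffer", "database manager", "billing", "book store")},
    {"vendor": "acme", "os": "windows", "language": "C",
     "facets": ("add", "arrays", "file", "compiler", "payroll", "factory")},
    {"vendor": "zenith", "os": "linux", "language": "Ada",
     "facets": ("move", "records", "buffer", "database manager", "billing", "book store")},
]

def facet_score(classification, query):
    """Count facets matching from the left; '*' matches anything."""
    score = 0
    for c, q in zip(classification, query):
        if q != "*" and c != q:
            break
        score += 1
    return score

def retrieve(library, attributes, facet_query):
    # Stage 1: the attribute values restrict the search space (any subset may be supplied).
    pool = [c for c in library if all(c.get(k) == v for k, v in attributes.items())]
    # Stage 2: the survivors are ranked by their faceted match against the query tuple.
    return sorted(pool, key=lambda c: facet_score(c["facets"], facet_query), reverse=True)

hits = retrieve(library, {"os": "linux"}, ("add", "arrays", "buffer", "*", "*", "*"))
```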
CONCLUSION
This study presented methods to classify reusable components. The four main existing methods (free text, attribute value, enumerated and faceted classification) were investigated and their advantages and disadvantages presented. The proposed classification system takes advantage of the positive sides of each classification scheme by using different schemes for different parts of a component. The attribute value scheme is initially used within the classification for specifying the vendor, platform, operating system and development language relating to the component. This allows the search space to be restricted to specific libraries according to the selected attribute values. Additionally, this method allows the searches to be either as generic or as domain specific as required. The functionality of the component is then classified using a faceted scheme.
In addition to the functional facets is a facet for the version of the component. The version of a component is directly linked to its functionality as a whole, i.e., what it does, what it acts upon and what type of medium it operates within. Future work on this classification scheme will be to refine the scheme and formalize it for implementation. A prototype system for presenting and retrieving software reusable components based on this classification schema is now under implementation. | 3,182.2 | 2008-10-31T00:00:00.000 | [
"Computer Science"
] |
Flow and Heat Transfer Performances of Liquid Metal Based Microchannel Heat Sinks under High Temperature Conditions
Developments in applications such as rocket nozzles, miniature nuclear reactors and solar thermal generation pose high-density heat dissipation challenges. In these applications, a large amount of heat must be removed in a limited space under high temperature. In order to handle this kind of cooling problem, this paper proposes liquid metal-based microchannel heat sinks. Using a numerical method, the flow and heat transfer performances of liquid metal-based heat sinks with different working fluid types, diverse microchannel cross-section shapes and various inlet velocities were studied. By solving the 3-D steady and conjugate heat transfer model, we found that among all the investigated cases, lithium and the circle were the most appropriate choices for the working fluid and microchannel cross-section shape, respectively. Moreover, inlet velocity had a great influence on the flow and heat transfer performances. From 1 m/s to 9 m/s, the pressure drop increased as much as 65 times, and the heat transfer coefficient was enhanced by about 74.35%.
Introduction
In many industrial applications, such as rocket nozzles, miniature nuclear reactors and solar thermal generation, one of the major concerns is heat dissipation [1]. If the large amount of heat cannot be removed quickly, the devices or systems cannot function or, worse, can be damaged. Hence, highly efficient heat transfer technology is important. In the aforementioned applications, the challenges of heat removal are attributed to high heat flux density, limited space for heat transfer equipment and the accompanying high temperature. In this case, liquid metal-based microchannel heat sinks are well suited.
Microchannel heat sinks have attracted worldwide attention from researchers because of their advantages, including a high heat transfer coefficient, direct integration on the substrate and compactness. Since the pioneering work by Tuckerman and Pease [2], a lot of research has been conducted to study the thermal performance and hydraulic characteristics of various microchannel heat sinks. Wu and Little [3] tested silicon and glass microchannels with nitrogen, hydrogen and argon as the working fluids. The experimental data were fitted to give Nusselt number and friction factor correlations. Xu et al. [4] investigated the pressure drop characteristics in aluminum and silicon microchannels. It was found that the Poiseuille number remains constant in the laminar flow region, as in conventional channels, but its value is different for different materials. The Poiseuille number was also determined to be constant in laminar flow by Judy et al. [5] for fused silica and stainless steel microchannels using water, isopropanol and methanol as working fluids. Qu and Mudawar [6] manufactured a copper microchannel and tested it at different heat fluxes. The obtained pressure drop and temperature distribution were validated against a mathematical model developed by the authors using the classical theory of conventional channels. Lee et al. [7] tested silicon microchannels with water and concluded that classical correlations can be applied to determine both the heat transfer and fluid flow characteristics of microchannel heat sinks. Liu and Garimella [8,9] conducted water flow tests on plexiglass microchannel samples and found that the friction factor data are in agreement with conventional correlations for both laminar and turbulent flow. According to this research, it can be concluded that single-phase fluid flow and heat transfer behaviors in microchannels are similar to those at normal scale, and conventional theory is still appropriate for describing the flow characteristics [10,11].
In order to improve the cooling performance of microchannel heat sinks, the effects of geometrical factors including cross-sectional shape, microchannel pattern, manifolds and input/output ports have been widely studied [12]. The studies show that changing the geometrical factors could result in the optimization of the average temperature, pressure drop, heat transfer coefficient and Nusselt number, as well as the flow and temperature uniformity.
Jing and He [13] numerically studied the hydraulic and thermal performances of microchannel heat sinks with three different channel cross-sectional shapes: rectangle, ellipse and triangle. The calculation results show that the triangular microchannel has the smallest hydraulic resistance and the smallest convective heat transfer coefficient. Gunnasegaran et al. [14] studied the flow and convective heat transfer characteristics of water in rectangular, trapezoidal and triangular microchannels and found that, for a given cross-sectional shape, the channel with the smaller hydraulic diameter had the larger convective heat transfer coefficient and pressure drop. Xia et al. [15] researched fluid flow in microchannels with different header shapes and inlet/outlet arrangements. They concluded that the I-type arrangement had better flow uniformity than the other cases due to the symmetrical flow distribution. Moreover, the effect of wall friction decreased the static pressure, which ultimately led to a uniform velocity distribution. Kumar et al. [16,17] summarized the works focusing on the influence of inlet and outlet arrangements on the thermal and hydraulic performance. It was observed that a vertical supply and collection system showed lower temperature non-uniformity and better liquid flow distribution.
Ahmed et al. [18] investigated the effect of geometrical parameters on laminar water flow and forced convection heat transfer characteristics in a grooved microchannel heat sink. By means of numerical calculations, they found that with the optimum groove tip length ratio, groove depth ratio, groove pitch ratio and groove orientation ratio, the Nusselt number and friction factor could be improved by 51.59% and 2.35%, respectively. Zhu et al. [19] compared the thermal and hydraulic performance of wavy microchannel heat sinks with wavy bottom ribs and wavy side ribs. The results showed that the up-down wavy design exhibited a better performance when small wavelengths were adopted. Wang et al. [20] proposed a new double-layer wavy microchannel heat sink with porous ribs; they found that the porous ribs exhibited obvious superiority at a low pumping power, while the wavy microchannel became dominant at a high pumping power. Ermagan et al. [21] replaced the conventional walls with superhydrophobic walls in wavy microchannel heat sinks to reduce the pressure loss. It was found that the beneficial effect of the superhydrophobic walls on the comprehensive performance weakened as the waviness of the channel increased.
Gong et al. [22] designed a porous/solid compound fin microchannel heat sink to enhance the cooling performance. They found that the viscous shear stress decreased at the interfaces between the fluid and the porous fins, which resulted in a reduction of the pressure drop. Ghahremannezhad et al. [23] proposed a new porous double-layered microchannel heat sink and found that the new microchannel heat sink showed not only good heat transfer performance but also a lower pressure drop. Li et al. [24] compared the thermal performance and flow characteristics of five heat sink designs, including porous-ribs single-layered, solid-ribs single-layered, solid-ribs double-layered, porous-ribs double-layered and mixed double-layered. It was found that the mixed double-layered microchannel heat sink offered a combination of low pressure drop and high thermal performance.
Besides geometric parameters, the proper selection of the working fluid type is also a crucial factor for the performance of microchannel heat sinks [25,26]. Conventional cooling liquids, like water, cannot meet the rapidly increasing demand for high-density heat dissipation because of their low thermal conductivity. In order to enhance the effective thermal conductivity, researchers have proposed adding conductive metal nanoparticles, such as copper or aluminum, to a base liquid to form suspensions. Jang and Choi [27] and Farsad et al. [28] showed numerically that water-based nanofluids enable microchannel heat sinks to dissipate heat fluxes as high as 1350-2000 W/cm². Sohel et al. [29,30] showed analytically that 0.5-4.0 vol% CuO nanofluid flow in a copper microchannel heat sink having circular channels provides far better heat transfer performance and a lower friction factor than Al2O3 and TiO2 nanofluids. Sivakumar et al. [31] also showed that CuO nanofluid provides better heat transfer coefficient enhancement than Al2O3 nanofluid. Salman et al. [32,33] showed numerically that dispersing SiO2 nanoparticles in ethylene glycol (base liquid) provides the highest Nu, followed by ZnO, CuO and Al2O3, and that Nu increases with decreasing nanoparticle size. Kumar et al. [34] conducted a thermofluidic analysis of Al2O3-water nanofluid cooled, branched wavy heat sink microchannels using a numerical method. The results showed that apart from disruption of the boundary layer and its reinitialization, vortices were formed near the secondary channel, which improved thermal performance. The heat transfer coefficient increased with increasing nanofluid concentration for any investigated Reynolds number. Wang et al. [35] numerically investigated the forced convection in microchannel heat sinks using multi-wall carbon nanotube-Fe3O4 hybrid nanofluid as the coolant working fluid. According to the results, the heat sinks, which consist of metallic foam, have better cooling performance and are able to decrease the surface temperature. However, the performance improvements brought by nanofluids are still limited due to the base fluids used. In particular, the mixed solution may easily be subject to additional troubles such as susceptibility to fouling, particle deposition or conglomeration, degeneration of solution quality and flow jamming in the channels [36,37]. Besides, the low evaporation point of these liquids makes it difficult to prevent the device from burning out, since the liquids may easily escape to the ambient air.
Apart from nanofluids, liquid metals can also provide an excellent cooling capacity because of their high thermal conductivity. In fact, liquid metals such as sodium, sodium-potassium alloy and lithium have been used as coolants in nuclear engineering for a long time. Nowadays, liquid metal cooling is widely used in nuclear power plants, and many different metals have been tried. Liquid metal cooling is also used in accelerators, solar power generation, LEDs and lasers [38][39][40]. Since Liu and Zhou [41] first proposed using low-melting-point liquid metals as an ideal coolant for the thermal management of computer chips, a lot of research has been performed to investigate their heat transfer capability. Miner and Goshal [36] carried out analytical and experimental work on liquid metal flow in a pipe. Their results indicated that the heat transfer is enhanced in both the laminar and turbulent regimes when using a liquid metal coolant. Goshal et al. [42] used a GaIn alloy-based heat sink in a cooling loop and achieved a thermal resistance of 0.22 K/W for the entire system. Hodes et al. [43] studied the optimum geometry for water-based and Galinstan-based heat sinks in terms of minimum thermal resistance. It was shown that in the optimized configurations, Galinstan can reduce the overall thermal resistance by about 40% compared to water. Zhang et al. [44] carried out a follow-up work and, according to their experimental data, liquid metal could enhance the convective heat transfer due to its superior thermophysical properties.
Currently, studies on liquid metal cooling or microchannel heat sinks are widely conducted, but research on microchannel heat sinks using liquid metal as the working fluid is scarce. Some related investigations aim at dealing with the heat dissipation problem of computer chips or electronic devices at room temperature. However, in some applications, such as rocket nozzles, miniature nuclear reactors and solar thermal generation, high-density heat needs to be dissipated or transferred in a limited space under high temperature. In order to solve the problem of high-density heat dissipation under high temperature in a limited space, this paper takes liquid metal-based microchannel heat sinks as the object of study, establishing 3D physical and mathematical numerical models so that the flow and heat transfer performances with diverse working fluid types, various microchannel cross-section shapes and different inlet velocities can be obtained and analyzed.
Physical Model
The structure of the investigated microchannel heat sink is shown in Figure 1. The cylindrical inlet and outlet passages are connected with 10 microchannels by inlet and outlet manifolds. In the cooling process, liquid metal enters the inlet passage and then flows into the microchannels through inlet manifold. After that, the liquid metal successively flows through the outlet manifold and outlet passages and finally flows out of the heat sink.
The overall size of the heat sink is L × W × H = 19 mm × 10 mm × 2 mm. The radius and height of the inlet and outlet passages are 0.5 mm and 0.5 mm, respectively. Both the inlet and outlet manifolds are 3 mm long. All the microchannels have the same size of Lc × Wc × Hc = 13 mm × 0.4 mm × 1 mm. Besides the rectangular microchannel cross-section shape, three further cross-section shapes (circle, trapezoid and parallelogram) are also studied in this paper. The different microchannel cross-section shapes are shown in Figure 2. The four cross-section shapes have the same hydraulic diameter (Dh = 0.57 mm). In order to investigate the flow and heat transfer performances of the microchannel heat sink under high temperature conditions, silicon carbide (SiC) was applied for the solid wall material and alkalis with high melting points were selected as the liquid metal coolant. The thermo-physical properties of SiC were assumed to be constant: the density, specific heat capacity and thermal conductivity of SiC were 3220 kg/m³, 800 J/(kg·K) and 80 W/(m·K), respectively.
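As a quick consistency check on the quoted geometry, the hydraulic diameter Dh = 4A/P of the 0.4 mm × 1 mm rectangular channel can be computed directly; the same check also shows why a circular channel with equal Dh has a smaller cross-sectional area, a point used later when comparing shapes. A short Python check:

```python
import math

Wc, Hc = 0.4e-3, 1.0e-3                      # rectangular channel width and height [m]
Dh_rect = 4 * (Wc * Hc) / (2 * (Wc + Hc))    # Dh = 4A/P
print(round(Dh_rect * 1e3, 3))               # ~0.571 mm, i.e. the quoted Dh = 0.57 mm

# A circular channel with the same hydraulic diameter has diameter Dh,
# so its cross-sectional area is smaller than the rectangle's:
A_circle = math.pi * Dh_rect**2 / 4
print(A_circle < Wc * Hc)   # True -- hence a higher in-channel velocity for the same inlet velocity
```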
As for the alkalis, sodium (Na), potassium (K), sodium-potassium alloy (Na-K) and lithium (Li) were considered in this paper. For these alkalis, their thermo-physical property correlations as well as the range of validity of each correlation were given [45,46]. For all the correlations, temperature (T), density (ρ), specific heat capacity (Cp), thermal conductivity (k) and dynamic viscosity (η) were respectively given in K, kg/m 3 , J/(kg·K), W/(m·K) and Pa·s.
The correlations cover the specific heat capacity, thermal conductivity and dynamic viscosity, with validity ranges of 273 K ≤ T ≤ 1573 K or 455 K ≤ T ≤ 1500 K depending on the metal; the correlation expressions themselves are given in [45,46]. According to previous research [47], when the size of the microchannel is over 200 µm, the continuum hypothesis for the fluid is still applicable. Hence, the governing equations for continuity, momentum and energy can be written as follows [48].
Continuity equation,
$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0$$
where u, v, w in m/s are respectively the velocities in the x, y, z directions. Momentum equations,
$$\rho\left(u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z}\right) = -\frac{\partial p}{\partial x} + \eta\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right)$$
with analogous equations for v and w in the y and z directions, where p in Pa is the pressure, ρ in kg/m³ is the density and η in Pa·s is the viscosity. Energy equations, in the fluid region,
$$\rho C_p\left(u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} + w\frac{\partial T}{\partial z}\right) = k_f\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} + \frac{\partial^2 T}{\partial z^2}\right)$$
in the solid region,
$$k_s\left(\frac{\partial^2 T_s}{\partial x^2} + \frac{\partial^2 T_s}{\partial y^2} + \frac{\partial^2 T_s}{\partial z^2}\right) = 0$$
The commercial software ANSYS Fluent was used to perform the numerical calculations. When the calculations were complete, the velocity, pressure and temperature fields in the solid and fluid domains of the heat sink could be obtained, and the flow and heat transfer performances could be characterized by the following parameters.
Mean heat transfer coefficient h_m [49],
$$h_m = \frac{Q}{A_w\,(T_w - T_f)}$$
where A_w is the heat transfer area between the fluid and the walls; T_f and T_w are the average temperatures of the fluid and the walls, respectively; Q is the heat exchange capacity, which can be calculated as
$$Q = q\,A_b$$
where q and A_b are the heat flux and area of the heat sink bottom surface.
Mean flow resistance coefficient f [50],
$$f = \frac{2\,\Delta P}{\rho\, v^2}$$
where ∆P is the pressure drop, and ρ and v are the density and velocity of the fluid.
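These performance parameters are simple algebraic post-processing steps on the CFD output. A small Python sketch follows; the form of f mirrors the reconstruction above, which normalizes the pressure drop by the dynamic pressure, and should be treated as an assumption rather than the paper's exact definition. The hydraulic resistance used later for validation is also included:

```python
def heat_exchange_capacity(q, A_b):
    """Q = q * A_b: bottom heat flux [W/m^2] times heated bottom area [m^2]."""
    return q * A_b

def mean_heat_transfer_coefficient(Q, A_w, T_w, T_f):
    """h_m = Q / (A_w * (T_w - T_f)), with A_w the fluid-wall contact area."""
    return Q / (A_w * (T_w - T_f))

def mean_flow_resistance_coefficient(dP, rho, v):
    """f = 2*dP / (rho*v**2): assumed normalization by the dynamic pressure."""
    return 2.0 * dP / (rho * v**2)

def hydraulic_resistance(dP, V_fr):
    """R_H = dP / V_fr, as used in the validation against Mortensen et al."""
    return dP / V_fr
```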
Mesh Independence and Model Validation
In order to ensure that the solutions are independent of grid size, three different grid systems with cell numbers of 1,627,410, 3,480,055 and 5,741,015 were generated and used to perform the mesh independence study. Under the same boundary conditions, the fluid pressure drop calculated by the latter two grid systems varies by only 0.9%, as shown in Table 1. In order to save calculating time, the grid system with a cell number of 3,480,055 is finally adopted in this paper. Besides the mesh independence test, the validity of the aforementioned numerical method was verified by means of theoretical results obtained by Mortensen et al. [51]. Figure 4 shows the comparison of hydraulic resistances between the theoretical and numerical results. The defining equation of the hydraulic resistance is
$$R_H = \frac{\Delta P}{V_{fr}}$$
where R_H is the hydraulic resistance, ∆P is the pressure drop and V_fr is the volume flow rate. Compared to the theoretical values, the mean relative differences of the calculated values are 2.9%, 3.6% and 0.38% with liquid Na, liquid K and liquid Li, respectively; it can be concluded that the numerical results agree with the existing theoretical results, which means the numerical method applied in this study has satisfactory accuracy.
The heat transfer calculation method is verified by comparison with the results of Adeel Muhammad et al. [52]. Table 2 shows the comparison of the total thermal resistance of the whole microchannel heat sink as the channel height varies from 3 mm to 9 mm. The maximum deviation is found to be less than 1%, which indicates that the heat transfer calculation method applied in this paper is reliable. The total thermal resistance of the heat sink is defined as
$$R_{th} = \frac{T_{max} - T_{in}}{Q}$$
where T_max is the maximum temperature of the whole heat sink, T_in is the inlet temperature of the liquid metal and Q is the heat exchange capacity.
Taking a rectangular microchannel cross-section, heat flux 200 W/cm², inlet temperature 600 K and inlet velocity 3 m/s as the typical condition, Figure 5 shows the temperature distributions of the heat sink with different working fluids (the temperature contours have the same scale). It can be seen that taking Li as the working fluid gives the best cooling effect while taking K as the working fluid gives the worst. For the heat sinks with Na-K alloy as the working fluid, the cooling performances are intermediate between those of the Na-based and K-based heat sinks. Moreover, the bigger the weight fraction of Na, the better the cooling effect.
For all the heat sinks shown in Figure 5, the temperature gradient is mainly in the fluid flow direction. In the direction perpendicular to the fluid flow, the temperature distribution shows good uniformity; the working fluid and the heat sink base have essentially the same temperature. Taking the heat sink with Li as an example, Figure 6 shows the temperature and velocity distributions at the center section of the microchannels (1.5 mm above the bottom surface). It can be seen that the temperature difference between the fluid and solid regions is very small, which indicates that the liquid alkali-based microchannel heat sink has excellent heat exchange ability.
Figure 7 shows the mean heat transfer coefficients and Nusselt numbers under various conditions with a rectangular microchannel cross-section, inlet temperature 600 K and inlet velocity 3 m/s. For all the investigated alkalis, taking Li as the working fluid gives the highest mean heat transfer coefficient while taking K as the working fluid gives the lowest. When the working fluid is Na, the mean heat transfer coefficient is the second highest, slightly lower than that when the working fluid is Li. Affected by the thermal conductivity, the changing tendency of the Nusselt number is different from that of the mean heat transfer coefficient. Liquid Na-K alloy-based heat sinks have bigger Nusselt numbers because of the Na-K alloys' smaller thermal conductivities. On the contrary, the relatively big thermal conductivity of Na leads to a small Nusselt number. Moreover, with the same working fluid, both the Nusselt number and the mean heat transfer coefficient vary only slightly with the change of heat flux. This is because under different heat fluxes the working fluid has different temperatures, which leads to the variation of the thermo-physical properties as well as the heat transfer performances. The convective heat transfer of liquid metals in microchannels depends on the liquid metals' thermal properties. According to convective heat transfer theory, the Nusselt number, Nu, is associated with the Reynolds number, Re, and the Prandtl number, Pr.
The considered alkalis' different thermal properties (thermal conductivity, specific heat capacity and viscosity) result in different Re and Pr, which leads to the various Nu shown in Figure 7b. Moreover, based on the definition equation of Nu, the mean heat transfer coefficient can be calculated as
$$h_m = \frac{Nu\cdot k}{D_h}$$
Influenced by the thermal conductivity, the relative magnitude of the mean heat transfer coefficient among the investigated alkalis is different from that of Nu. Because of their bigger thermal conductivities, the mean heat transfer coefficients of liquid Li and Na are obviously higher than those of liquid K and the liquid Na-K alloys. Figure 8 shows the pressure drops and flow resistance coefficients under various conditions with a rectangular microchannel cross-section, inlet temperature 600 K and inlet velocity 3 m/s. It can be seen that the pressure drop and the flow resistance coefficient change to different degrees with the change of heat flux, among which the biggest change happens when the working fluid is K. This is because the changes of K's temperature and thermo-physical properties are the greatest. Furthermore, the flow resistance coefficient shows no great difference among the investigated alkalis, while the pressure drop is significantly smaller when the working fluid is Li. This is because the pressure drop is positively associated with the flow resistance as well as the working liquid's density, as shown in Equation (26). Li's density is the smallest among the considered liquid metals, which leads to the smallest pressure drop. Considering the flow and heat transfer performances synthetically, Li is the optimum working fluid among all the investigated alkalis.
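The competition between Nu and thermal conductivity described here is easy to see with a few one-line helper functions (Python; the property values would come from the correlations in [45,46] or a property library and are deliberately not hard-coded here):

```python
def reynolds(rho, v, D_h, eta):
    """Re = rho*v*D_h/eta."""
    return rho * v * D_h / eta

def prandtl(eta, c_p, k):
    """Pr = eta*c_p/k; very small for liquid metals because k is large."""
    return eta * c_p / k

def h_from_Nu(Nu, k, D_h):
    """h_m = Nu*k/D_h: a metal with a modest Nu but a large k can still give the largest h_m."""
    return Nu * k / D_h
```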
For a comparative analysis, the pumping power (pressure drop) should be kept consistent [53][54][55][56]. Therefore, in order to obtain the optimum working fluid, the mean heat transfer coefficient, h_m, is compared under the same pressure drop. Figure 9 shows the changing trend of h_m with different pressure drops for a rectangular microchannel cross-section, inlet temperature 600 K and heat flux 200 W/cm². Obviously, since using liquid Li gives the biggest heat transfer coefficient, it is the optimum working fluid among all the investigated alkalis.
Effects of Microchannel Cross-Section
In order to study the effects of the microchannel cross-section on the flow and heat transfer performances, four microchannel cross-section shapes (rectangle, circle, trapezoid and parallelogram, as shown in Figure 2) are considered in this paper. Figure 10 shows the temperature distributions of the heat sink with different microchannel cross-section shapes when the working fluid is Li, the heat flux is 200 W/cm², the inlet temperature is 600 K and the inlet velocity is 3 m/s. Obviously, the temperature distribution is hardly influenced by the change of microchannel cross-section shape.
Figure 11 shows the mean heat transfer coefficients and Nusselt numbers under various conditions with working fluid Li, inlet temperature 600 K and inlet velocity 3 m/s. The value of the Nusselt number depends on the channel geometry [13,50]. Within the study scope shown in Figure 11, using a rectangular, trapezoidal or parallelogram cross-section shaped microchannel gives basically the same Nusselt number, while using a circular microchannel gives a significantly higher Nusselt number. This is because, under the same hydraulic diameter, the area of the circular cross-section is smaller than that of the rectangular, trapezoidal and parallelogram cross-sections; hence, the working fluid has a bigger Reynolds number inside the circular microchannel, which leads to the higher Nusselt number. According to the equation h_m = Nu·k/D_h, the variation of the mean heat transfer coefficient is in accordance with the Nusselt number. So, as shown in Figure 11a, using a circular cross-section gives the highest heat transfer coefficient. Compared to the other three cross-section shapes, using the circular cross-section could increase the mean heat transfer coefficient by about 14,000 W/(m²·K), which indicates that the heat sink with a circular microchannel cross-section has the best heat transfer performance. Figure 12 shows the pressure drops and flow resistance coefficients under various conditions with working fluid Li, inlet temperature 600 K and inlet velocity 3 m/s. Obviously, the heat sink with the parallelogram cross-section has the lowest pressure drop and flow resistance coefficient while the heat sink with the circular cross-section has the highest. This is because, with the same hydraulic diameter, the four investigated microchannel cross-section shapes have different cross-sectional areas, which leads to different velocity magnitudes as well as different pressure drops.
Among the four different kinds of microchannels, the circular microchannel's cross-sectional area is the smallest. Hence, under the same heat sink inlet velocity, the working fluid within the circular microchannel has the highest speed (as shown in Figure 13), which leads to the highest pressure drop. In order to find the best cross-section shape, the performance evaluation criterion PEC, which represents the comprehensive performance of flow and heat transfer, is calculated. The PEC is defined as [57]
$$PEC = \frac{Nu/Nu_0}{\left(f/f_0\right)^{1/3}}$$
where Nu_0 and f_0 are the benchmark Nusselt number and mean flow resistance coefficient. In this paper, the Nu and f of the heat sink with a rectangular microchannel cross-section under heat flux 100 W/cm² are appointed as Nu_0 and f_0. Figure 14 shows the PEC values of the heat sinks with different microchannel cross-section shapes. Obviously, using the circular microchannel gives the highest PEC values, indicating that the circle is the best choice for the microchannel cross-section shape.
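A one-line helper makes this comparison reproducible once Nu and f are extracted from the simulations; the (Nu/Nu0)/(f/f0)^(1/3) form used above is the commonly cited definition and is assumed here to match reference [57]:

```python
def pec(Nu, f, Nu0, f0):
    """Performance evaluation criterion: heat transfer gain per unit friction penalty."""
    return (Nu / Nu0) / (f / f0) ** (1.0 / 3.0)
```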
Effects of Inlet Velocity
Five inlet velocities (1 m/s, 3 m/s, 5 m/s, 7 m/s and 9 m/s) are considered in this paper so that the effects of inlet velocity on the flow and heat transfer performances could be investigated. Figure 15 shows temperature distributions of the heat sink with different inlet velocities under rectangular microchannel cross-section, heat flux 200 W/cm 2 , inlet temperature 600 K and Li as the working fluid. It could be seen that the inlet velocity has a significant effect on the temperature distribution. With the increase of inlet velocity, the temperature of the heat sink becomes obviously lower. Under the inlet velocity of 1 m/s, the high temperature region covers most of the heat sink, indicating the heat sink's poor cooling effect. When the inlet velocity increases to 3 and 5 m/s, the high temperature region becomes much smaller. When the inlet velocity reaches 7 and 9 m/s, the entire heat sink is basically at a relatively low temperature level.
Figure 16 shows the mean heat transfer coefficients and Nusselt numbers under various conditions with working fluid Li, rectangular microchannel cross-section and inlet temperature 600 K (using the pressure drop under the corresponding inlet velocity as the x-axis label so that the heat transfer performance can be compared properly [53][54][55][56]). For all the investigated heat fluxes, both the heat transfer coefficient and the Nusselt number increase with the increase of inlet velocity. From 1 m/s to 9 m/s, the heat transfer coefficient and Nusselt number are enhanced by at most 74.35% and 80.5%, respectively. Moreover, the increasing rates of the heat transfer coefficient and Nusselt number are considerably high at low inlet velocity, but quite low at high inlet velocity. This implies that at small inlet velocity the cooling ability of the heat sink can be improved by increasing the inlet velocity; however, when the inlet velocity is already high, continuing to increase it brings little benefit.
Figure 17 shows the pressure drops and flow resistance coefficients under various conditions with working fluid Li, rectangular microchannel cross-section and inlet temperature 600 K. It can be seen that the pressure drop rises rapidly with the increase of inlet velocity. From 1 m/s to 9 m/s, the pressure drop increases by as much as 65 times. As for the flow resistance coefficient, it decreases gradually with the increase of the inlet velocity. This is because, although the flow resistance coefficient is proportional to the pressure drop (∆P), it is inversely related to the square of the velocity (v²) (according to Equation (26)).
Conclusions
Numerical investigations of the flow and heat transfer performances in liquid metal-based microchannel heat sinks are presented in this paper. During the simulation processes, different working fluid types, microchannel cross-sectional geometries and inlet velocities are considered so that their influences on the heat sink's characteristics can be analyzed. Based on the calculation results, the following conclusions are drawn: (1) Among all the seven investigated alkalis, lithium is the best option for the working fluid because the lithium-based microchannel heat sink has the best cooling ability and the lowest pressure drop. (2) For the four considered microchannel cross-section types (rectangle, circle, trapezoid and parallelogram), utilizing a circular microchannel cross-section gives a higher mean heat transfer coefficient, while using a parallelogram gives the lowest pressure drop. Considering the flow and heat transfer performances comprehensively, the circle is the optimum choice for the microchannel cross-section shape because the circular microchannel has the highest PEC value. (3) Inlet velocity has a significant influence on the heat sink's flow and heat transfer performances. When the inlet velocity rises from 1 m/s to 9 m/s, the heat transfer coefficient is enhanced by at most 74.35%, while the pressure drop increases by up to 65 times. In order to obtain a favorable overall performance, the inlet velocity should be selected carefully. | 9,486.4 | 2022-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Comparative Analysis of Free Optical Vibration of Lamination Composite Optical Beams Using the Boubaker Polynomials Expansion Scheme and the Differential Quadrature Method
1 Department of Mathematics, Faculty of Science and Letters, Pamukkale University, 20020 Denizli, Turkey 2 Institut Supérieur des Etudes Technologiques de Radès, 2098 Tunis, Tunisia 3 Equipe de Physique des Dispositifs à Semiconducteurs, Faculté des Sciences de Tunis, Campus Universitaire, 2092 Tunis, Tunisia 4 Department of Mechanical Engineering, Pamukkale University, 20017 Denizli, Turkey 5 Department of Software Engineering, Faculty of Engineering, Gümüşhane University, 29100 Gümüşhane, Turkey 6 Department of Geomatic Engineering, Faculty of Engineering, Gümüşhane University, 29100 Gümüşhane, Turkey 7 Department of Mathematics, Quaid-i-Azam University, Islamabad 45320, Pakistan
Introduction
Free optical vibration of generally laminated beams has been of increasing interest in the literature of the last decades [1][2][3][4][5][6][7][8][9][10][11][12]. Vo et al. [1] investigated free vibration of axially loaded thin-walled composite beams with arbitrary lay-ups. The proposed model was based on equations of motion for flexural-torsional coupled vibration which were derived from Hamilton's principle. In the same context, Li et al. [2] studied the free vibration and buckling behaviors of axially loaded laminated composite beams using the dynamic stiffness method. The model took into account the influences of axial force, Poisson effect, axial deformation, shear deformation and rotary inertia. Hu et al. [4], Karami et al. [5], and Malekzadeh et al. [6] proposed a differential quadrature element method (DQEM) based on Hamilton's principle for free vibration analysis of arbitrary nonuniform Timoshenko beams on elastic supports.
In this paper, a model for the vibration analysis of a laminated composite beam has been developed and studied using two resolution protocols. For the beam used, it is assumed that the Bernoulli-Euler hypothesis is valid. The results obtained by the two methods are compared. It has been concluded that all of the results are very close to each other.
Problem Formalization
The normal stress in the jth layer of the composite laminated beam shown in Figure 1 can be written in the following way:
$$\sigma_j = E_j\,\varepsilon \qquad (1)$$
According to the Bernoulli-Euler hypotheses, the deformation at a distance z from the neutral plane is
$$\varepsilon = \frac{z}{\rho} \qquad (2)$$
where ρ is the radius of curvature of the beam. The relationship between the normal stress and the bending moment is given by
$$M = b\int_{-h/2}^{h/2}\sigma\, z\, dz \qquad (3)$$
or
$$M = \frac{b}{3\rho}\sum_{j=1}^{N} E_j\left(z_j^{3}-z_{j-1}^{3}\right) \qquad (4)$$
where h and b are the height and the width of the beam, N is the number of layers and z_j is the distance between the outer face of the jth layer and the neutral plane. The relationship between the bending moment and the curvature can then be written as follows:
$$M = \frac{E_{ef}\,I_{yy}}{\rho} \qquad (5)$$
where E_ef is the effective elasticity modulus and I_yy is the cross-sectional inertia moment of the beam. Flexural motion of a linear elastic laminated composite beam without shear or rotary inertia effects is described by the Bernoulli-Euler equation:
$$E_{ef}\,I_{yy}\,\frac{\partial^4 w}{\partial x^4} + m\,\frac{\partial^2 w}{\partial t^2} = 0 \qquad (6)$$
where m is the mass of the beam per unit length. As a solution of (6), a separation-of-variables solution for harmonic free vibration can be used:
$$w(x,t) = W(x)\,e^{\,i\omega_n t} \qquad (7)$$
where ω_n is the frequency and W(x) is the mode shape function of the lateral deflection. Substitution of this solution into (6) eliminates the time dependency and yields the following characteristic value problem:
$$\frac{d^4 W}{dx^4} - \frac{\lambda^4}{L^4}\,W = 0 \qquad (8)$$
where λ is the dimensionless frequency of the beam vibrations given by
$$\lambda^4 = \frac{m\,\omega_n^2\,L^4}{E_{ef}\,I_{yy}} \qquad (9)$$
For the cantilever composite laminated beam shown in Figure 1, the boundary conditions at the two ends are
$$W(0)=0,\qquad \left.\frac{dW}{dx}\right|_{x=0}=0 \qquad (10)$$
due to the deflection and rotation both being zero at the clamped end, and
$$\left.\frac{d^2 W}{dx^2}\right|_{x=L}=0,\qquad \left.\frac{d^3 W}{dx^3}\right|_{x=L}=0 \qquad (11)$$
due to the bending moment and shear force both vanishing at the free end. The analytical solution of (8) subjected to (10) and (11) yields the frequency equation:
$$1 + \cos\lambda\,\cosh\lambda = 0 \qquad (12)$$
which may be found in the relevant literature [13].
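The transcendental frequency equation (12) has to be solved numerically for its roots λ_n; the natural frequencies then follow from (9). A brief Python check of the first few roots (the bracketing intervals around (2n − 1)π/2 are a standard choice, not taken from the paper):

```python
import numpy as np
from scipy.optimize import brentq

# Frequency equation (12) for a clamped-free beam: 1 + cos(lam)*cosh(lam) = 0
f = lambda lam: 1.0 + np.cos(lam) * np.cosh(lam)

# The n-th root lies near (2n - 1)*pi/2, which provides simple brackets for a root finder
lam_n = [brentq(f, (2*n - 1)*np.pi/2 - 1.0, (2*n - 1)*np.pi/2 + 1.0) for n in (1, 2, 3, 4)]
print(lam_n)   # approx. [1.8751, 4.6941, 7.8548, 10.9955]

# With (9), the corresponding natural frequencies follow as
# omega_n = (lam_n / L)**2 * sqrt(E_ef * I_yy / m) for given beam data.
```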
DQM Solution
The DQM is carried out for the approximate solution of the characteristic value problem in (8) with the boundary conditions given by (10) and (11) by first discretizing the interval [0, L] such that
$$0 = x_1 < x_2 < \cdots < x_{N-1} < x_N = L$$
where N is the number of grid points. Application of the DQM to discretize the derivative in (8) leads to
$$\sum_{j=1}^{N} A^{(4)}_{ij}\,W_j = \frac{\lambda^4}{L^4}\,W_i$$
where A^(4)_ij are the weighting coefficients of the fourth-order derivative, which can be calculated using the explicit relations given by Shu [14]. Note that two boundary conditions are specified at each end. These two conditions at the same point pose a great challenge for the DQM, because there is only one quadrature equation at each point, which prevents implementing both boundary conditions directly. We use the δ-point technique to eliminate the difficulties in implementing two conditions at a single boundary point (Figure 2). Following the same approach presented in [15], the boundary conditions at x = 0 (zero deflection and zero slope) and at x = L (zero bending moment and zero shear force) are discretized by writing their quadrature analogs at the boundary points and the adjacent δ-points. The assembly of the quadrature analogs of the governing equation and of the boundary conditions yields the following set [14] of linear equations:
$$[A_{bb}]\{W_b\} + [A_{bd}]\{W_d\} = \{0\},\qquad [A_{db}]\{W_b\} + [A_{dd}]\{W_d\} = \frac{\lambda^4}{L^4}\{W_d\}$$
where the subscripts b and d indicate the grid points used for writing the quadrature analogs of the boundary conditions and of the governing differential equation, respectively. By matrix substructuring, the first part gives
$$\{W_b\} = -[A_{bb}]^{-1}[A_{bd}]\{W_d\}$$
and back-substituting this into the second part, one gets
$$[S]\{W_d\} = \frac{\lambda^4}{L^4}\{W_d\},\qquad [S] = [A_{dd}] - [A_{db}][A_{bb}]^{-1}[A_{bd}]$$
where [S] is of order (N − 4) × (N − 4). Both the eigenvalues, which are proportional to the squared frequencies, and the eigenvectors {W_d}, which describe the mode shapes of the freely vibrating beam, may be obtained simultaneously from the [S] matrix.
Figure 2: A one-dimensional quadrature grid with adjacent δ-points.
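The whole procedure fits in a short script. The sketch below (Python/NumPy) builds the first-order DQ weighting matrix from Lagrange interpolation, obtains the higher-order matrices by repeated multiplication, applies the four cantilever boundary conditions by eliminating the unknowns at the two boundary points and the two adjacent points, and solves the reduced eigenproblem. The grid choice, the value of N and the way the boundary rows are formed are illustrative simplifications rather than the paper's exact δ-point implementation:

```python
import numpy as np

def dq_weights_1(x):
    """First-order DQ weighting matrix from Lagrange interpolation (Shu-type formula)."""
    n = len(x)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
    A1 = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:
                A1[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A1[i, i] = -A1[i].sum()
    return A1

L, N = 1.0, 21
x = 0.5 * L * (1.0 - np.cos(np.pi * np.arange(N) / (N - 1)))   # Chebyshev-Gauss-Lobatto grid

A1 = dq_weights_1(x)
A2 = A1 @ A1
A3 = A2 @ A1
A4 = A2 @ A2          # higher-order matrices as products of the first-order one

# Quadrature analogs of the boundary conditions: clamped at x = 0, free at x = L
B = np.vstack([np.eye(N)[0],   # W(0) = 0
               A1[0],          # W'(0) = 0
               A2[-1],         # W''(L) = 0
               A3[-1]])        # W'''(L) = 0

b = [0, 1, N - 2, N - 1]                        # unknowns eliminated via the boundary rows
d = [i for i in range(N) if i not in b]

Wb_of_Wd = -np.linalg.solve(B[:, b], B[:, d])   # {W_b} = -[B_bb]^{-1} [B_bd] {W_d}
S = A4[np.ix_(d, d)] + A4[np.ix_(d, b)] @ Wb_of_Wd

mu = np.sort(np.linalg.eigvals(S).real)         # mu = (lambda / L)**4
lam = (mu[mu > 0][:3] * L**4) ** 0.25
print(lam)                                      # expected to approach 1.875, 4.694, 7.855
```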
BPES Solution
The BPES [16][17][18][19][20][21][22][23] is applied to (8) by setting the expression
$$W(x) = \frac{1}{2N_0}\sum_{k=1}^{N_0}\lambda_k\,B_{4k}\!\left(x\,\frac{r_k}{L}\right)$$
where B_4k are the 4k-order Boubaker polynomials, x ∈ [0, L] is the normalized coordinate, r_k are the minimal positive roots of B_4k, N_0 is a prefixed integer, and λ_k|k=1,...,N_0 are unknown weighting real coefficients. Consequently, it follows for (8) that
$$\frac{1}{2N_0}\sum_{k=1}^{N_0}\lambda_k\left[\frac{d^4}{dx^4}B_{4k}\!\left(x\,\frac{r_k}{L}\right) - \frac{\lambda^4}{L^4}\,B_{4k}\!\left(x\,\frac{r_k}{L}\right)\right] = 0$$
The related boundary conditions are expressed through (10) and (11). The BPES protocol ensures their validity regardless of the features of the main equation: thanks to the properties of the Boubaker polynomials and of their first derivatives at x = 0 and at x = r_k, the boundary conditions are inherently verified. The BPES solution is obtained through five steps (a numerical sketch of the polynomials B_4k and their minimal positive roots r_k is given after the step list): (i) Integrating, for a given value of N_0, the whole expression given by the previous equation along the interval [0, L].
(v) Ranging the obtained frequencies.
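For completeness, the quantities B_4k and r_k used above can be generated numerically. The sketch below uses the commonly quoted Boubaker recurrence B_0 = 1, B_1 = x, B_2 = x² + 2, B_m = x·B_{m−1} − B_{m−2} for m > 2 (assumed here; it should be checked against refs. [16-23] before relying on it) and extracts the minimal positive root of each B_4k:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def boubaker(m):
    """Coefficients (low -> high degree) of the m-th Boubaker polynomial via the quoted recurrence."""
    B = [np.array([1.0]), np.array([0.0, 1.0]), np.array([2.0, 0.0, 1.0])]
    for n in range(3, m + 1):
        B.append(P.polysub(P.polymulx(B[-1]), B[-2]))   # B_n = x*B_{n-1} - B_{n-2}
    return B[m]

def minimal_positive_root(coeffs):
    roots = np.roots(coeffs[::-1])                      # np.roots expects high -> low order
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[real > 0].min()

for k in (1, 2, 3):
    r_k = minimal_positive_root(boubaker(4 * k))
    print(4 * k, r_k)    # e.g. B_4 = x**4 - 2 gives r_1 = 2**0.25 ~ 1.1892
```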
Results and Discussion
Natural frequencies of the symmetric laminated composite cantilever beam have been estimated using the Boubaker Polynomials Expansion Scheme (BPES) and the Differential Quadrature Method (DQM), for the parameter values indicated in Table 1. Figure 3 presents the obtained values.
The results from the two methods are quite close to each other. The natural frequency alteration that results from a change in the stacking sequence can cause resonance if the altered frequency comes close to the working frequency. Hence, the stacking sequences of laminated composite beams have to be selected carefully.
Conclusion
This work deals with two protocols for the calculation of the natural frequencies of a symmetric laminated composite cantilever beam. Calculations performed by means of the Boubaker Polynomials Expansion Scheme (BPES) and the Differential Quadrature Method (DQM) yielded coherent and similar results. All considered results have been seen to be in accordance with each other. Changes in the stacking sequence, which may allow tailoring of the material to achieve desired natural frequencies and respective mode shapes without changing its geometry, are the subject of following studies.
Figure 3: Comparison of BPES and DQM frequencies for cross-ply laminated beam.
Table 1: Geometry and material properties of the composite materials. | 1,671.6 | 2012-01-30T00:00:00.000 | [
"Physics"
] |
Theoretical Modeling of Intensity Noise in InGaN Semiconductor Lasers
This paper introduces modeling and simulation of the noise properties of the blue-violet InGaN laser diodes. The noise is described in terms of the spectral properties of the relative intensity noise (RIN). We examine the validity of the present noise modeling by comparing the simulated results with the experimental measurements available in literature. We also compare the obtained noise results with those of AlGaAs lasers. Also, we examine the influence of gain suppression on the quantum RIN. In addition, we examine the changes in the RIN level when describing the gain suppression by the case of inhomogeneous spectral broadening. The results show that RIN of the InGaN laser is nearly 9 dB higher than that of the AlGaAs laser.
Introduction
InGaN laser diodes have been the subject of considerable attention because of their applications in high-density optical disc storage and optical data processing. In particular, blue-violet laser diodes operating at a wavelength around 400 nm are required for Blu-ray disc systems if the disc storage capacity is to be increased up to 25 Gbytes [1]. Much progress in the development of violet-blue lasers has been made since the first operation of such a laser was reported by Nakamura et al. [2,3]. Meanwhile, several groups have reported continuous-wave operation at room temperature using different fabrication methods [4][5][6][7][8].
A typical limiting factor for the optic-disc application is the noise in the laser emission, which may cause errors in the reading/recording processes. Semiconductor laser radiation intrinsically shows intensity and phase fluctuations which induce broadening of the spectral line. These fluctuations are associated with quantum transitions of charge carriers between the conduction and valence bands through the recombination processes of charge carriers and the processes of photon emission and absorption [9]. This intrinsic noise is unavoidable and is called quantum noise [10] or "optical shot noise" [11]. The laser noise may be amplified by other effects such as competition among the oscillating modes [12], external modulation [13], and external optical feedback [14]. These types of noise have been intensively investigated in both theory and experiment for near infrared GaAs and InP-based laser diodes [14][15][16][17][18][19][20]. Experimental studies on the noise of blue-violet InGaN lasers showed that the properties of noise in the blue-violet laser are not so different qualitatively from those in the infrared lasers [11]. However, the quantum noise in the blue-violet laser is eight times higher than that in the near infrared lasers in terms of RIN at the same output power [11]. To the best of the authors' knowledge, no theoretical investigations on the noise issue in these short-wavelength laser candidates have been reported.
The dynamics and noise of semiconductor lasers are described by a set of stochastic rate equations that describe the mechanisms of time evolution of the adding/dropping of photons and charge carriers [21]. The intensity noise is characterized in terms of the spectral properties of RIN. Analysis of noise and dynamics in semiconductor lasers is dependent on the form of the optical gain in the rate equations, and a more realistic model should include the effect of the gain suppression [22]. Abdullah [23] indicated that the gain suppression has an important effect on the dynamics of InGaN-based laser diodes, because these lasers may operate with high power where the gain suppression is pronounced. Ahmed and Yamada [24] showed that, in the limit of high power, nonlinear gain is inaccurately described by the commonly used third-order gain expression, which corresponds to partial homogeneous broadening of gain [25]. Instead, the nonlinear gain expression should include higher-order terms from the infinite gain expansion in terms of the electric field intensity to account for the case of high power emission [23].
In this paper, we introduce modeling and simulation of the spectral properties of RIN in the blue-violet InGaN laser diode. The studies are based on appropriate modeling of the rate equations and intensive computer simulations of the laser dynamics and noise. We examine the validity of the present modeling by comparing the simulated results with previous publications such as [11]. In addition, we compare the noise results of 410 nm InGaN lasers with those of 830 nm AlGaAs lasers. Also, we study the influence of gain suppression on the noise properties and compare the noise results when using the expressions of inhomogeneous gain broadening. We show that RIN is suppressed remarkably with the increase in the injection current in the regime near the threshold level and that the RIN level in the InGaN laser is nearly 9 dB higher than that of the AlGaAs laser. The noise results are compared with the experimental results in [11]. The increase in the gain suppression was found to suppress the quantum noise within 1 dB/Hz. Finally, we point out that the case of inhomogeneously broadened gain overestimates the RIN level by about 4 dB in the limit of high power emission.
Theoretical Model
The noise properties of the InGaN laser are described by solving the following rate equations of the photon number and the injected carrier number in the active region at a given injection current. The optical gain is composed of a linear term and a nonlinear term [25]. The linear term is a linear function of the carrier number and is characterized by the gain slope and the injected carrier number at transparency [25], whereas the nonlinear gain is given in terms of the injected carrier number and two characteristic parameters of the nonlinear gain [25]. The influence of the nonlinear gain is to suppress the optical gain below the threshold gain level due to the increase in the photon number when the laser operates above threshold [24]. Equation (3) also involves the confinement factor of the optical field in the active layer and the volume of that layer, while the other parameters in (1) include the electron charge and the electron lifetime due to spontaneous emission. The last terms of the rate equations are Langevin noise sources with zero mean values and are added to the equations to account for the intrinsic fluctuations of the photon and carrier numbers, respectively [9]. These noise sources are assumed to have Gaussian probability distributions and to be δ-correlated processes [9]. The frequency content of intensity fluctuations is measured in terms of RIN, which is calculated from the fluctuations of the photon number about its time-average value. Over a finite time interval, RIN is given as a function of the noise frequency by (5) [14]. The noise performance of the laser is also evaluated in terms of the average value of the RIN components at frequencies lower than 100 MHz, LF-RIN. The power emitted from the front facet is determined from the photon number via (6), in which the emission wavelength, Planck's constant, and the power reflectivities at the front and back facets appear [9].
Numerical Calculations
The rate equations (1) are solved by the fourth-order Runge-Kutta method using a time-integration step of 10 ps. At each integration instant, the noise sources are generated following the technique developed in [14], using two uniformly distributed random numbers generated by the computer. RIN of the total output is calculated from the fast Fourier transform (FFT) of the time fluctuations of the photon number via (5). In the calculations, we assume a 410 nm multiple-quantum-well (MQW) InGaN single-mode laser with the parameters listed in Table 1. The active region is assumed to contain three quantum wells (QWs) with a well thickness, barrier thickness, and stripe width of 5 nm, 10 nm, and 5 μm, respectively. The optical confinement factor in each quantum-well layer is 0.0342.
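The following sketch illustrates this simulation procedure on a generic single-mode rate-equation model. It is not the authors' code: the parameter values, the simplified gain expression, and the Langevin-source magnitudes are illustrative assumptions (the Table 1 values and the exact correlation functions were not recoverable from the extracted text), and cross-correlations between the noise sources are neglected for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not the paper's Table 1 values)
tau_s = 2.0e-9     # carrier lifetime [s]
tau_p = 2.0e-12    # photon lifetime [s]
a     = 1.0e4      # gain coefficient per carrier [1/s]
N_tr  = 1.0e8      # transparency carrier number
eps   = 1.0e-7     # gain-suppression coefficient
beta  = 1.0e-4     # spontaneous-emission factor
q     = 1.602e-19  # electron charge [C]

def gain(N, S):
    # linear gain with a simple suppression factor (stand-in for forms (2)/(4))
    return a * (N - N_tr) / (1.0 + eps * S)

def derivs(N, S, I, FN, FS):
    G = gain(N, S)
    dS = (G - 1.0 / tau_p) * S + beta * N / tau_s + FS
    dN = I / q - N / tau_s - G * S + FN
    return dN, dS

def simulate(I, dt=10e-12, n_steps=2**16):
    N = N_tr + 1.0 / (a * tau_p)                      # approx. threshold carrier number
    S = max(tau_p * (I / q - N / tau_s), 1.0)         # approx. steady-state photon number
    S_trace = np.empty(n_steps)
    for k in range(n_steps):
        # Langevin sources: zero-mean Gaussian with variance ~ (diffusion coeff.)/dt,
        # redrawn and held constant over each step; cross-correlations neglected.
        Rsp = beta * N / tau_s
        FS = np.sqrt(max(2.0 * Rsp * S, 0.0) / dt) * rng.standard_normal()
        FN = np.sqrt(max(2.0 * N / tau_s, 0.0) / dt) * rng.standard_normal()
        k1N, k1S = derivs(N, S, I, FN, FS)
        k2N, k2S = derivs(N + 0.5*dt*k1N, S + 0.5*dt*k1S, I, FN, FS)
        k3N, k3S = derivs(N + 0.5*dt*k2N, S + 0.5*dt*k2S, I, FN, FS)
        k4N, k4S = derivs(N + dt*k3N, S + dt*k3S, I, FN, FS)
        N += dt * (k1N + 2*k2N + 2*k3N + k4N) / 6.0
        S += dt * (k1S + 2*k2S + 2*k3S + k4S) / 6.0
        S_trace[k] = S
    return S_trace

def rin_spectrum(S_trace, dt):
    # One-sided PSD of the relative photon-number fluctuations (FFT-based estimate of Eq. (5))
    s = S_trace[len(S_trace) // 4:]            # discard the start-up transient
    ds = s - s.mean()
    T = len(s) * dt
    psd = 2.0 * dt**2 / T * np.abs(np.fft.rfft(ds))**2
    f = np.fft.rfftfreq(len(s), dt)
    return f, psd / s.mean()**2                # RIN in 1/Hz

S_trace = simulate(I=50e-3)                    # 50 mA drive, illustrative
f, rin = rin_spectrum(S_trace, dt=10e-12)
mask = (f > 1e7) & (f < 1e8)
print("LF-RIN ~", 10.0 * np.log10(rin[mask].mean()), "dB/Hz")
```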
RIN in InGaN Laser Diodes
Figure 1 plots the spectral characteristics of RIN at different injection levels: near above threshold (1.01 and 1.1 times the threshold current), above threshold (1.5 times), and far above threshold (3.0 times). The RIN spectra are flat (white noise) in the low-frequency regime due to the small amplitude of the intensity fluctuations and the large signal-to-noise ratio. In the high-frequency regime, the spectra exhibit the well-known carrier-photon resonance peak around the relaxation frequency. The figure shows suppression of RIN with the increase in the injection current due to the improvement in the degree of laser coherency [9]. This increase in the injection level is also associated with a shift of the relaxation oscillation peak to higher frequencies: the relaxation frequency is 330 MHz at 1.01 times threshold and 5.1 GHz at 3.01 times threshold.
It is of practical interest to study the variation of the low-frequency level of RIN (LF-RIN) with the increase in the injection current, which is plotted in Figure 2; the suppression of noise follows from the increased coherence of the laser signal and the improvement in the signal-to-noise ratio [9]. We also plot in Figure 2 the experimental results on the quantum noise of the InGaN laser reported by Matsuoka et al. [11]. The figure shows that the simulated noise results fit the experimental data in [11] qualitatively. The differences in the noise level between the simulated and experimental results could be due to a mode-competition effect in the measured laser, which may change the noise level relative to the modeled single-mode laser [26].
Comparison of RIN in InGaN and AlGaAs Lasers.
In this subsection, we compare the lasing characteristics of the 410 nm GaInN laser with those of an 830 nm AlGaAs laser diode. For the AlGaAs laser, we consider the following parameter values: 2.7 × 10⁻¹² m², 3.59, 1.89 × 10⁸, 1.63 × 10⁸, 3.95 × 10⁻⁵ s⁻¹, and 2.7 ns. Figure 3 compares the power-current characteristics of the two laser diodes, showing that the laser operation is linear over the relevant range of injection current. The threshold current, which is the intercept of the power-current line with the current axis, increases from 14.2 mA in the AlGaAs laser to 28.2 mA in the InGaN laser. Figure 4 plots the variation of the LF-RIN level with the emitted power for the two laser diodes. The figure shows the effect of noise suppression with the increase in the lasing power, the suppression being stronger in the regime near above threshold. The figure also shows that LF-RIN in the InGaN laser is nearly 9 dB higher than that of the AlGaAs laser. This higher level of the LF-RIN in the InGaN laser is due to the inverse proportionality of RIN to the third power of the emission wavelength [10]. This relation predicts that the quantum noise in the 410 nm InGaN laser is nearly 8.3 dB higher than that in the AlGaAs laser. This result fits the experimental finding by Matsuoka et al. [11] on 410 nm InGaN and 830 nm AlGaAs lasers with almost identical structure design and operation characteristics [11]. It is worth noting that the corresponding variation of the quantum noise level with the ratio of the injection current to the threshold current reveals much smaller differences between both lasers, which also agrees with the experimental results in [11]. Figure 5 compares the simulated RIN spectra of both lasers at an output power of 5 mW. As shown in the figure, the InGaN laser reveals RIN spectra higher than those of the AlGaAs laser. On the other hand, the enhanced resonance peak of the InGaN laser occurs at a relaxation frequency lower than that of the AlGaAs laser. The influence of the nonlinear gain suppression on the quantum noise of the InGaN laser is illustrated in Figure 6 at a current of five times the threshold. The gain suppression is varied by varying the nonlinear-gain coefficient relative to its set value in Table 1. The figure shows that the increase in gain suppression results in a decrease in the LF-RIN level within 1 dB/Hz. The variation was found not to affect the position of the resonance peak of the RIN spectrum (i.e., the relaxation frequency). This suppression in the amplitude of intensity fluctuations agrees with the prediction by Abdulrhmann et al. [27] in InGaAsP lasers.
The spectral gain suppression is a nonlinear effect. In its most exact description, the optical gain is expressed as an infinite perturbation expansion of the field intensity [24]. Yamada and Suematsu [25] showed that, within the normal operation of semiconductor lasers, the gain suppression has partial homogeneous spectral broadening. Therefore, the gain of semiconductor lasers is usually described by the third-order perturbation form in (2) [25]. In semiconductor lasers with high-power emission, the gain suppression becomes inhomogeneously broadened [25] and the infinite gain expansion in [23] is reduced to the form (9) developed by Agrawal [28]. Here, we examine the validity of this gain-suppression form for modeling the noise properties of the present InGaN laser. Replacing form (2) of the partial homogeneous gain suppression with form (9) of the inhomogeneous gain suppression was found not to change the power-current characteristics. In Figure 7, we compare the obtained noise properties of the InGaN laser with those obtained when applying the form in (9). As shown in the figure, the inhomogeneous gain broadening slightly overestimates the RIN level in the regime of high injection currents; the difference is within about 4 dB.
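The contrast between the two descriptions can be illustrated numerically. The exact forms (2) and (9) were not recoverable from the extracted text, so the sketch below uses generic stand-ins: a truncated expansion that is linear in the photon number S, and a saturable Agrawal-type form. All numbers are illustrative assumptions.

```python
import numpy as np

G0  = 1.0                    # unsuppressed (linear) gain, arbitrary units
eps = 2.0e-7                 # suppression coefficient, illustrative
S   = np.logspace(3, 7, 5)   # photon numbers from low to high emission power

third_order = G0 * (1.0 - eps * S)   # truncated, third-order-like suppression
saturable   = G0 / (1.0 + eps * S)   # saturable, Agrawal-type suppression

for s, g3, gs in zip(S, third_order, saturable):
    print(f"S = {s:9.1e}   truncated: {g3:7.3f}   saturable: {gs:7.3f}")
# The two forms agree when eps*S << 1 but diverge at high photon numbers
# (the truncated form can even go negative), which is the high-power regime
# where the paper reports a few-dB difference in the predicted RIN.
```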
Conclusions
We modeled and simulated the noise properties of the blue-violet InGaN laser diodes. We compared the RIN results with those of the AlGaAs lasers since both lasers are the most representative light sources in high-density disc systems. Also, we examined the influence of the gain suppression on RIN. The results showed that the RIN spectra are flat in the low-frequency regime. RIN is suppressed remarkably with the increase in the injection current up to ∼1.6 times the threshold current, beyond which the decrease in LF-RIN becomes within 0.5 dB/Hz. This increase in the injection level is also associated with a shift of the relaxation oscillation peak towards higher frequencies. The simulated noise results are in good agreement with the experimental data reported in the literature. The RIN level in the InGaN laser is nearly 9 dB higher than that of the AlGaAs laser. A major contributor to this higher RIN is the inverse proportionality of RIN to the third power of the emission wavelength. The increase in the gain suppression was found to decrease the quantum noise within 1 dB/Hz. Describing the gain suppression by the form of inhomogeneous gain broadening was found to induce an overestimation of the RIN level of about 4 dB. | 3,418.8 | 2014-07-22T00:00:00.000 | [
"Engineering",
"Physics"
] |
Is maternal thyroid hormone deposition subject to a trade-off between self and egg because of iodine? An experimental study in rock pigeon
Maternal hormones constitute a key signalling pathway for mothers to shape offspring phenotype and fitness. Thyroid hormones (THs; triiodothyronine, T3 and thyroxine, T4) are metabolic hormones known to play crucial roles in embryonic development and survival in all vertebrates. During early developmental stages, embryos exclusively rely on the exposure to maternal THs, and maternal hypothyroidism can cause severe embryonic maldevelopment. The TH molecule includes iodine, an element that cannot be synthesised by the organism. Therefore, TH production may become costly when environmental iodine availability is low. This may yield a trade-off for breeding females between allocating the hormones to self or to their eggs, potentially to the extent that it even influences the number of laid eggs. In this study, we investigated whether low dietary iodine may limit TH production and transfer to the eggs in a captive population of Rock pigeons (Columba livia). We provided breeding females with an iodine-restricted (I- diet) or iodine-supplemented diet (I+ diet) and measured the resulting circulating and yolk iodine and TH concentrations and the number of eggs laid. Our iodine-restricted diet successfully decreased both circulating and yolk iodine concentrations compared to the supplemented diet, but not circulating or yolk THs. This indicates that mothers may not be able to independently regulate hormone exposure for self and their embryos. However, egg production was clearly reduced in the I- group, with fewer females laying eggs. This result shows that restricted availability of iodine does induce a cost in terms of egg production. Whether females reduced egg production to preserve THs for themselves or to prevent embryos from exposure to low iodine and/or THs is as yet unclear.
Introduction
Non-genetic inheritance is defined as the transmission of information between generations beyond coding genes (Danchin et al., 2011). Parental effects are included in this non-genetic inheritance, may be considered adaptive (Moore et al., 2019; Mousseau and Fox, 1998; Yin et al., 2019), and parental effects of maternal origin, i.e. maternal effects, have received increasing attention since the 1990s (Bernardo, 1996; Mousseau and Fox, 1998). Hormones of maternal origin can be transferred to the offspring and constitute a potential pathway for mothers to influence their offspring's phenotype (Groothuis et al., 2019). Hormone allocation to offspring could be costly for mothers as it could induce a trade-off between allocating hormones to their own metabolism or to their offspring's. Steroid hormones, the most studied hormones in the context of hormone-mediated maternal effects, may not be that costly to produce as they are derived from cholesterol, which is abundant in the organism (Groothuis and von Engelhardt, 2005). On the other hand, thyroid hormones (THs) may be considered costly, as their molecular structure includes iodine, a trace element that cannot be synthesised by organisms and must therefore be found in the environment.
Thyroid hormones are metabolic hormones that are present in two main forms: thyroxine (T4), which contains four atoms of iodine, and triiodothyronine (T3), which contains three atoms of iodine. Iodine is concentrated into the thyroid gland and incorporated into tyrosines that will be combined to form T4 and T3 (McNabb and Darras, 2015). The thyroid gland produces mostly T4 and lesser amounts of T3. In the peripheral tissues (e.g. liver, kidney, muscle), T3 is mostly obtained from T4 via removal of an iodine atom by deiodinase enzymes (McNabb and Darras, 2015). TH action mainly depends on TH receptors that have a greater affinity for T3 than for T4 (Zoeller et al., 2007). This is why T4 is mostly seen as a precursor of T3, the biologically active form. THs play important roles in growth, reproduction,
Experimental design
Iodine restricted and supplemented diet
We provided the experimental birds with either an iodine-restricted (I-, n = 19 pairs) or an iodine-supplemented (I+, n = 19 pairs) diet until all eggs and blood samples were collected. Egg collection was ended around three weeks after the initiation of second clutches, leading to a total of approx. 10 weeks (see below for more details). The restricted diet contained 0.06 mg of iodine/kg of food (Altromin™ C1042) and the supplemented diet was the same food supplemented with 3 mg of iodine/kg by the manufacturer. Therefore, both diets had exactly the same composition of all essential micro- and macronutrients except iodine. The restricted treatment corresponds to about 10% of the iodine content in the standard pigeon diet (0.65 mg/kg), and approximately 20% of the minimum iodine requirement for ring doves (0.30 mg I/kg) according to Spear and Moon (1985). In addition, this restricted treatment corresponds to a low-iodine treatment (0.05 mg/kg) used in a previous experiment on Japanese quails (McNabb et al., 1985a) that induced a significant decrease in circulating and yolk iodine. The supplemented treatment (3 mg/kg) corresponds to five times the concentration of iodine in the standard pigeon diet. In McNabb and colleagues (1985a), the maximal dietary iodine (1.2 mg I/kg feed) was ca. eight times the sufficient iodine concentration required for Japanese quails (0.15 mg I/kg feed) and the authors observed no detrimental effects of this high dose. Therefore, we expected no detrimental effect of our supplemented treatment either. Food, water, and grit were provided ad libitum throughout the experiment. We collected both eggs from first and second clutches of females fed with restricted or supplemented iodine diets. We also collected blood samples after clutch completion from the two experimental groups (I- and I+). The second set of samples (i.e. eggs and blood) was collected to test for the effect of exposure duration of the treatment (see timeline below).
Timeline of the experiment
Nest boxes were opened and nesting material was provided two weeks after the experimental diet was introduced to stimulate egg laying. Egg laying usually starts within a week of nest-box opening. Based on Newcomer (1978), who fed hatchling chickens with a low-iodine diet (0.07 mg I/kg feed), we could expect thyroid iodine content to be lowest from 10 days onwards after introducing the experimental diet. The first eggs (i.e. from the first experimental clutches) were collected 3 weeks after the introduction of the experimental diet and were collected over 12 days. On average, the eggs from 1st clutches were laid 26.4 days (SD = 2.9) after the onset of the experimental diet. Freshly laid eggs were collected and replaced by dummy eggs to avoid nest desertion. Second clutches were initiated by removing the dummy eggs approximately 2 weeks after the completion of the first clutch (i.e., 5 weeks after the start of the experimental diet), and eggs were collected over a period of 18 days. On average, the eggs from 2nd clutches were laid 53.5 days (SD = 3.3) after the onset of the experimental diet. We also collected some late first clutches (on average 52.6 (SD = 6.5) days after the onset of the experimental diet).
Egg and blood sample collection
Table 1 summarises the number of samples collected. Eggs and blood samples were collected in the exact same manner in the first and second clutches. Freshly laid eggs were collected and stored in a -20°C freezer. Not all females laid complete clutches of 2 eggs, and several females did not lay an egg at all. Females were captured during incubation in the nest boxes, and blood samples (ca. 400 µl) were taken from the brachial vein after clutch completion (average (SD) = 4 (4.7) days after clutch completion, range = 0-21 days). Unfortunately, we could not blood sample the females that did not lay eggs, as this would have caused serious disturbance to all the birds in the same aviaries, because we had to catch them by hand netting in the large aviary. Half of the blood sample (ca. 200 µl) was taken with heparinised capillaries for plasma extraction (for TH analyses) and stored on ice until centrifugation. The other half of the sample was taken with a sterile 1 ml syringe (BD Plastipak™) and left to coagulate for 30 min at room temperature before centrifugation for serum extraction (for iodine analyses). Previous studies measured iodine in serum samples (McNabb et al., 1985a; McNichols and McNabb, 1987), therefore we decided to measure iodine in the serum for comparable results. Whole blood samples were centrifuged at 3,500 RPM (ca. 1164 G-force) for 5 min to separate the plasma from red blood cells (RBCs), and at 5,000 RPM (ca. 2376 G-force) for 6 min to separate the serum from RBCs. After separation, all samples (plasma, serum and RBCs) were stored in a -80°C freezer for analyses of THs and iodine.
Hormone and iodine analyses in plasma and yolk samples
Eggs were thawed, yolks separated, homogenised in MilliQ water (1:1) and a small sample (ca. 50 mg) was used for TH analysis. Yolk and plasma THs were analysed using nano-LC-MS/MS, following previously described methods. TH concentrations, corrected for extraction efficiency, are expressed as pg/mg yolk or pg/ml plasma. Yolk and serum iodine analyses (ICP-MS, LOD of 3 ng/g of yolk and 1.5 ng/ml of serum) were conducted by Vitas Analytical Services (Oslo, Norway). Yolk iodine was measured in a sample of ca. 1 g of yolk, and serum iodine was measured in a sample of ca. 0.2 ml of serum.
General information
Data were analysed with the software R version 4.0.2 (R Core Team, 2020). To test for the effect of iodine restriction on egg laying, we compared the number of females that laid first clutches in both groups, and the total number of eggs laid in first clutches, with two Pearson's chi-squared tests. The remaining response variables were fitted with linear mixed models (LMMs) using the R package lme4 (Bates et al., 2015), and p-values of the predictors were calculated by model comparison using the Kenward-Roger approximation with the package pbkrtest (Halekoh and Højsgaard, 2014). The response variables were plasma TH concentrations (T3, T4), serum iodine, and concentrations of yolk THs and yolk iodine. Relevant interactions between predictors were tested by adding them one by one to the models with main effects only. Post-hoc tests of interactions were performed with the package phia (de Rosario-Martinez, 2015). Model residuals were inspected for normality and homogeneity with the package DHARMa with 1,000 simulations (Hartig, 2020). When either of the assumptions was violated, the response was ln-transformed (see Tables) and in these cases the model residuals showed the required distributions. Estimated marginal means (EMMs) were calculated from the models using the package emmeans (Lenth, 2019). When the response was transformed, the EMMs were calculated on the back-transformed data.
Although the treatment started for all females on the same date, each female, at the time of egg laying, had been exposed to the experimental diet for a different duration because of the varying laying dates between females. This may influence the effects of the iodine manipulation on circulating and yolk iodine and THs. However, because the second clutches were laid after a longer exposure to the treatments than were first clutches (average exposure duration (SD): 2nd clutches = 53.5 (3.3) days; 1st clutches = 30.9 (10.6) days), clutch order and exposure duration (i.e. the number of days between the onset of the experimental diet and laying date) were confounded. Therefore, we used two different models, one with exposure duration and another one with clutch order. We initially controlled for egg order (i.e. first or second egg in a clutch) in our models for yolk THs, since a previous study in rock pigeons showed a non-significant trend for higher yolk T3 concentrations in second eggs (Hsu et al., 2016). However, we detected no such effect in our models (all F < 0.57, all p-values > 0.45) and thus egg order was excluded from the final models.
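The analyses above were run in R with lme4, pbkrtest, phia, DHARMa and emmeans. The sketch below is only a rough Python analogue using statsmodels, with hypothetical column and file names, to illustrate the model structure (treatment and exposure duration as fixed effects, female identity as a random intercept); it does not reproduce the Kenward-Roger correction used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per egg with columns
#   yolk_iodine, treatment ('I-'/'I+'), exposure_days, clutch_complete, female_id
df = pd.read_csv("yolk_iodine.csv")                 # hypothetical file name
df["ln_yolk_iodine"] = np.log(df["yolk_iodine"])    # ln-transform, as in the paper

# Main-effects model (female identity as random intercept) ...
m_main = smf.mixedlm("ln_yolk_iodine ~ treatment + exposure_days + clutch_complete",
                     data=df, groups=df["female_id"]).fit(reml=True)
# ... and the same model with the treatment x exposure-duration interaction added.
m_int = smf.mixedlm("ln_yolk_iodine ~ treatment * exposure_days + clutch_complete",
                    data=df, groups=df["female_id"]).fit(reml=True)

print(m_main.summary())
print(m_int.summary())
# Note: statsmodels has no Kenward-Roger approximation; in this sketch the interaction
# would have to be judged from its estimate and standard error (or from ML-based model
# comparison), which is only a rough substitute for the approach used in the paper.
```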
Model specification
Circulating iodine (ln-transformed) and T3 and T4 concentrations were analysed by fitting an LMM with treatment (I- or I+), exposure duration, completeness of a clutch as a categorical variable (complete or incomplete, i.e., to further test for the effect of number of eggs laid), and the two-way interactions between treatment and exposure duration or completeness as fixed factors. Female identity (for iodine and T4, but not T3 because of singularity: variance estimate collapsed to 0) and hormone extraction batch (for T3 and T4) were added as random intercepts.
Yolk components (iodine, T3, T4) were analysed in two different sets of models. The first set of models, for yolk iodine only, compared untreated eggs (collected before the start of the treatment) to the eggs collected in the two treatments (I+ and I-). This way, we could test whether yolk iodine differed between untreated and experimental eggs. Here we only used experimental first clutches as only one clutch of eggs per untreated female was collected. The model only included treatment (a three-level categorical predictor: untreated, I+ and I-) as the predictor.
The second set of models tested the effect of iodine treatment, exposure duration to the treatment or clutch order on yolk iodine and THs. Here we included both 1st and 2nd clutches from the iodine treatments, but no eggs from untreated females. Yolk iodine (ln-transformed) was first analysed in an LMM that included treatment as a categorical variable, exposure duration (days since the start of the experiment), completeness of a clutch (complete or incomplete) and the two-way interactions between treatment and exposure duration or completeness, with female identity as a random intercept. This LMM somewhat violated the assumption of homogeneity of variances between the groups because of the larger variance in yolk iodine in the I+ group. Nevertheless, such a violation should not undermine our results as a recent paper demonstrated that LMMs are fairly robust against violations of distributional assumptions (Schielzeth et al., 2020). Yolk iodine was also analysed in a second model that included treatment, clutch order (1st or 2nd clutch, categorical variable) and their interaction as the predictors. Yolk T3 and T4 (ln-transformed) were analysed using the same models (with exposure duration or clutch order) as for yolk iodine. Hormone extraction batch was added as a random intercept for yolk T3 and T4.
Circulating iodine and TH concentrations
In line with our expectations, there was a clear effect of iodine treatment on circulating iodine concentrations: serum iodine was about four times lower in the I- group than in the I+ group (raw data average (SE): I- = 11.1 (1.2) ng/ml serum, I+ = 44.0 (4.7) ng/ml serum; Table 2, Fig. 1). The effects of clutch completeness, exposure duration and their interactions with treatment on serum iodine were statistically unclear (Table 2). Plasma T3 was not affected by supplementation or restriction of iodine, nor by exposure duration (Table 2; Fig. 2A). There was an almost statistically significant interaction between iodine treatment and exposure duration of the treatment on plasma T4 (Table 2; Fig. 2B). Yet, the large confidence intervals warrant due caution in interpreting this trend. There were no clear effects of clutch completeness and its interaction with treatment on plasma THs (Table 2).
Discussion
In this study we tested whether dietary iodine limits mothers' circulating TH concentration, TH transfer to the yolk, and egg production. To our knowledge, our study is the first to investigate the potential trade-off between circulating and yolk THs induced by low dietary iodine. We found that fewer females laid first clutches under the iodine-restricted diet compared to the females under the iodine-supplemented diet, resulting in a lower total number of eggs laid. We found that the iodine-restricted diet decreased circulating and yolk iodine levels, though circulating and yolk THs were unaffected. Longer exposure to restricted iodine had no clear effect on circulating or yolk iodine and THs. Finally, we observed a slight increase in plasma T4 and in yolk T3 across time that was unrelated to the dietary iodine and is likely explained by seasonal changes or clutch order effects. Yet, because exposure duration to the treatment and clutch order are partly confounded, our experimental design does not allow us to fully disentangle both variables.
Does restricted iodine induce a cost and a trade-off between circulating and yolk THs?
Our iodine-restricted diet successfully decreased circulating iodine concentrations compared with the supplemented diet. Despite this effect, we observed no differences in circulating TH concentrations. This is consistent with a previous study that showed that Japanese quails under limiting iodine availability maintained normal circulating TH concentrations (McNabb et al., 1985a). However, a similar study on ring doves found a decrease in circulating T4 concentrations with no changes in circulating T3, suggesting increased peripheral conversion from T4 to T3 to maintain normal T3 levels (McNichols and McNabb, 1987). The causes of the discrepancies between our study and the dove study (McNichols and McNabb, 1987) are not clear. One potential explanation is that, in our study, we could only sample the females that laid eggs and thus apparently managed to maintain normal circulating THs despite restricted iodine, whereas those that did not lay eggs may have suffered from low circulating TH concentrations.
In the yolk, as in the circulation, restricted dietary iodine decreased yolk iodine but not yolk TH concentration, in contrast to our predictions. The result of decreased yolk iodine is in line with a previous study on quails, which found that mothers fed with low dietary iodine produced eggs with low iodine but did not report yolk TH concentrations (McNabb et al., 1985a). Low egg iodine concentration in turn disturbs thyroid function in embryos and hatchlings (McNabb et al., 1985b; Stallard and McNabb, 1990). Circulating TH concentrations of embryos, however, were not affected by low egg iodine (McNabb et al., 1985b; Stallard and McNabb, 1990).
Contrary to previous studies that manipulated dietary iodine, we found that limited iodine availability hampered egg production, with 40% fewer females producing eggs in the I- group compared to the I+ group. However, females that managed to maintain normal circulating THs were also able to lay eggs with normal yolk THs, similar to the study by McNabb and colleagues (1985a). At the moment it is unclear why some females were affected and others not, but a potential explanation may be, for example, individual differences in the ability to store iodine. Two other studies found that administration of methimazole, a TH-production inhibitor, ceased egg laying in Japanese quails (Wilson and McNabb, 1997), and reduced egg production in chickens (Van Herck et al., 2013). These results suggest that our restricted diet might have induced hypothyroidism in some females, thus preventing them from laying eggs.
Therefore, we did not show evidence for a cost of restricted iodine in females that managed to lay eggs. Those females did not appear to face any trade-off between allocating iodine and THs to either self or their eggs. Yet, 40% of the females under the restricted diet paid a cost in terms of egg production. Whether this effect is due to limited production of THs is as yet unclear, but these females may have faced a trade-off between maintaining normal circulating THs or yolk TH deposition.
Is there a regulatory mechanism to cope with the cost of restricted iodine and its associated trade-off?
The absence of an effect of restricted iodine on circulating and yolk THs suggests that mothers are not able to regulate yolk TH deposition independently from their own circulating THs, as recently proposed by Sarraude et al. (2020b). This is contradictory to previous studies showing evidence of independent regulation (Van Herck et al., 2013; Wilson and McNabb, 1997). However, the latter two studies induced supraphysiological hypo- or hyperthyroidism in the birds, which may explain such discrepancies.
Interestingly, we found that our restricted diet reduced egg production. As discussed above, this may be due to a hypothyroid condition that prevented females from laying eggs. This may have evolved to protect embryos from exposure to too low iodine and/or TH concentrations. In breeding hens, restricted dietary iodine can decrease egg hatchability (Rogler et al., 1959; Rogler et al., 1961b) and retard embryonic development (McNabb et al., 1985b; Rogler et al., 1959; Rogler et al., 1961b), and can induce thyroid gland hypertrophy in embryos and hatchlings (Rogler et al., 1961a; McNabb et al., 1985b; but see Stallard & McNabb, 1990). This may suggest that females may be able to regulate egg production when iodine availability is low. Overall, our results suggest that mothers in the restricted group appear to prioritise self-maintenance and offspring quality over offspring quantity.
Restricted iodine and trade-offs in wild populations
Our low-iodine diet (0.06 mg I/kg food) is comparable to what birds may sometimes experience in the wild. Although relevant data are scarce, estimates of the iodine content in food items such as barley and maize grains, wheat, or rye are highly variable, ranging from 0.06 to 0.4 mg I/kg (Anke, 2004). Insectivorous species may also encounter iodine deficiency as the iodine content in insects varies from <0.10 up to 0.30 mg I/kg (Anke, 2004). As such low iodine availability can also be found in the wild, it is therefore relevant to study whether mothers may face trade-offs in iodine or TH allocation during the breeding season, or whether it influences egg laying itself. Yet, our study did not show evidence for the existence of a trade-off between circulating and yolk THs when environmental iodine is limited. Nevertheless, mothers may face a trade-off between allocating resources to themselves or producing eggs of sufficient quality.
In conclusion, we found that restricted dietary iodine did not decrease circulating or yolk THs despite reduced circulating and yolk iodine. Nevertheless, we found evidence that restricted availability of iodine induces a cost on egg production. Thus, mothers may not be able to regulate yolk TH transfer, but may be able to regulate egg production when facing limited iodine. Our results also indicate that females under limited iodine availability may prioritise their own metabolism over reproduction, or avoid exposing their offspring to detrimentally low iodine and/or THs. These explanations serve as interesting hypotheses for future research to further explore the consequences of limited iodine in wild populations.
Table notes: 6 females could not be captured and sampled (1 in the I- and 5 in the I+ group). The LMM on plasma T4 included batch of hormone extraction and female identity as random intercepts. The LMM on plasma T3 only included batch as the random intercept. The LMM on serum iodine only included female identity as the random intercept. P-values and estimates (SE) for main effects were calculated from a model without the interaction. Interactions were added one by one and tested by comparison with the model without any interactions. Yolk iodine and T4 were ln-transformed to achieve homogeneity of the model residuals. Ndf = 1. See Table 1 for sample sizes. Two different LMMs (separated by the dashed line) were computed, one with exposure duration and the other one with clutch order. The coefficient for treatment is only presented in the first model as there were no differences with the second model. LMMs on yolk THs included batch and female identity as random intercepts. The LMM on yolk iodine only included female identity as the random intercept. P-values and estimates (SE) for main effects were calculated from a model without the interaction. Interactions were added one by one and tested by comparison with the model without any interactions. Yolk iodine and T4 were ln-transformed to achieve homogeneity of the model residuals. Ndf = 1. See Table 1 for sample sizes.
Figure 1: Circulating iodine in rock pigeon females treated with an I- or I+ diet. Black lines and shadow areas represent average and s.e.m. values within each group, and grey dashed lines connect blood samples from the same females. Some females were only captured once, hence not all dots are connected. See Table 1 for sample sizes.
Figures
Figure 2: Circulating T3 (A) and T4 (B) in rock pigeon females treated with an I- or I+ diet. Red dots refer to blood samples collected after the first clutches and blue dots refer to blood samples collected after the second clutches. Black lines and shadow areas represent average and s.e.m. values within each group, and grey dashed lines connect blood samples from the same females. Some females were only captured once, hence not all dots are connected. See Table 1 for sample sizes.
Figure 4: Yolk iodine in eggs from 1st and 2nd clutches laid by rock pigeon females treated with an I- or I+ diet. Eggs from the same female and the same clutch were averaged. Red dots refer to eggs from the first clutches and blue dots refer to eggs from the second clutches. Black lines and shadow areas represent average and s.e.m. values within each group, and grey dashed lines connect eggs from the same females. Some females did not lay two clutches, hence not all dots are connected. See Table 1 for sample sizes.
List of symbols and abbreviations
I: iodine | 6,188 | 2021-01-04T00:00:00.000 | [
"Biology"
] |
Exercise training attenuates renovascular hypertension partly via RAS- ROS- glutamate pathway in the hypothalamic paraventricular nucleus
Exercise training (ExT) has been reported to benefit hypertension; however, the exact mechanisms involved are unclear. We hypothesized that ExT attenuates hypertension, in part, through the renin-angiotensin system (RAS), reactive oxygen species (ROS), and glutamate in the paraventricular nucleus (PVN). Two-kidney, one-clip (2K1C) renovascular hypertensive rats were assigned to sedentary (Sed) or treadmill running groups for eight weeks. Dizocilpine (MK801), a glutamate receptor blocker, or losartan (Los), an angiotensin II type 1 receptor (AT1-R) blocker, was microinjected into the PVN at the end of the experiment. We found that 2K1C rats had higher mean arterial pressure (MAP) and renal sympathetic nerve activity (RSNA). These rats also had excessive oxidative stress and overactivated RAS in the PVN. Eight weeks of ExT significantly decreased MAP and RSNA in 2K1C hypertensive rats. ExT inhibited angiotensin-converting enzyme (ACE), AT1-R, and glutamate in the PVN, and angiotensin II (ANG II) in the plasma. Moreover, ExT attenuated ROS by augmenting copper/zinc superoxide dismutase (Cu/Zn-SOD) and decreasing p47phox and gp91phox in the PVN. MK801 or Los significantly decreased blood pressure in rats. Together, these findings suggest that the beneficial effects of ExT on renovascular hypertension may be, in part, through the RAS-ROS-glutamate pathway in the PVN.
(ACE) within the PVN 13 . Recent studies indicate that RAS in the PVN exerts its actions mainly via interaction with AT1-R and ACE, thereby contributing to sympathoexcitation and the hypertensive response in hypertension 13 . Our study, along with others, has shown that AT1-R in the PVN induces mitochondrial dysfunction and produces excessive amounts of ROS in peripheral angiotensin II (ANG II)-induced hypertensive rats 14,15 . Glutamate is a well-known excitatory neurotransmitter, which participates in regulating neuronal excitation in the central nervous system (CNS). Neuronal activity in the PVN is regulated by glutamate and other excitatory neurotransmitters 16,17 . Previous studies show that oxidative stress contributes to modulating glutamatergic output in the PVN in hypertensive rats 18 . These data suggest that RAS induces transcription of ROS-producing genes, which leads to further glutamatergic output, and eventually to accelerated progression of hypertension.
PVN is a key site for the central control of sympathetic outflow and a predominant region for coordinating nervous system signals that regulate blood pressure, which plays a crucial role in renovascular hypertension 10,19 . Few studies on ExT in 2K1C hypertension models have focused on RAS, ROS, or glutamate within the PVN. Here, we test the hypothesis that ExT decreases blood pressure in renovascular hypertensive rats. Furthermore, we hypothesize that the favourable effect of exercise will be, in part, associated with RAS, ROS, and glutamate within the PVN of renovascular hypertensive rats.
Methods
Animal care. Experiments were performed in male Sprague-Dawley rats (eight weeks old and weighing 180-210 g). All rats were housed in a condition-controlled (21-23 °C, with the lights on from 7 pm to 7 am) room. They were permitted free access to standard rat chow and tap water. The rats were treated in accordance with the principles of the National Institutes of Health Guide for the Care and Use of Laboratory Animals (the US National Institutes of Health Publication No. 85-23, revised 1996). All protocols were approved by the Animal Care and Use Committee at Xi'an Jiaotong University.
Renal artery clipping. Eight-week-old rats were anesthetized with xylazine (10 mg/kg) and ketamine (90 mg/kg) through intraperitoneal (i.p.) injection. Then, the rats were secured on the operating table, a right-flank incision was made in the abdomen, a silver clip (0.2 mm) was placed around the right renal artery, and then the flank incision was closed. Sham-clipped (Sham) rats underwent identical surgery without the silver clip. At the end of surgery, each rat received butorphanol tartrate (0.2 mg/kg subcutaneously) as an analgesic and penicillin to prevent infection 19,20 . Exercise training. Four or five days after sham or renal artery clipping, the rats were randomly assigned to four groups: 2K1C + ExT group, 2K1C + sedentary (Sed) group, SHAM + ExT group, and SHAM + Sed group. The rats in the ExT groups were assigned to an eight-week exercise protocol (16 m/min, 50 min/d, and 5 d/wk).
Measurement of mean arterial pressure (MAP).
Blood pressure was measured via a tail-cuff occlusion instrument and recording system, as described previously 21 . MAP data were averaged from 10 different measurements, which were collected either between 8 and 11 am or between 2 and 4 pm every week. After eight weeks of ExT or Sed, rats were anaesthetized using a ketamine (90 mg/kg) and xylazine (10 mg/kg) mixture (i.p.). An incision along the blood vessels was made in the thigh near the groin, and the femoral artery was cannulated with polyethylene catheters to measure MAP. MAP readings were collected for 30 min and averaged. Subsequently, after the bilateral PVN microinjection of dizocilpine (MK801) or losartan (Los), MAP was measured again.
Sympathetic neural recordings and PVN microinjections. Analysis of rectified and integrated renal sympathetic nerve activity (RSNA) and PVN microinjections were carried out as described previously 1,9,21 . Briefly, rats were anaesthetized using ketamine (90 mg/kg) and xylazine (10 mg/kg) and secured in a stereotaxic apparatus; then, cannulas were implanted bilaterally into the PVN, the coordinates for which were 1.8 mm posterior and 0.4 mm lateral to the bregma, and 7.9 mm ventral to the zero level. The micropipette was filled with MK801 or Los. Continuous recordings of RSNA were taken for at least 60 min after bilateral injection of MK801 or Los into the PVN.
Collection of blood and tissue samples. At the end of the experiment, rats were anaesthetized for recording RSNA and MAP, and for collecting blood and brain tissue. Isolation of the PVN tissue from the brain using Palkovits's microdissection procedure was performed as previously described 12,21,22 . Plasma and tissue samples were stored at −80 °C for molecular and immunofluorescence analysis. Superoxide anion levels in the PVN were determined by fluorescent-labelled dihydroethidium (DHE, Molecular Probes) staining. Brain sections (14 μm) were incubated with 1 mmol/L DHE at 37 °C for 10 min as previously described 1 . Sections were imaged using a Nikon epifluorescence microscope.
Measurement of glutamate levels in the PVN.
High-performance liquid chromatography with electrochemical detection (HPLC-EC) was used for measuring the level of glutamate in the PVN, as described previously 23 .
Biochemical assays. The level of ANG II in plasma was quantified using commercially available rat ELISA kits (Invitrogen) according to the manufacturer's instructions.
Statistical analysis. All data were analysed by ANOVA, followed by a post-hoc Tukey test. Blood pressure data were analysed by repeated-measures ANOVA. All data are expressed as mean ± standard error (SE). Differences between mean values were considered statistically significant when the probability value was smaller than 0.05 (P < 0.05).
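A minimal sketch of this analysis workflow is given below; it is not the authors' code, the column and file names are hypothetical, and the repeated-measures analysis of the weekly blood-pressure data would additionally need a within-subject (time) factor not shown here.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data layout: one row per rat with columns 'group'
# (SHAM+Sed, SHAM+ExT, 2K1C+Sed, 2K1C+ExT) and 'map_mmHg' (MAP at the end of week 8).
df = pd.read_csv("map_week8.csv")   # hypothetical file name

# One-way ANOVA across the four groups
groups = [g["map_mmHg"].values for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey test at alpha = 0.05, matching the paper's significance threshold
print(pairwise_tukeyhsd(df["map_mmHg"], df["group"], alpha=0.05))
```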
Effects of ExT on MAP in renovascular hypertensive rats.
2K1C hypertensive rats showed a significant increase in MAP compared with control rats. MAP remained elevated throughout the eight weeks of the study. Treatment with ExT reduced MAP in 2K1C hypertensive rats (Fig. 1). However, ExT did not change MAP in SHAM + Sed and SHAM + ExT rats. Effects of PVN microinjection of MK801 or Los on MAP. PVN microinjection of MK801 or Los decreased MAP in both the 2K1C hypertensive and the 2K1C ExT-hypertensive groups. This suggests that PVN microinjection of MK801 or Los attenuates renovascular hypertension. Notably, PVN microinjection of Los produced a lower MAP than PVN microinjection of MK801 in renovascular hypertensive rats (Table 1).
Effects of eight weeks of ExT or PVN microinjection of MK801 or Los on renal sympathetic nerve activity (RSNA).
RSNA was increased in 2K1C rats compared with that in SHAM rats. ExT treatment with PVN microinjection of MK801 or Los attenuated RSNA of 2K1C rats. PVN microinjection of Los produced lower RSNA (% of max) compared with PVN microinjection of MK801 in 2K1C rats (Fig. 2). These results suggest that ExT attenuates RSNA, in part, through a decrease of glutamate in renovascular hypertensive rats.
Effects of ExT on RAS in renovascular hypertensive rats. Immunofluorescence revealed that renovascular hypertensive rats had a significant increase of ACE and AT1-R expression in the PVN compared with SHAM rats. ExT decreased ACE- and AT1-R-positive neurons in the PVN and decreased plasma levels of ANG II in renovascular hypertensive rats (Figs 3, 4 and 5a). ELISA analysis revealed that 2K1C rats had a higher level of ANG II in the plasma compared with SHAM rats (Fig. 5a). Western blotting showed that renovascular hypertension up-regulated the expression of AT1-R compared with that in SHAM rats. ExT also attenuated the expression of AT1-R (Fig. 5b and 5c), suggesting that ExT attenuates RAS activation in renovascular hypertension.
Effects of ExT on oxidative stress in the PVN of renovascular hypertensive rats. Immunofluorescence revealed that renovascular hypertensive rats had a significant increase in the expression of p47phox, gp91phox, and DHE in the PVN compared with SHAM rats. ExT decreased the numbers of p47phox-, gp91phox-, and DHE-positive neurons in hypertensive rats (Figs 6 and 7). Western blotting indicated that 2K1C hypertensive rats had lower levels of Cu/Zn-SOD and higher levels of p47phox compared with SHAM rats. ExT enhanced the expression of Cu/Zn-SOD and decreased the expression of p47phox (Fig. 8). This suggests that ExT attenuates oxidative stress in renovascular hypertension.
Effects of ExT on glutamate in the PVN of renovascular hypertensive rats.
We measured glutamate, a vital excitatory neurotransmitter, in the PVN by HPLC. These results revealed that 2K1C rats had higher levels of glutamate in the PVN compared with SHAM rats. This suggests that increased glutamate in the PVN contributes to elevated levels of blood pressure in renovascular hypertension. ExT attenuated the level of glutamate in the PVN compared with that in SHAM rats (Fig. 9). This suggests that ExT significantly decreases the level of glutamate in the PVN in renovascular hypertensive rats.
Discussion
The novel findings of the present study are as follows: (1) ExT inhibited MAP and RSNA by attenuating ROS, RAS, and the excitatory neurotransmitter glutamate; (2) microinjection of MK801 or Los into the PVN decreased MAP and RSNA. We conclude that ExT decreases blood pressure in renovascular hypertensive rats, and this favourable depressor effect is associated with the RAS-ROS-glutamate pathway within the PVN. Previous studies have reported that ExT is capable of decreasing blood pressure in hypertension and has been recommended as an effective nonpharmacological treatment for hypertension [23][24][25][26] . It has also been reported that neuronal plasticity plays an important role in the central regulation of ExT in hypertension 27 . Up-regulated glutamatergic outflow in the PVN contributes to an increase in blood pressure and sympathetic outflow in hypertensive rats 28,29 . In the present study, we confirmed that 2K1C rats have higher levels of glutamate in the PVN and that ExT significantly decreases the levels of glutamate in the PVN in renovascular hypertensive rats. This indicates that ExT is capable of attenuating the increased tonically active glutamatergic output in the PVN; however, the detailed mechanisms of the effect of ExT on glutamatergic levels in the PVN of renovascular hypertensive rats have not been firmly established.
One possibility is that ExT reduces oxidative stress in the PVN. It has been demonstrated that increased oxidative stress in the PVN contributes to sympathetic overactivity in hypertension 30,31 . Here, we found that ExT reduced the expression of p47phox and gp91phox, but increased the expression of Cu/Zn-SOD in the PVN. We also showed that ExT reduced glutamatergic output in the PVN in renovascular hypertension by reducing oxidative stress.
In addition, our data suggest that ExT reduces glutamatergic output in the PVN in renovascular hypertension via reduction of oxidative stress, which is mediated by RAS. In a previous study, Li et al. found that renovascular hypertension is closely related to RAS activation in the PVN 10 . AT1-R in the PVN induces over-production of ROS in rats with heart failure 32 . In our study, we found that 2K1C rats had excessive oxidative stress and over-activated RAS in the PVN. ExT significantly inhibited ACE, AT1-R, and glutamate in the PVN, and ANG II in plasma. ExT also attenuated ROS by augmenting Cu/Zn-SOD and decreasing p47phox and gp91phox in the PVN. However, some studies have reported that central proinflammatory cytokines (PICs) and nuclear factor-κB are also involved in the production of ROS 33,34 . In our research, we observed that microinjection of MK801 or Los into the PVN decreased MAP and RSNA in renovascular hypertensive rats. PVN microinjection of Los produced lower MAP and RSNA compared with that of MK801 in renovascular hypertensive rats. These results suggest that ExT attenuates hypertension, in part, through the RAS-ROS-glutamate pathway in renovascular hypertensive rats.
In summary, our data demonstrate that renovascular hypertension alters the RAS-ROS-glutamate pathway in the PVN and increases tonically active glutamatergic input in the PVN, which partly leads to an increase in MAP and RSNA. This indicates that ExT attenuates hypertension partly through the RAS-ROS-glutamate pathway in renovascular hypertensive rats. Our findings provide further evidence and insight into the beneficial effect of ExT on renovascular hypertension. | 3,028.2 | 2016-11-24T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Experimental Simulation-Based Performance Evaluation of an SMS-Based Emergency Geolocation Notification System
In an emergency, a prompt response can save the lives of victims. This makes prompt response an imperative issue in emergency medical services (EMS). Designing a system that simplifies locating emergency scenes is a step towards improving response time. This paper therefore implements and evaluates the performance of an SMS-based emergency geolocation notification system, with emphasis on its SMS delivery time and the system's geolocation and dispatch time. Using the RAS metrics recommended by IEEE for evaluation, the designed system was found to be efficient and effective, as its reliability ranged from 62.7% to 70.0% while its availability stood at 99% with a downtime of 3.65 days/year.
Introduction
Mobile health (M-Health) applications, as a subset of electronic health (eHealth), are expected to make use of technologies that address healthcare challenges such as response time, access, quality, affordability, matching of resources, and behavioral norms through the exchange of information in reasonable time [1]. Mobile technologies cannot physically carry drugs, doctors, and equipment between locations, but they can carry and process information: coded data, text, images, audio, and video [2]. The various life-threatening issues arising from delayed response to emergency scenes cannot be overlooked. Delays that occur while trying to locate scenes during emergency and accident cases can lead to loss of lives. During emergency calls, the call taker tries to get the location of an emergency scene from the caller, and this can cause delay or even yield inaccurate geographical information. Inaccurate geographical information is given particularly if the victim is at an unfamiliar location and/or when the immediate bystander has a speech or hearing impairment.
Response time is a common measure in benchmarking the efficacy of emergency services. It is the amount of time it takes for emergency responders, such as ambulance service units or fire fighters, to arrive at the scene of an incident after the emergency response system has been activated [3]. Due to the nature of emergencies, fast response time is often a crucial component of the emergency service system. The introduction of new response time performance standards and call prioritization has also resulted in significant changes for ambulance services and the way the system is operated. There are general response time standards in many jurisdictions throughout Europe and the Americas [4,5]. In achieving this, the notification system and its other components must be highly effective and efficient in terms of timeliness [6].
Emergency medical service (EMS) response providers who are also first responders in emergencies are usually operated by hospitals, private ambulance companies, fire service departments, and the police. Automobiles are also being equipped with telematics systems that automatically open up a voice call and provide valuable crash data when a car is involved in an accident [7,8]. All of these are in an effort to initiate an efficient notification and alert system in emergencies. This is why this study is based on implementing and evaluating the performance of an SMS-based geolocation notification system aimed at improving the response time and efficiency of ambulatory services in cases such as locating accident and emergency scenes.
The objectives of this study are to model an SMS-based emergency geolocation notification system by generating an architecture diagram, to implement the designed system, and to evaluate its performance using the reliability, availability, and serviceability (RAS) parameters recommended by the Institute of Electrical and Electronics Engineers (IEEE).
The rest of this paper is structured as follows: Section 2 is the review of related works in the literature. In Section 3, the system's design is discussed and the model diagram of the proposed system is presented. In Section 4, the system implementation is discussed as well as the evaluation of the system's performance through case selection and comparative analysis. We also discuss in this section how data was collected and analyzed as well as the interpretation of the results. Section 5 concludes the paper and gives suggestions for future work.
Literature Survey
SMS is widely used and universally accessible, since even the simplest phone supports it [9,10]. It has found its way into medicine for delivering mobile-based health interventions, especially during emergencies. A number of works in the literature address the design of SMS notification systems for use during emergencies. In a study by Kumar and Rahman [11], the SMS functionality of a mobile phone was used in the design of a wireless health recording and alert system for use by the elderly or athletes. It was designed to contain a sensor, base, and server unit. In our proposed system, the user is the sender of the precoded emergency SMS, but in this reviewed work, the system is the sender of the SMS as an alert signal to caregivers based on decisions made from data received from a sensor. In a nutshell, it does not possess the capabilities that would make it useful as an emergency notification system unless a specialized sensor capable of sensing accidents is utilized. In another study by Kim et al. [12], SMS was used in the development of a personalized text message service program based on lifestyle questionnaires and data from an electronic health record. It was designed specifically to create an SMS service for weight loss in obese patients. Even though it uses SMS, this service is unsuitable for emergency notification. The Mobile for Reproductive Health (m4RH) program in Kenya also utilizes SMS [13]. The application provides contraception information to young people in Kenya. To engage the system, a user is expected to send precommunicated codes by SMS to the m4RH system. The system automatically responds with related information in a concise format of messages per respective code. The system also provides a database of available clinics searchable by users. However, this system is unsuited for emergency situations, since it is not equipped with the capacity to determine the geolocation of the sender of the SMS containing precommunicated codes.
The work done in [14], termed emergency SMS, focuses on establishing communication with an emergency center and providing brief and useful information about the patient via SMS, as against the ability of the proposed model to transmit the geolocation of emergency scenes to an ambulance point. In execution, the patient must first configure the application with his or her personal information, a brief medical history, the emergency phone number, and the mobile phone numbers of the family doctor and one or more relatives (the initiated SMS is sent to these people in cases of emergency). This information is then saved in a text file on the mobile phone. The program, when activated or triggered during an emergency (by pressing a precoded key on the mobile phone for a few seconds), automatically sends the preconfigured text file to all the mobile numbers saved in the text file. The present location of the mobile phone sending the message is determined through the global positioning system (GPS); this functionality makes it suitable for use during emergencies.
In a study by Vetulani et al. [15], SMS was used to design a crisis situation management system named POLINT-112-SMS. The system is intended to support information management and to assist a human in decision making during emergency situations. Its architecture consists of an SMS gateway, a natural language processing module, a situation analysis module (SAM), and a dialogue maintenance module (DMM). In crisis situations, the user initiates the system and sends the crisis report by SMS; this is received as text input by the system, which then gathers, processes, and interprets the received information. The interpretation of this information notifies the authorized personnel to get to work. This system was designed specifically for use in noisy or unsecured environments, where the use of SMS is seen as most appropriate.
In a study by Omoregbe and Azeta [16], a mobile-based medical alert system is presented for managing diseases where adherence to medication is crucial for effective treatment. The system automatically alerts the patient and medical practitioners about information and emergencies via SMS. It also allows users to receive scheduled appointments and medication updates that will facilitate the treatment process. The system however does not provide any geolocation functionality and is unsuitable for reporting emergencies as it is mainly for health information dissemination from health personnel to patients.
Hameed et al. in their study [17] proposed the Medical Emergency and Healthcare Model (MEHM DESIGN). It is enhanced with SMS and MMS facilities and focuses on incorporating real-time, mobile-technology medical emergency systems with location-based access to the emergency scene. It has the design capacity to utilize both SMS and MMS functionalities. Our concern here is the SMS module of the model, which comes into play when a user in need of medical attention is unaware of the nearest health center. The SMS module was designed to provide three key features and is available only to SMS-capable devices. The SMS module was also designed to be responsible for receiving SMS requests for the nearest healthcare center, locating the nearest healthcare center based on input provided by the user, and sending an SMS providing information on the nearest healthcare center to the requester, as shown in Figure 1.
The E-911 (both phases I and II) [18] was an improvement of the basic 911; this system tries to automatically associate a location with the origin of the call. The caller's telephone number is used in various ways to derive a location that can be used to dispatch police, fire, emergency medical, and other response resources. Automatic location of the emergency makes it quicker to locate the required resources during fires, break-ins, kidnappings, and other events where communicating one's location is difficult or impossible. In a bid not to slow down response times, wireless E-911 and VoIP-911 were deployed and have helped to overcome the challenge of locating callers by transmitting longitude and latitude information based on the location of the caller's mobile device to the 911 center. The location of cellular callers is determined either by the GPS device within the phone itself or through a network solution that employs triangulation.
The existing systems reviewed in this section all lack the ability to automatically transmit geolocations via SMS when using a non-location-based-service (LBS) mobile device; this is what distinguishes our proposed system from them. In summary, the major reason for designing this system is to reduce the time taken by ambulance teams to arrive at emergency scenes by combining SMS and geolocation in the system design. This approach will help to improve response time to accident scenes to save lives and also to bring quick medical help to patients suffering life-threatening ailments, especially in developing countries of the African continent where mobile phones are very common but the use of the Internet and GPS is limited. This feature (use of SMS with geolocation triangulation on the GSM network through mobile phones for emergency services) makes our system unique in comparison to the existing systems.
Records obtained from the Federal Road Safety Corps (FRSC), Nigeria, in 2009 state that about 4,120 deaths were recorded, with 20,975 seriously injured persons, in road transport accidents involving about 11,031 vehicles across Nigeria. In 2008, the commission stated that about 11,341 road transport accidents occurred, claiming a total of 6,661 lives and injuring 27,980 persons. From January to June 2010, road transport accidents amounted to 5,560 cases, with 3,183 deaths and 14,349 injuries [19]. The uniqueness of the design lies in the aforementioned facts; other similar systems failed to consider the peculiar challenges witnessed in third world countries like those in Africa when it comes to reporting accidents or calling for help. These terrains have sparsely spread base stations for communication, which limits most forms of communication other than SMS. Also, ICT infrastructures are underdeveloped, which limits the use of GPS systems, and the safety of patient data being transmitted within health applications is not guaranteed.
The Worth of Ambulance Response Time in Emergency Medical Services
In EMS, timeliness of care is an imperative subject. The amount of time it takes during an emergency to initiate the appropriate level of care can have a significant effect on patient outcome. This time is known as response time. Response time is a common measurement in standardizing the efficiency and effectiveness of emergency medical services. Paramedic response time to the scene of a call for emergency medical assistance has become a benchmark measure of the quality of the service provided by emergency medical services [20]. In various parts of the world, ambulance services measure their performance using indicators such as response time, on-scene time, and clients' satisfaction [20]. Response time performance and the achievement of response time standards have been the single measure against which the quality of ambulance services has been judged. Response time performance has been used as an indicator of ambulance service quality for many years [21].
Ambulance response time includes the call processing time, team preparation time, and the time it takes to travel to the scene [22]. This can be summarized as the time taken from receiving an emergency call until the time of arrival at the emergency scene. It has also been discovered that response time significantly affects mortality, but not hospital utilization [23].
There are general response time standards in many jurisdictions around the world [21]. These response time standards vary from 7 to 11 minutes in urban areas and from 15 to 45 minutes in rural and remote areas [4]. To achieve these response time standards, it was concluded that the notification system with its communication components must be highly effective and efficient in terms of timeliness [6].
There are three different measures of the timeliness of ambulance response and these are the following: (i) The amount of time it takes to get to the scene of an emergency, also referred to as the time to scene.
(ii) The amount of time spent at the scene, also referred to as the time at scene.
(iii) The amount of time elapsed from when the ambulance leaves the scene to the time when the ambulance arrives at the hospital, also referred to as the time to hospital.
The scope of this research covers ways to reduce the amount of time it takes to get to the scene, also referred to as the time to scene. The time to scene can be reduced by gathering and storing address and location information more rapidly and precisely for real-time access. Allocating ambulance resources efficiently can also reduce the time to scene.
Some of the major organizational changes and developments made to EMS frameworks to improve response time performance are as follows [24]: (i) Changes to ambulance station designs, which include relocations, increases in numbers, and schemes to share premises with other first responders such as fire stations.
(ii) A range of first responder schemes with the emphasis on single-manned vehicles including motorcycles.
(iii) Developments to decrease total call time (receipt of call to vehicle clear for another call) including increased use of standby points and dynamic deployment, improved stocking procedures for vehicles which decrease the need for vehicles to return to stations, and better arrangements with hospitals to enable patients to be taken to the most appropriate hospital rather than the nearest.
(iv) Increased utilization of IT developments including automatic vehicle location, predictive analysis, and data transmission to vehicles (prealert systems).
(v) Improvements in demand management including demand analysis, predictive analysis, flexible crew rostering, and the use of appropriately trained staff as first responders.
All these developments come under new capital investment as well as reorganization of existing resources.
System Design
Experimental simulation using artificially created emergency scenarios, with mobile phones of different platforms including BlackBerry OS, Apple iOS, Android, Symbian, and Windows Mobile, was utilized for software and system validation during deployment. As depicted in Figure 2, an EMS-based ambulance service system was set up comprising an EMS point and one central coordinating center with computer-aided dispatching capabilities coded using Android's XML tool and JavaScript (AJAX). Evaluation of the system was carried out using the reliability, availability, and serviceability benchmark. Reliability here is the measure of the system's ability to function correctly, including avoiding data corruption. Availability measures how often the system is available for use, even when its other communication interfaces fail; for example, the server may keep running, and so remain available, even when the Internet connection fails. The key areas analyzed were SMS delivery time analysis and system geolocation and dispatch time analysis.
After setting up the notification system both at the client and at the server side as depicted in Figure 2, an SMS is sent from a location presumed to be an emergency scene. This SMS is delivered with all the needed parameters, such as latitude, longitude, cell ID, and timestamp, which allow mapping software on the central server, set up at another location, to determine the origin of the SMS with the aim of locating and dispatching the closest rescue team to the origin of the SMS. The operating mechanics of the system are outlined as follows: (i) The sender composes the SMS with the keyword "ACC." (ii) The proposed Java-based system collects the GSM location's parameters such as latitude, longitude, timestamp, and cell ID of the GSM location.
(iii) The GSM's location parameters are passed to an SMS gateway, which forwards the SMS to the application server.
(iv) The application server determines the origin/geolocation of the SMS and determines the rescue team closest to the origin.
(v) The server sends feedback and dispatches the rescue team.
The only visible drawback for the user identified after the system review is the error generated by typos: adding spaces within or after the keyword "ACC", or mistakenly adding a symbol, counts as extra characters and generates exception errors that have not been handled within the system. Figure 2 is the system architecture (pictorial illustration) of the proposed system showing the base station triangulation utilized by the mobile device sending the emergency notification, with X/Y coordinates transmitted simultaneously.
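As a concrete illustration of the server-side handling described above, the sketch below validates the "ACC" keyword tolerantly (trimming stray whitespace, which addresses the typo issue just noted) and unpacks the location parameters. It is a minimal sketch only: the semicolon-separated message layout and every class and field name are assumptions made for this example and are not taken from the deployed Java system.

// Illustrative sketch only: message layout, class, and field names are
// hypothetical and not taken from the deployed system described here.
public final class EmergencySmsParser {

    public static final String KEYWORD = "ACC";

    /** Simple holder for the parameters carried by the notification SMS. */
    public static final class EmergencySms {
        public final double latitude;
        public final double longitude;
        public final String cellId;
        public final long timestampMillis;

        public EmergencySms(double latitude, double longitude,
                            String cellId, long timestampMillis) {
            this.latitude = latitude;
            this.longitude = longitude;
            this.cellId = cellId;
            this.timestampMillis = timestampMillis;
        }
    }

    /**
     * Parses a message of the assumed form
     * "ACC;<latitude>;<longitude>;<cellId>;<timestampMillis>".
     * Returns null rather than throwing, so keyword typos or malformed
     * numeric fields do not crash the gateway-facing service.
     */
    public static EmergencySms parse(String rawMessage) {
        if (rawMessage == null) {
            return null;
        }
        String[] parts = rawMessage.trim().split(";");
        if (parts.length != 5 || !KEYWORD.equalsIgnoreCase(parts[0].trim())) {
            return null; // unknown keyword or wrong field count
        }
        try {
            double lat = Double.parseDouble(parts[1].trim());
            double lon = Double.parseDouble(parts[2].trim());
            String cellId = parts[3].trim();
            long ts = Long.parseLong(parts[4].trim());
            return new EmergencySms(lat, lon, cellId, ts);
        } catch (NumberFormatException badField) {
            return null; // malformed latitude, longitude, or timestamp
        }
    }
}

Wrapping the parse in this defensive style is one way the unhandled exception errors mentioned above could be absorbed without rejecting otherwise valid notifications.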
3.1. Server Hardware and Software Requirements. The following hardware requirements must be met on the server: (i) Core 2 duo computer system.
(ii) Minimum of 2.80 GHz processing capacity.
(iii) Minimum of 2 GB RAM system memory and a 32-bit instruction set.
(iv) 500 GB free hard disk space for various software installations.
(vii) Uninterrupted power supply system with inverters.
Likewise, the following software must run on the server before it can serve the clients' needed services: (i) Network-based operating system such as Windows 7, Linux, and Solaris.
(ii) Web/application server software such as Apache, Internet Information Services (IIS), WebLogic, WebSphere, and GlassFish 3.1.1.
(iii) Java virtual machine (JVM) interpreter to interpret the Java Servlet bytecodes to the web server software. Glassfish was utilized for this research work.
(iv) An SMS gateway client software which may be web based or desktop version.
It is also necessary that the geographical service area for the emergency geolocation notification system be defined and updated with images and street data using the Panoramio web service on Google Map, that is, the geographical area the system will cover.
Activity Diagram for the Proposed System. Activity diagrams graphically show the workflows of stepwise activities and actions, with support for choice, iteration, and concurrency. Figure 3 is an activity diagram showing the movement and flow of SMS activities. The SMS gateway receives the sent SMS and passes it to the server when it is available. Likewise, the mapping information determined at the server component is passed to the dispatch team, which in turn responds by locating the accident scene using the transmitted mapping information.
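The paper does not spell out how the application server picks the rescue team closest to the SMS origin; one straightforward possibility, sketched below purely for illustration, is a great-circle (haversine) comparison between the triangulated coordinates and a registered list of ambulance points. The class names and the in-memory list are assumptions for this example; the deployed system may instead rely on the Google Map services mentioned earlier.

import java.util.List;

// Illustrative sketch: a haversine-based nearest-point search is one possible
// dispatch step; it is not claimed to be the implemented mechanism.
public final class NearestTeamFinder {

    public static final class AmbulancePoint {
        public final String name;
        public final double latitude;
        public final double longitude;

        public AmbulancePoint(String name, double latitude, double longitude) {
            this.name = name;
            this.latitude = latitude;
            this.longitude = longitude;
        }
    }

    private static final double EARTH_RADIUS_KM = 6371.0;

    /** Great-circle distance in kilometres between two latitude/longitude pairs. */
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    /** Returns the ambulance point nearest to the SMS origin, or null if none are registered. */
    public static AmbulancePoint nearest(double originLat, double originLon,
                                         List<AmbulancePoint> points) {
        AmbulancePoint best = null;
        double bestKm = Double.MAX_VALUE;
        for (AmbulancePoint p : points) {
            double d = haversineKm(originLat, originLon, p.latitude, p.longitude);
            if (d < bestKm) {
                bestKm = d;
                best = p;
            }
        }
        return best;
    }
}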
System Evaluation, Interpretation, and Comparison
For the system being reported, thirteen thousand five hundred (13,500) SMSs with the keyword "ACC" were sent and used as test runs during system deployment under normal conditions and situations. This figure was chosen to accommodate the emergency-scale SMS traffic volume configured on the SMS gateway used within the system architecture. The SMSs were sent and received discretely and concurrently, at different times of the day over a space of 5 days. System evaluation was carried out by considering two key parameters: the SMS delivery time and the system geolocation and dispatch time, which are the elements that must improve if the amount of time it takes to get to the scene of an emergency, also referred to as the time to scene (as explained in Section 2), is to be reduced. This time to scene represents one of the three measures of the timeliness of ambulance response to the emergency scene discussed in Section 2.1. For each SMS sent, the following parameters were recorded and used for the performance evaluation of the system: (i) Time intervals between sending and receiving of SMS confirmation.
(ii) Time in seconds between sending the SMS and receipt of confirmation (x1).
(iii) Time in seconds taken by the system to determine the SMS origin/geolocation and to dispatch the closest ambulance to the scene (x2).
(iv) Total time in seconds from when the SMS was sent and received to when dispatching occurred (x1 + x2).
The mean (average) time in seconds it took the SMS to arrive at the control center was calculated as ∑fx1/∑f, with four seconds taken as the benchmark for the system efficiency, effectiveness, and RAS rating and calculation. This benchmark is used because a typical SMS delivery has been shown to take approximately four seconds in a GSM network [25]. Determining the originating SMS geolocation and dispatching is benchmarked at approximately 8 seconds.
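To make the ∑fx/∑f calculation and the exceedance count used later for MTBF concrete, the short sketch below processes a delivery-time frequency table; the bin values and frequencies are invented for the example and do not reproduce the measurements reported in Table 2.

// Illustrative only: the bins and frequencies below are invented for the
// example and are not the measurements in Table 2.
public final class DeliveryTimeStats {

    public static void main(String[] args) {
        double[] deliverySeconds = {3.0, 4.0, 5.0, 6.0}; // bin values x (seconds)
        long[] frequency         = {40, 70, 18, 7};      // frequencies f (SMS count per bin)
        double benchmarkSeconds  = 4.0;                  // GSM SMS delivery benchmark [25]
        double observationHours  = 5 * 24;               // 5-day test window

        long totalCount = 0;
        double weightedSum = 0.0;
        long exceedances = 0;
        for (int i = 0; i < deliverySeconds.length; i++) {
            totalCount += frequency[i];
            weightedSum += deliverySeconds[i] * frequency[i];
            if (deliverySeconds[i] > benchmarkSeconds) {
                exceedances += frequency[i];
            }
        }

        double meanSeconds = weightedSum / totalCount;     // sum(f*x) / sum(f)
        double mtbfHours = observationHours / exceedances; // MTBF as defined in Section 4.1

        System.out.printf("mean delivery time = %.2f s%n", meanSeconds);
        System.out.printf("deliveries over the benchmark = %d%n", exceedances);
        System.out.printf("MTBF = %.4f h%n", mtbfHours);
    }
}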
Evaluation of the system was carried out using the RAS benchmark metrics and model. These metrics can be applied to a range of software, including application programs. The Institute of Electrical and Electronics Engineers (IEEE) sponsors an organization devoted to reliability in engineering known as the IEEE Reliability Society (IEEE RS). A reliable system does not silently continue and deliver results that include uncorrected corrupted data; instead, it detects and, if possible, corrects the corruption. Reliability expresses the likelihood that a system component will succeed within its identified mission time with no failures [26].
Reliability here is the measure of the system's ability to function correctly while avoiding data corruption. It is often characterized in terms of mean time between failures (MTBF).
The higher the MTBF value is, the higher the reliability of the system. Reliability is quantified as MTBF for repairable systems and mean time to failure (MTTF) for nonrepairable systems. The term reliability refers to the ability of a computer-related hardware or software component to consistently perform according to its specifications. In theory, a reliable product is totally free of technical errors. In practice, vendors commonly express product reliability as a percentage. This shows the likelihood that a system component will succeed within its identified mission time with no failures [26].
Availability on the other hand is the ratio of time a system or component is functional to the total time it is required or expected to function. This can be expressed as a direct proportion (e.g., 9/10 or 0.9) or as a percentage (e.g., 90%). It can also be expressed in terms of average downtime per week, month, or year or as total downtime for a given week, month, or year (see Table 1).
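As a simple worked example of the availability-to-downtime conversion referenced here: a 99% ("two nines") availability implies the system is unavailable 1% of the time, i.e. 0.01 × 365 days ≈ 3.65 days of downtime per year, which is the downtime figure reported for this system in the evaluation below.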
Availability = MTBF / (MTBF + MTTR)     (2)
Serviceability (maintainability) is an expression of the ease with which a component, device, or system can be maintained and repaired. Some systems have the ability to correct problems automatically before serious trouble occurs. Mean time to repair (MTTR) is a basic measure of the maintainability of repairable items; it represents the average time required to repair a failed component or device [25]. Table 2 is a frequency table showing the delivery time analysis of the 13,500 SMSs sent during the 5-day implementation period.
SMS Delivery Time Evaluation. SMS delivery takes approximately 4 seconds in a GSM network [25].
(i) The overall mean time for SMS delivery in seconds computed from Table 2 is approximately 4.01 seconds. This mean value further validates that the SMS service utilized within the application meets the established standard in the literature, exceeding it by only 0.01 seconds.
(ii) MTBF is computed by dividing the total time period over which the system was set up by the total number of times the delivery exceeded 4 seconds. The system here was set up for a period of 5 days (120 hours). From the literature, this amounts to a reliability of 69.9% ≈ 70.0% for the system, i.e. the probability that this particular component/module will survive to its calculated MTBF is 70.0%.
(v) Availability = MTBF/(MTBF + MTTR), where MTBF = 0.1430274136 hours and MTTR = 0.0019444444 hours (7 seconds, the total mean time for system reload after fault detection). This translates to a 99% (two nines) availability and, deducing from Table 1, this system component has a downtime of 3.65 days/year. This means that the system will be unavailable for a total of at most 3.65 days over an operational year (365 days); this downtime comprises the sum of scheduled and unscheduled downtime for the system. Also, the probability that this particular component/module will survive to its calculated MTBF is 70.0%. This is a little short of the system's required reliability performance of 100%, but it is acceptable since we are dealing with a repairable system comprised of other repairable and nonrepairable components [27]. To improve this aggregate reliability value, the reliability of the various components will need to be improved individually, especially the least reliable and weak ones in the system, because the least reliable or weakest component has the biggest effect on system reliability.
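For concreteness, substituting the reported values into equation (2) gives Availability = 0.1430274136/(0.1430274136 + 0.0019444444) ≈ 0.9866, which rounds to the 99% (two nines) level and is consistent with the 3.65 days/year downtime figure above.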
System Geolocation and Dispatch Time Evaluation.
Here, 8 seconds is taken as the benchmark for geolocation and dispatch time for the system. Table 3 is a frequency table showing the dispatch time analysis of 13,500 simulations.
(i) The overall mean time for geolocation determination and dispatch in seconds from Table 3 is ∑fx2/∑f = 108,069/13,500 = 8.01 seconds (7). This mean value further validates that the geolocation and dispatch components utilized within the application meet the set standard, exceeding it by only 0.01 seconds.
(ii) MTBF is computed by dividing the total time period over which the system was set up by the total number of times the dispatch exceeded 8 seconds. The system here was set up for a period of 5 days, which amounts to 120 hours × 1 (the single component being tested) = 120 hours (8). The total frequency for geolocation and dispatch times above 8 seconds from Table 3 is 291 + 173 + 171 + 118 = 753. This tells us that the probability that this particular component/module will survive to its calculated MTBF is only 62.7%.
(v) Availability = MTBF/(MTBF + MTTR) (10). This translates to a 99% (two nines) availability and, deducing from Table 1, this system component has a downtime of 3.65 days/year. This means that the system will be unavailable for a total of at most 3.65 days over an operational year (365 days); this downtime comprises the sum of scheduled and unscheduled downtime for the system. The calculated reliability, which indicates the probability of survival of this particular component/module, is 62.7%. As discussed in Section 4.1, this low value is due to the fact that we are dealing with a repairable system comprised of other repairable and nonrepairable components [27].
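As a worked check on these figures, MTBF = 120/753 ≈ 0.1594 hours; assuming, purely for illustration, the same MTTR of 7 seconds (0.0019444444 hours) as for the SMS component, Availability ≈ 0.1594/(0.1594 + 0.0019) ≈ 0.988, which again rounds to the 99% (two nines) level.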
Comparative Analysis.
To check and ascertain that the designed system provides better emergency notification functionalities, a comparison with other similar existing systems was carried out as stated in Section 2. Our proposed system is strictly for users to notify ambulance points of emergencies with the sent SMS transmitting the geolocation of the sender automatically and invisibly. This makes it suitable and easy to use when making silent calls especially during risky emergencies as earlier mentioned.
Case Selection.
A selected number of systems that address emergency notification with different functionalities were reviewed in the literature survey. Our comparison is limited to three in particular, MEHM-DESIGN [17], the emergency SMS [14], and POLINT-112-SMS [15], since only these were found to be similar in system design and setup. We are not including the others, for example E-911, which is a GPS-based system. The emergency SMS system design [14] requires the use of a GPS system to get locations, and this comes with all the drawbacks of the GPS system. Our designed system utilizes Google Map application programming interfaces (APIs) to translate base-station-triangulated coordinates, from which the SMS originates, into a map. The advantage of this technology is that it is not blocked by obstructions; it can be used even in tunnels where GSM reception is present.
The POLINT-112-SMS [15] does not clearly address emergency situations in its use. It is essentially designed to support information management and to assist a human in decision making in emergency situations. During emergencies, frightened users will find it difficult to compose a long SMS that reports the crisis at hand. Our proposed system addresses this limitation.
In the SMS module of the MEHM-DESIGN [17], the nearest healthcare center is located based on inputs provided by the user through an SMS request during an emergency. This module is required to communicate with the Google geocoding service and a local hospital database to gather the nearest healthcare center's name, address, and contact information, retrieve them, and send them by SMS to the requester or user. In risky emergencies, this is inadequate. Our designed system does not require the user to try to know the nearest healthcare center; instead, the nearest ambulance point locates the user initiating the SMS using maps interpreted by the system from the user's cell triangulation position.
Table 4: A comparison of the proposed SMS notification application with others (functionalities compared across Emergency SMS [14], POLINT-112-SMS [15], MEHM-DESIGN [17], and the proposed application; full table not reproduced).
Data Collection and Analysis
Procedure. Table 4 presents a comparison between our designed system and the existing systems. It shows the benefits of our designed system according to a number of criteria selected from the common features and functionalities required of an emergency notification system aimed at reducing ambulance response time. These are: making silent calls during risky emergencies, automatic transmission of the geolocation of an emergency scene by SMS to the nearest ambulance point, ease of use and time-saving functionality during use, whether GPS technology is required, use of an SMS gateway to improve SMS reliability and delivery time, being strictly designed for reporting emergencies in real time, and swift emergency geolocation transmission due to the automatic geolocation transmission functionality.
4.3.3. Result. Our designed system was implemented, evaluated, and interpreted as reported in Section 4. It meets the SMS delivery time of approximately 4 seconds in a GSM network [25], with an SMS mean delivery time of 4.01 seconds as reported in Section 4.1 and a mean geolocation transmission and dispatch time of 8.01 seconds as reported in Section 4.2. The comparative analysis in Table 4 further shows that our proposed system is flexible, requires less time to use, provides reliable mapping data transmission, and performs invisible, background geolocation transmission with automatic system notification/response from the closest ambulance point. The three reviewed systems were found to be capable of supporting information management between the patient and the doctor. Unlike the proposed system, they do not transmit geolocation but only support the sending of a location described by patients reporting emergencies; this involves composing location information, an emergency scene crisis report, or patient biodata in text form and sending it by SMS. The proposed system only transmits geolocations from the point of initiation to ambulance points; it does not need to send any message back to the point of initiation but transmits the received geolocations to the nearest ambulance point.
The advantages of the proposed application that differentiates it from others are summarized as follows: (i) The proposed application is highly effective when silence is nonnegotiable and voice call becomes dangerous when making an emergency call, especially during robbery, kidnapping, or gunshot attacks (silent calls).
(ii) The proposed application is convenient for speech-impaired or hearing-impaired persons to report emergencies.
(iii) The proposed application is also convenient in reporting emergency situations when the caller is not familiar with the neighborhood of the emergency scene since it transmits geolocations automatically by SMS.
(iv) It can be used in reporting emergencies in real time.
(v) The proposed application can be used on all SMS capable mobile devices; it will still function even if the person to save the day does not have a smart phone or a mobile device with LBS capabilities.
4.4. Pros, Cons, and Limitations. The advantage of the proposed system is that it uses GSM base station triangulation to deliver location coordinates, so it can be used even in tunnels where GSM reception is present. The system has also been found useful in getting a location from persons experiencing panic or fear, for example when the person making the help call is a close relative of a dying patient. It is also useful when silence is nonnegotiable and a voice call becomes dangerous, especially during robbery, kidnapping, or gunshot attacks. In addition, it is recommended for speech-impaired or hearing-impaired persons in reporting emergencies. A disadvantage of this system is its inability to interpret and deliver geolocation parameters when the server is down; this means that power supply to the server must be maintained at all times, which makes the system somewhat expensive to maintain in this part of the world.
The RAS (reliability, availability, and serviceability) metrics were chosen for the system evaluation and analysis because of additional nonfunctional requirements, since the system requires minimal human intervention. This is recognized as a limitation of this work, since other software performance evaluation approaches exist; these other approaches will be employed in the future to further improve this work. As a further limitation, this work did not cover a cost-benefit analysis for the system or other factors such as the business and security aspects. Elements such as sensitivity, benefits, cost, specificity, and user satisfaction were also not considered, as it was observed that they had inconsequential effects on the performance of the proposed system.
Conclusion and Future Works
We have proposed an SMS-based mobile application suitable for emergency situations. After testing and evaluating its performance, the designed system was found to be efficient and effective: its reliability stood between 62.7% and 70.0%, while its availability stood at 99% with a downtime of 3.65 days/year. This shows that the system is effective in use in this geographic region and performs better than the existing systems.
For future works, near field technology and the GPS technology can be combined with the GSM localization to improve transmissions and communication of geolocations with no geographical restrictions. Also, this application can be adapted for use in fire service departments for fire emergencies and for accident notification on highways. We also aim to evaluate our system against some selected evaluation approaches, one of which is the prediction-enabled component technology (PECT). The RAS metrics used in this paper provide a single reliability value for the entire system component, but PECT will be engaged to provide reliability information about each role that various system components are intended to support. PECT is used to predict the reliability of a component assembly from the reliabilities of the individual components. This can then lead to computing the effective reliability during usage of the component. The main intent of PECT is to provide useful information about the reliability of various components of a system for the prediction of assembly reliability.
Conflicts of Interest
The authors declare that they have no competing interests. | 8,464.4 | 2017-08-30T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Three-dimensional matrix fiber alignment modulates cell migration and MT1-MMP utility by spatially and temporally directing protrusions
Multiple attributes of the three-dimensional (3D) extracellular matrix (ECM) have been independently implicated as regulators of cell motility, including pore size, crosslink density, structural organization, and stiffness. However, these parameters cannot be independently varied within a complex 3D ECM protein network. We present an integrated, quantitative study of these parameters across a broad range of complex matrix configurations using self-assembling 3D collagen and show how each parameter relates to the others and to cell motility. Increasing collagen density resulted in a decrease and then an increase in both pore size and fiber alignment, which both correlated significantly with cell motility but not bulk matrix stiffness within the range tested. However, using the crosslinking enzyme Transglutaminase II to alter microstructure independently of density revealed that motility is most significantly predicted by fiber alignment. Cellular protrusion rate, protrusion orientation, speed of migration, and invasion distance showed coupled biphasic responses to increasing collagen density not predicted by 2D models or by stiffness, but instead by fiber alignment. The requirement of matrix metalloproteinase (MMP) activity was also observed to depend on microstructure, and a threshold of MMP utility was identified. Our results suggest that fiber topography guides protrusions and thereby MMP activity and motility.
independently as driving factors of cancer cell motility. Most studies have focused on only one or two matrix parameters, despite the fact that a change to any one parameter almost always affects another, or they have used non-native polymers or digested ECM proteins that do not crosslink and form microstructures that are physiologically relevant 17 . An integrated understanding of how density (which is the most commonly used descriptor), ligand presentation, crosslinking, and microstructural organization are related to each other and to cell behavior is still lacking in the context of the native acid extracted collagen-based 3D ECM now used by many researchers.
Here we take an integrative approach to characterizing and understanding these convolved features by embracing the complex combinations of matrix parameters that arise naturally in 3D self-assembling collagen I networks. By creating collagen gels of increasing density over a six-fold range, we generated multiple complex matrix features. Embedded cells were assessed for their motility behavior (cell speed, invasion distance, and protrusion dynamics), while the matrix itself was characterized for its physical features (stiffness, density, pore size, and alignment of fibers). Additional enzymatic crosslinking then achieved changes in matrix parameters independently of density changes. Cross correlations among these measurements allowed us to uncover a distinct relationship between fiber alignment and cell motility independent of pore size and bulk matrix stiffness. Central to our approach is the fact that in 3D collagen, cancer cells move into the 3D matrix, rarely retracing the void tracks they leave behind, and so are always interacting with consistent microstructural properties 1,18 (see also Results section).
3D cell motility is biphasic with increasing collagen density.
We first asked what differences in cell motility were characteristic of increases in ligand density in 3D collagen. Cell motility parameters, including speed, invasion distance, and number and orientation of protrusions of embedded HT-1080 human fibrosarcoma cells, were systematically assessed as collagen I density was increased from 1-6 mg/ml. Interestingly, a biphasic dependence of multiple motility parameters on collagen I density was observed, which is opposite to what occurs for 2D cell motility with increasing ligand density 9,19 and to what has been predicted for 3D matrices 20 . At low collagen I concentration in 3D (1 mg/ml), cells moved rapidly and persistently with a sustained high rate of protrusion formation, and invaded to distances far from their point of origin (Fig. 1A,F-H). Cells also maintained the orientation of their protrusions over 12 h (Fig. 1I,J), i.e. the large majority of cell protrusions remained polarized along the original axis of elongation of the cell. At intermediate collagen concentrations (2 and 2.5 mg/ml), in contrast to what would have been predicted from cells moving on a 2D substrate, cells migrated more slowly and invaded smaller distances from their point of origin than cells in 1 mg/ml matrices (Fig. 1B,C,F,G). Cells also generated fewer protrusions (Fig. 1H) and the directionality of their protrusions was significantly more isotropic than that of cells in 1 mg/ml matrices (Fig. 1I,K,L). Finally, when cells were embedded in high-density collagen matrices (4 and 6 mg/ml), cell speed increased, but did not reach the speeds observed in 1 mg/ml matrices (Fig. 1D-G). Cells also increased their rate of protrusion formation (Fig. 1H) and became highly polarized once again (Fig. 1I,M), similarly to cells in 1 mg/ml matrices. MDA-MB-231 human breast cancer cells were used to test whether this motility response to increasing 3D collagen density was cell-type specific. Highly similar motility trends were observed for MDA-MB-231 cells (Supplementary Fig. 1A).
Cell migration speed, invasion distance, and the number of cellular protrusions were highly correlated with each other and with collagen density from 1-2.5 mg/ml ( Fig. 1N-P). However, data for the higher concentrations of 4 and 6 mg/ml shifted away from this trend and were considered outliers (Supplementary Fig. 1D-F). To determine if properties of the matrix other than collagen density were involved in modulating cell motility and to understand the switch in behavior that occurred for 4-6 mg/ ml densities, we next characterized the physical and architectural properties of each matrix.
Fiber alignment and pore size, but not matrix stiffness, correlate with cell motility. Since several physical properties of the matrix (pore size, stiffness, fibrilar structure, etc.) can change concurrently with changes in collagen density, we aimed to characterize these finer physical details and ask how these features varied with each other and with observed motility responses. Analysis of reflection confocal images of 1-6 mg/ml matrices ( Fig. 2A) showed that fiber alignment (measured at length scales of the cell) varied biphasically with collagen concentration (Fig. 2B). The average pore size varied somewhat irregularly, but overall was reduced with increasing collagen concentration (Fig. 2C). These two matrix parameters correlated significantly with one another across all collagen densities (Fig. 2D). Quantitative shear rheometry was used to measure the bulk matrix elasticity, which varied somewhat irregularly with increasing collagen concentration (Fig. 2E). Although previous studies suggest that the elastic modulus should scale positively with increasing collagen concentration 21,22 , it is critical to note that the process by which collagen is extracted 23 , the gelation conditions 22,24 , and even the thickness of the matrix 22 can all significantly alter its mechanical properties and related microstructure 25 . Collagen density alone, which is often the only information provided in studies, is not sufficiently descriptive to enable direct comparisons. In our collagen matrices, the alignment of fibers decreased significantly when collagen density was increased from 1 mg/ml to 1.5 mg/ml, but the average pore size did not change. That the mesh is not tighter, but is significantly less aligned likely explains the decrease in the elastic modulus from 1-1.5 mg/ ml. Then from 1.5 to 2 mg/ml pore size decreases dramatically while fiber alignment drops only slightly. This means that 2 mg/ml represents a tighter mesh, which would be expected to increase the elastic modulus compared to 1.5 mg/ml. Comparing the measured changes in matrix microstructure and mechanics to changes in cell motility revealed a significant correlation of cell speed and invasion distance with both fiber alignment and pore size (Fig. 2F,G) across the varying densities of collagen. Cell speed and invasiveness did not correlate with matrix stiffness, at least within the range of collagen concentrations tested (Fig. 2H).
Additional crosslinking reinforces fiber alignment as a predictive parameter. Since local fiber alignment and pore size correlated with each other across a wide range of collagen densities, we developed a method to alter microstructure independently of density to try to decouple these features. This was accomplished by the addition of the crosslinking enzyme Transglutaminase II (TGII) to the matrices. Crosslinking of 1 mg/ml and 2 mg/ml collagen matrices by the addition of TGII allowed alteration of the microstructural properties of the matrices, while preserving global collagen concentration (Fig. 3A,B). Cells embedded in 1 mg/ml matrices further crosslinked with TGII migrated significantly more slowly than cells in non-TGII treated 1 mg/ml matrices (Fig. 3C). However, the migration of cells embedded in 2 mg/ml matrices further crosslinked with TGII was unchanged compared to non-TGII treated 2 mg/ml matrices (Fig. 3C). Addition of a ten times higher concentration of TGII did not further alter the ECM microstructure or cell speed at either concentration of collagen (Fig. 3C).
As a control to ensure that TGII did not affect cell physiology directly, we applied TGII to cells migrating on conventional 2D substrates at the same concentration as used to crosslink 3D matrices and monitored possible changes in cell behavior. Cell speed on conventional 2D substrates was not altered by the addition of TGII ( Supplementary Fig. 1C), suggesting that our observed changes in 3D cell motility were specifically related to crosslinking of the collagen matrix and the ensuing alterations to its physical properties.
Our microstructural analysis of the matrices showed that collagen crosslinking by TGII reduced the alignment of the fibers in 1 mg/ml matrices ( Supplementary Fig. 1B), but did not significantly alter the alignment of 2 mg/ml matrices (Fig. 3A,B(top inset), D). These results fit the previously established correlation between cell speed, invasion distance, and fiber alignment (Fig. 3F) and further increased the significance of the correlation. However, average pore size was unchanged following the addition of crosslinking TGII for both 1 mg/ml and 2 mg/ml matrices (Fig. 3E,G), indicating that fiber alignment can regulate cell motility independently of pore size.
Requirement for MMPs critically depends on matrix microstructure. MMPs are thought to be critical for cancer cell migration and invasion in crosslinked matrices where the average pore size is significantly smaller than the cell body. Here cell size is an order of magnitude larger than the average pore size of the matrix, ~0.6-2.3 μm in diameter (Fig. 2C), so we wondered how the requirement for MMPs might be affected. MMP-1, -2, -7, -9, and the membrane-tethered MT1-MMP, which are considered critical for cell migration and invasion 26 , were inhibited for cells embedded in either 1 mg/ml or 2.5 mg/ml collagen matrices using a range of Marimastat concentrations. The concentrations of 1 mg/ml and 2.5 mg/ml were selected for comparison because they represent the extremes of both cell motility and fiber alignment observed in this study (Figs 1 and 2). For 1 mg/ml, collagen I formed structures that were the most aligned and for which cells were the most invasive, protrusive, and uni-axially oriented (Fig. 1G,I,J and Fig. 2B). For 2.5 mg/ml, collagen I formed structures that were the least aligned, and for which cells were the least invasive and protrusive, and protrusions were more isotropically oriented (Fig. 1C,G-I,L and Fig. 2B).
Maximal inhibition with Marimastat at a concentration of 30 μ M (higher concentrations resulted in cell death) did not abolish motility for cells in highly aligned 1-mg/ml matrices. Instead, they maintained ~50% of their invasive ability, on average 75μ m over 16.5 h (Fig. 4A). Marimastat-treated cells in 2.5-mg/ml matrices (Fig. 4A, right inset) were essentially confined to the void in which they had been initially embedded. When MT1-MMP was overexpressed in cells embedded in 1-mg/ml and 2.5-mg/ml matrices, cell motility increased by an insignificant amount and 67%, respectively (Fig. 4B). Taken together, these results suggest a threshold for MMP utility within a 1-mg/ml matrix for which collagen fibers form highly aligned microstructures. MT1-MMP overexpression did not alter the orientation of cellular protrusions in these matrix conditions (Fig. 4C, D). This suggests that regardless of whether more MT1-MMP mediated collagen degradation occurs, the directionality of motility remains unaltered and is driven and oriented by the fibrilar topography of the ECM.
Discussion
We have conducted a multi-parametric quantitative analysis of cell motility in 3D collagen matrices of controlled density to understand how fibrilar matrix properties modulate motility. This study was designed to span a broad range and complexity of physical features of the matrix, including pore size, fiber alignment, stiffness, and extent of crosslinking, such that any key feature would be tested in combination with several features to avoid bias. After a comprehensive quantification of each parameter, we relied on cross-comparison of matrix parameters and motility measurements to identify the features of the matrix influencing cell motility. We find that the local alignment of collagen fibers is highly predictive of cell motility across the entire range and combinations of matrix conditions tested, while collagen density is highly predictive only at low concentrations. Pore size correlates with cell motility parameters, corroborating recently published findings 13 , but less significantly than fiber alignment. By incorporating a secondary method of crosslinking that modulated matrix architecture independently of density, the relationship between fiber alignment and motility was further reinforced while that of pore size, density, and motility was not.
Our collagen image analysis method provides a score for the pore size between fibers and the fiber alignment for fiber intensity signals that are greater than imaging background noise, which is removed during reflection confocal image processing (see Methods). Moreover, imaging resolution must also be able to resolve fiber locations spatially. In general, reflection imaging is limited in its ability to detect fibers that are oriented at an angle greater than ~50 degrees to the 2D imaging plane 27 and by the spatial resolution of the imaging system (~0.20 μ m/px in our case). If the average space between fibers is less than the spatial resolution of the imaging system, the analysis results can be unreliable. In Fig. 2A, fibers are distinguishable in all conditions, but the 6 mg/ml case shows significantly fewer distinguishable fibers. Additionally, the average pore size in 6 mg/ml approaches the imaging resolution limit (Fig. 2C), indicating a rationale for the larger variability in alignment measurement for the 6 mg/ml case (Fig. 2B) compared to all other conditions. Nonetheless, our method is able to make highly reproducible, quantitative comparisons among the other multiple collagen concentration and crosslinking conditions and to predict the effect of ECM architectural changes on cell motility with high statistical significance (Fig. 3F).
Our alignment analysis constitutes a bulk measurement for all the fibers in the imaging area (analyzed images of collagen cover 61 μ m × 61 μ m) and reports the degree of anisotropy of the image as a whole. Simulations show that our method for quantifying alignment is not sensitive to changes in the length of fibers, but is sensitive to significant changes in width (data not shown). However, the width of the fibers did not change significantly across the different concentrations of collagen.
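The paper's alignment metric is described only at this summary level, so, as one common way to obtain a bulk anisotropy score of the kind discussed here, the sketch below accumulates an image structure tensor from intensity gradients and reports its coherency, which is near 0 for an isotropic fiber field and approaches 1 for strongly aligned fibers. This is an illustrative stand-in written in Java, not the authors' implementation.

// Illustrative stand-in for a bulk fiber-alignment score: structure-tensor
// coherency over a 2D grayscale image stored as double[][] (row, column).
public final class FiberAlignment {

    /**
     * Returns a coherency score in [0, 1]: ~0 for isotropic fiber orientations,
     * approaching 1 when fibers share a dominant direction.
     */
    public static double coherency(double[][] image) {
        int rows = image.length;
        int cols = image[0].length;
        double jxx = 0.0, jyy = 0.0, jxy = 0.0;

        // Central-difference gradients accumulated into the structure tensor.
        for (int y = 1; y < rows - 1; y++) {
            for (int x = 1; x < cols - 1; x++) {
                double ix = (image[y][x + 1] - image[y][x - 1]) / 2.0;
                double iy = (image[y + 1][x] - image[y - 1][x]) / 2.0;
                jxx += ix * ix;
                jyy += iy * iy;
                jxy += ix * iy;
            }
        }

        double trace = jxx + jyy;
        if (trace == 0.0) {
            return 0.0; // featureless image: no gradients, no measurable alignment
        }
        // Coherency = (lambda1 - lambda2) / (lambda1 + lambda2) of the 2x2 tensor.
        return Math.sqrt((jxx - jyy) * (jxx - jyy) + 4.0 * jxy * jxy) / trace;
    }
}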
The significant influence of fiber alignment, measured at a length scale that is of the same order of magnitude as cell invasion distance in our study, likely arises because of topographic guidance as the cell protrudes and attaches to the ECM. Indeed, we find that the orientation of cellular protrusions follows a similar trend to that of fiber alignment. During review of this manuscript, another paper appeared showing that aligned 3D collagen matrices enhance migrational persistence and orient cellular protrusions, corroborating our findings 28 . Other studies using synthetic nanofiber scaffolds have demonstrated that fiber alignment promotes cell elongation, restricts sites of cell attachment, aligns adhesions, and promotes faster migration speeds 29 . This imposition on cell shape has been shown to play an important role in traction generation on 2D substrates 30 , resisting repolarization on 1D substrates 31 , and in generating polarized traction forces in 3D, which are thought to facilitate invasion 32 . Using a native 3D collagen matrix, our studies identify fiber alignment as a key feature even in light of the introduction of many additional complex matrix changes.
In our study, matrix architecture is assessed without cells present in the matrix. Since cells continuously migrate into the matrix and do not typically turn to retrace the void space they leave behind (over our observation time of 16 h), and because we seed cells at a very low density, we anticipated that cells would continuously encounter the same conditions as dictated by the initial matrix architecture. That these initial matrix conditions predict cell motility indicates that this assumption is justified. The ability of cells to pull on and align the matrix after their initial adhesion to the ECM may also play a role, possibly feeding back to positively reinforce cell polarization. However, others have shown that cell invasion in 3D matrices correlates more strongly with cell shape elongation than with contractility 32 .
It is well established that increasing ligand density on planar 2D substrates induces a biphasic increase and then decrease in cell speed 9 . In 3D matrices, we observe the opposite: a decrease then an increase in cell speed. The 2D phenomenon is thought to reflect changes in adhesion strength mediated by the aggregation of proteins at focal-adhesion sites 33 . At high planar ligand density, strong adhesion inhibits cell movement, while at low planar ligand density, weak adhesion strength causes less efficient motility. Here, the fibrilar topology of the 3D matrices presents a distinct network of nanoscale, linearly organized ligands to which the cell adheres. A recent modeling study addressed this point and predicted differences in cell speed and invasion distance based solely on fibrilar matrix organization, finding a significant role for alignment 34 . Our study shows experimentally that the alignment of discrete ligand-containing fibers directs cell protrusions and motility distinctly in 3D [35][36][37] .
By treating cells in 3D matrices with Marimastat, a wide-spectrum MMP inhibitor, our studies show a near complete loss of motility for cells in matrices of intermediate density (2.5 mg/ml), where pore size averages 0.9 μ m in diameter and where fibers and cell protrusions are isotropic. However, treating cells in low-density matrices (1 mg/ml), where average pore size is still smaller than the cell (2.3 μ m) but fibers and protrusions are highly aligned, results in only a 50% reduction in cell invasion distance, indicating MMP independent motility and significant cell deformation occur. Yet, cells maintain a higher level of motility than expected based on previously reported steric limitations of pore size 13 . This could be due to differences in pore size measurement techniques or may indicate that fiber alignment influences cell deformability.
Overexpression of MT1-MMP in our study suggests that a threshold may exist for MMP utility, even in matrices of relatively small pore size. For cells embedded in low-density 1 mg/ml matrices with pores averaging 2.3 μ m in diameter, cells move rapidly and invade further than in any other condition. In this case, overexpression of MT1-MMP did not result in any significant increase of invasive ability. Low ECM densities are known to reduce baseline MMP production while high ECM densities feedback to increase baseline MMP production [38][39][40][41][42] . This suggests that cells in low-density 1 mg/ml matrices could benefit from overexpression of MT1-MMP, but in fact do not. It is possible that overexpressed MT1-MMP in cells in 1 mg/ml remains inactive. Alternatively, because MMPs are most active in protrusions 43,44 , and protrusion number and spatial orientation about the cell body were modulated by matrix architecture, it is possible that aligned fibers optimize the efficient use of MMPs by spatially and temporally focusing the activity over longer time scales. Or aligned fibers may present cleavage sites in a distinct and more susceptible way. Future work will aim to further delineate these mechanisms. It is interesting to note that in our previous work, we discovered that matrix-embedded cells lacking certain focal adhesion proteins display variations in cell protrusion rate, orientation, cell speed, and invasion, which resemble some of the same motility outcomes that were achieved here by simple variations in 3D ECM microstructure. This highlights the fact that physical changes in the ECM can affect the outcomes of cell motility as drastically as genetic changes in cells, and demonstrates the importance of characterizing and further understanding the role of the ECM in modulating cell motility.
Conclusions
This study shows that the ECM microstructural parameter of fiber alignment reliably predicts multiple features of cancer cell motility in 3D matrices. Fiber alignment modulates cellular protrusion rate and orientation, which regulate cell motility. Fiber alignment varies significantly with collagen density and the extent of matrix crosslinking, highlighting the need for standardizing ECM microstructure characterization, which may help unify outcomes across 3D motility studies. The techniques and results presented here lend further insight into the role of the physical ECM and crosslinking enzymes in cancer cell motility and metastasis. 45
Materials and Methods
Cell Culture. HT 3D collagen I gelation, crosslinking, and drug inhibitors. Cell-impregnated 3D collagen matrices were prepared similarly to those described previously 1,4,47 . Briefly, 20,000 cells suspended in a 1:1 (v/v) solution of culture medium and 10x reconstitution buffer were gently mixed by pipetting with the appropriate amount of low-concentration rat tail type I collagen in acetic acid (BD Biosciences) on ice to achieve a 500 μl solution with a final concentration of 1.0, 1.5, or 2.0 mg ml−1 collagen. High-concentration rat-tail type I collagen in acetic acid (BD Biosciences) was used for the 2.5, 4.0, and 6.0 mg ml−1 conditions. Then, NaOH (1 N) was added in the amount of 3.6% (v/v) of the volume of collagen to initiate polymerization. The mixture was again gently but thoroughly pipetted on ice. At this point, 0.1 μg (1X, 1 μl) or 1 μg (10X, 10 μl) of purified recombinant human Transglutaminase II (TGII) in solution (R&D Systems, Minneapolis, MN) was mixed into the solution for those matrices that were to be additionally crosslinked. Initial medium and reconstitution buffer volumes were adjusted equally to account for the additional volume of TGII. The solutions were mixed again thoroughly on ice, with care taken to avoid introduction of bubbles, and subsequently placed in a standard glass-bottom 24-well plate. Immediately, the well plate was placed in a 37 °C incubator for at least 30 min. Additional warm cell culture medium, 500 μl, was then added on top of the matrices and the plate put back into a CO 2 incubator for full pH buffering. Reproducibility of gel structures relied on consistently using the same dish size (24-well glass-bottom plates) plus accurate volumes (good pipetting technique) to ensure reproducible gel thickness and pH, and consistent timing (mixing all ingredients together on ice in less than 30 sec) plus temperature control (keeping all ingredients on ice throughout the process, mixing everything on ice, and then immediately transferring to a temperature- and CO 2 -controlled incubator for at least 1 hr gelation time). Since each of the above-mentioned variables can influence gel structure and because time is limited during the gelation procedure, quantification of matrix structure after the full process, using the image analysis methodology presented herein, served as our quality control checkpoint. Statistical analysis of data produced from the image analysis of matrix structure and from cell motility analysis demonstrates the reproducibility of our procedure and the utility of our image analysis method.
For MMP inhibition experiments, Marimastat (AnaSpec Inc., Fremont, CA) or vehicle control DMSO (Santa Cruz Biotechnology Inc., Santa Cruz, CA) was mixed into the medium before it was added on top of the cell-impregnated matrix to achieve a final concentration of 10, 20, or 30 μM in the well. Since DMSO itself was found to dramatically affect cell motility at low concentrations (data not shown), minimizing the final amount of DMSO in our experiments required the purchase of dry Marimastat to make high-concentration stock solutions. The final concentration of DMSO that did not impair motility, as demonstrated by the vehicle control data, was 0.1% (v/v) DMSO in culture medium. This value was used to back-calculate the appropriate mixture of Marimastat and DMSO to achieve the desired stock concentration of inhibitor. Low cell density in the matrices helped to ensure that cells moved as singlets and that motility measurements were accurate.
Immunofluorescence microscopy and reflection confocal microscopy. To visualize collagen fibers within a 3D collagen gel, both immunofluorescence and reflection confocal images were collected using a Nikon A1 confocal microscope equipped with a 60x water-immersion objective (Nikon) and controlled by Nikon Elements imaging software (NIS-3.1). Collagen matrices imaged with immunofluorescence were first gently detached from the well plate walls, then incubated with primary and secondary antibodies in PBS with 1% BSA for 24 h (Abcam ab24133 anti-collagen I and AlexaFluor secondary). For reflection imaging, the microscope was configured to capture 488 nm light reflected during illumination with the 488 nm laser. Reflection and immunofluorescence images of detached gels were used to validate the fiber analysis algorithm, but only reflection images of attached gels were used for correlation plots.
Cell tracking and motility. Cells embedded in 3D collagen matrices, far from the walls and bottom of the 24-well plate, were imaged at 10x magnification every 2 min for 16.5 h similar to that which was previously described 1,4 . Briefly, cells which were actively moving as singlets over the course of the timelapse were tracked using Metamorph software with the Metavue Track Objects Application (Molecular Devices, Sunnyvale, CA) with x and y position coordinates measured for each 2 min time point, the collection of points comprising the cell's trajectory. Cell speed was calculated by averaging the distance per 2 min time interval that each cell traveled. Net invasion distance is a measure of the maximum displacement of the cell from its point of origin. This was found by calculating the displacement of each x, y tracking point over 16.5 h from the cell's original x, y location at 0 h, then finding the maximum of these displacements for each cell. For correlation plots, min-max normalization was used to re-scale the average of each data set between 0 and 1, avoiding units bias in cross-comparisons.
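The speed, net invasion distance, and min-max normalization described above amount to simple operations on the tracked (x, y) coordinates. The following Python sketch is an illustrative re-implementation (the actual analysis used Metamorph); the array layout and function names are assumptions, while the 2 min frame interval follows the text.

```python
import numpy as np

def cell_speed(track):
    """Mean distance travelled per 2 min frame; track is an N x 2 array of (x, y) positions in um."""
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    return steps.mean()

def net_invasion_distance(track):
    """Maximum displacement of any time point from the position at t = 0 h."""
    return np.linalg.norm(track - track[0], axis=1).max()

def min_max_normalize(condition_means):
    """Re-scale a set of condition averages to [0, 1] to avoid unit bias in correlation plots."""
    v = np.asarray(condition_means, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# Example with a short synthetic trajectory (um, frames 2 min apart)
track = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 2.0], [1.2, 3.1]])
print(cell_speed(track), net_invasion_distance(track))
```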
Cell protrusion analysis. Orientation of cellular protrusions and number of cellular protrusions
were calculated as described in detail previously 1,4 . Briefly, the positions of protrusions at least 5 μm in length from the cell periphery were tracked and measured with Metamorph software (Molecular Devices, Sunnyvale, CA) and tallied by hand from time-lapse microscopy images taken at 2 min intervals. The space around the cell, originating at the cell's centroid, was divided into eight equal radial partitions with the anterior axis aligned with the longest initial cellular protrusion and fixed in this position. The orientation of protrusions was calculated based on the number of protrusions that were extended into each of the eight partitions over 12 hours. The polarization index, α , of cellular protrusions was calculated by equation 1, where C 2 is the number of protrusions produced along the lateral axis of each cell over 12 hours, i.e. quadrants 3 & 7 in the orientation radial plot (refer to references 1,4 ). C 1 (quadrants 1 & 5) is the number of protrusions produced along the anteroposterior axis of the cell over 12 hours, with the anterior axis being set by the initial polarization of the cell at time zero. In effect, this parameter measures the re-orienting ability of the cellular protrusions away from their initial orientation over 12 hours. A value near 1 means the cell remains highly polarized along its initial polarization axis, whereas a value near zero indicates that the cell protrusions explored every angle equally. A one-way ANOVA and Tukey post-test were used to calculate significance.
Image pre-processing for reflection confocal collagen images. The non-uniform background, I B (x,y), in the raw image, I R (x,y) (Supplementary Fig. 2A), caused by interference patterns of the reflected incident laser light, was estimated by interpolating the intensity distribution as a function of radial distance from the center of the image, I B (r). The relation between intensity and radial distance from the center of the image was established from the angular-averaged intensity at different radial distances. Raw reflection images were then normalized to I N (x, y) using the relationship in equation 2 below (Supplementary Fig. 2B), where < > denotes the mean value.
I N (x, y) = [I R (x, y) / I B (x, y)] × <I B > (2)
Further, we enhanced the fibrous structure in the reflection confocal image by applying a fiber enhancement filtering (FEF) algorithm. The FEF procedure, in brief, is as follows. First, a mask filter (MSK θ ) of 5 pixels by 5 pixels is created with a line across the center at orientation θ . A 2-D anisotropic Gaussian filter (GF θ ) was obtained from the normalized product of the line mask and a Gaussian filter of the same size (5 by 5). A 2-D anisotropic average filter (AF θ , 5 by 5 in size) was also obtained from the normalized line mask. The raw image was convolved with these two filters respectively to obtain the filtered image FEF θ , which is the result of subtracting the average-filter-convolved image from the Gaussian-filter-convolved image. FEF θ at 15 different orientations distributed incrementally from 0 to 180 degrees was used, and the fiber-enhanced filtered image (I FEF ) is obtained from the maximum intensity projection of FEF θ (Supplementary Fig. 2C). The binary FEF image (I BW ) is obtained by applying a suitable threshold value (th) to the fiber-enhanced filtered image (I FEF ) to highlight the collagen fibers (Supplementary Fig. 2D). The standard deviation of the Gaussian filter was tested and found to be optimal at a value of 0.7. All image processing was conducted in MATLAB. All images were taken at the same magnification and size (1024 × 1024 pixels, 60x magnification) and were analyzed at the same size.
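As a rough illustration of the fiber enhancement filtering described above, the Python sketch below builds an oriented 5 × 5 line mask, forms the anisotropic Gaussian and average filters from it, convolves the image with both, subtracts, and takes the maximum over 15 orientations before thresholding. The original processing was done in MATLAB; the kernel construction details here (line rasterization, handling of σ = 0.7) are assumptions, and only the overall structure follows the text.

```python
import numpy as np
from scipy.ndimage import convolve

def line_mask(size=5, theta_deg=0.0):
    """Binary size x size mask with a line through the center at angle theta."""
    m = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(theta_deg)
    for r in np.linspace(-c, c, 4 * size):          # sample points along the line
        i = int(round(c - r * np.sin(t)))
        j = int(round(c + r * np.cos(t)))
        if 0 <= i < size and 0 <= j < size:
            m[i, j] = 1.0
    return m

def fiber_enhancement_filter(image, sigma=0.7, n_angles=15, size=5):
    """Max over orientations of (Gaussian line filter - average line filter) responses."""
    c = size // 2
    y, x = np.mgrid[-c:c + 1, -c:c + 1]
    gauss = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    out = np.full(image.shape, -np.inf)
    for theta in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        mask = line_mask(size, theta)
        gf = mask * gauss
        gf /= gf.sum()                               # anisotropic Gaussian filter GF_theta
        af = mask / mask.sum()                       # anisotropic average filter AF_theta
        out = np.maximum(out, convolve(image, gf) - convolve(image, af))
    return out

def binarize(i_fef, threshold):
    """Threshold the fiber-enhanced image to obtain the binary fiber map I_BW."""
    return (i_fef > threshold).astype(np.uint8)
```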
Collagen pore size analysis. Relative differences in the mean pore size for the different concentration collagen matrices were estimated from the binarized reflection confocal images (refer to previous section for image pre-processing details, Supplementary Fig. 2) using a sequential morphological open operation based on techniques described by Gonzalez and Woods 48 ( Supplementary Fig. 3). First, the inverse or conjugate of the binary image was created, i.e. equation 3 below, to highlight the void regions between fibers (Supplementary Fig. 3A).
I invert (x, y) = 1 − I BW (x, y) (3)
This conjugated image was then subjected to a sequential morphological opening operation of increasing width using a disk shape (Supplementary Fig. 3B-D). After morphological opening with a disk of radius r, the remaining pore area, which is the summation of intensity in the image, I(r), represents the pores with characteristic size larger than r (Supplementary Fig. 3E). Therefore, the areas in the image with pores of radius equal to r are estimated by equation 4 below.
−dI(r) = I(r) − I(r + dr) (4)
The frequency distribution of pore sizes within an image was approximated by this quantity, as given in equation 5 below.
f(r) ≈ −dI(r) (5)
For each collagen concentration, the mean of the images' pore size distributions was calculated and plotted (Fig. 2). These values were then normalized within the set of varying collagen concentrations and plotted in correlation graphs (Figs 2 and 3).
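The pore-size estimate of equations 3-5 can be sketched with standard morphological tools; the version below uses scikit-image as a stand-in for the original MATLAB implementation, and the choice of radii and the normalization are illustrative assumptions.

```python
import numpy as np
from skimage.morphology import binary_opening, disk

def pore_size_distribution(i_bw, radii):
    """Approximate pore-size frequency from a binary fiber image (fibers = 1, pores = 0)."""
    i_invert = (1 - i_bw).astype(bool)                  # equation 3: highlight void regions
    areas = []
    for r in radii:
        opened = binary_opening(i_invert, disk(r))
        areas.append(float(opened.sum()))               # I(r): area of pores larger than r
    areas = np.asarray(areas)
    neg_di = areas[:-1] - areas[1:]                     # equation 4: -dI(r) = I(r) - I(r + dr)
    freq = neg_di / neg_di.sum() if neg_di.sum() > 0 else neg_di   # equation 5, normalized
    mean_radius = float(np.sum(freq * np.asarray(radii[:-1])))
    return freq, mean_radius

# Example: radii in pixels, converted to um elsewhere using the known pixel size
# freq, mean_r = pore_size_distribution(i_bw, radii=list(range(1, 25)))
```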
Collagen fiber alignment analysis. After pre-processing of the x, y reflection confocal images to enhance fiber detail (see "Image Pre-Processing…" section of Methods and Supplementary Fig. 2), the alignment of the fibers, i.e. the anisotropy of the white space in the resulting binary image (e.g. Supplementary Fig. 2D), was estimated using a discrete Fourier Transform (FT) method similar to that described by Sander and Barocas 49 . Broadly, this method measures the anisotropy (alignment) across the image of reflected light intensity peaks (fibers) after filtering and binarizing. Supplementary Fig. 4 shows example computer-generated images and experimental pre- and post-processed reflection images as well as their resulting polar FT plots and alignment values. The FT allows a spatially complex digital image to be described in terms of the frequency of its components. The two-dimensional FT magnitude, |FT(u, v)| = [R(u, v) 2 + I(u, v) 2 ] 1/2 , where R(u, v) and I(u, v) are the real and imaginary parts of FT(u, v), was computed (Supplementary Fig. 4B,E). A radial (Radon-type) transformation was then applied to calculate the integrated intensity (F I (θ )) over a line through the center at different orientations in the FT image. This polar distribution of integrated FFT power was then transformed to Cartesian coordinates, represented by equation 8 below (Supplementary Fig. 4C,F). The two eigenvalues (λ 1 and λ 2 , where λ 1 > λ 2 ) of the matrix [C xy T C xy ] are used to calculate the alignment index, α , as α = 1 − λ 2 /λ 1 . A value of α = 1 represents a completely aligned matrix of fibers, and α = 0 represents an isotropic matrix of fibers.
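A minimal numerical sketch of the Fourier-based alignment index is given below: it computes the 2D power spectrum of the binary image, integrates the power along lines through the center at a set of orientations, converts the polar profile to Cartesian points, and takes α = 1 − λ2/λ1 from the eigenvalues of C_xy^T C_xy. Sampling details (number of angles, line interpolation, DC removal) are assumptions rather than the authors' exact procedure.

```python
import numpy as np

def alignment_index(i_bw, n_angles=90):
    """Fiber alignment from the anisotropy of the 2D Fourier power spectrum of a binary image."""
    ft = np.fft.fftshift(np.fft.fft2(i_bw.astype(float)))
    power = np.abs(ft) ** 2
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    power[cy, cx] = 0.0                                  # drop the DC term (mean intensity)
    rmax = min(cy, cx) - 1
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    f_theta = np.empty(n_angles)
    r = np.arange(-rmax, rmax + 1)
    for k, th in enumerate(thetas):
        # integrate power along a line through the spectrum center at orientation theta
        ii = np.clip(np.round(cy + r * np.sin(th)).astype(int), 0, power.shape[0] - 1)
        jj = np.clip(np.round(cx + r * np.cos(th)).astype(int), 0, power.shape[1] - 1)
        f_theta[k] = power[ii, jj].sum()
    # polar profile -> Cartesian points, then eigenvalues of C_xy^T C_xy
    c_xy = np.column_stack((f_theta * np.cos(thetas), f_theta * np.sin(thetas)))
    lam = np.sort(np.linalg.eigvalsh(c_xy.T @ c_xy))[::-1]
    return 1.0 - lam[1] / lam[0]          # 1 = completely aligned, 0 = isotropic
```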
Determination of elastic modulus. A strain-controlled rheometer with steel cone-and-plate geometry (25 mm in diameter; RFS3; TA Instruments) was used to measure viscoelastic properties of the varying concentrations of collagen matrices. First, collagen matrices were gelled between the cone and plate for 30 min at a temperature of 37 o C in a humidified environment. A dynamic strain sweep was performed from 0.1% to 100% at a frequency of 1 rad/s to determine the region of linear elastic behavior, where the storage modulus G' is independent of strain, i.e. below the critical strain where the network structure is disrupted. Three replicate experiments were performed for each collagen concentration, with new gels being formed for each replicate. Based on these results, a strain in the linear elastic regime for each collagen gel concentration was chosen (0.3% for 1-2 mg/ml, 0.8% for 2.5-6 mg/ml). Then new gels were made and a frequency sweep was performed at said strain from 0.1-100 rad/s. This was again repeated for three separate gels for each collagen concentration. The storage modulus was found to be independent of frequency in the range of ~6-100 rad/s for all concentrations, and this is the G' that is reported. | 8,050 | 2015-10-01T00:00:00.000 | [
"Biology",
"Materials Science",
"Engineering"
] |
A comprehensive meta-analysis of transcriptome data to identify signature genes associated with pancreatic ductal adenocarcinoma
Purpose Pancreatic ductal adenocarcinoma (PDAC) has a five-year survival rate of less than 5%. Absence of symptoms at primary tumor stages, as well as the high aggressiveness of the tumor, can lead to high mortality in cancer patients. Most patients are recognized at an advanced or metastatic stage, when surgical options are limited, because of the lack of reliable early diagnostic biomarkers. The objective of this work was to identify potential cancer biomarkers by integrating transcriptome data. Methods Transcriptomic data comprising 11 microarray datasets were retrieved from the GEO database. After pre-processing, a meta-analysis was applied to identify differentially expressed genes (DEGs) between tumor and non-tumor samples across the datasets. Next, co-expression analysis, functional enrichment and survival analyses were used to determine the functional properties of the DEGs and identify potential prognostic biomarkers. In addition, some regulatory factors involved in PDAC, including transcription factors (TFs), protein kinases (PKs), and miRNAs, were identified. Results After applying the meta-analysis, 1074 DEGs including 539 down- and 535 up-regulated genes were identified. Pathway enrichment analyses using Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) revealed that the DEGs were significantly enriched in the HIF-1 signaling pathway and focal adhesion. The results also showed that some of the DEGs were assigned to TFs that belonged to 23 conserved families. Sixty-four PKs were identified among the DEGs, with the CAMK family being the most abundant group. Moreover, investigation of the corresponding upstream regions of the DEGs identified 11 conserved sequence motifs. Furthermore, weighted gene co-expression network analysis (WGCNA) identified 8 modules, most of which were significantly enriched in the Ras, p53, and MAPK signaling pathways. In addition, several hubs in the modules were identified, including EMP1, EVL, ELP5, DEF8, MTERF4, GULP1, CAPN1, IGF1R, HSD17B14, TOM1L2 and RAB11FIP3. According to the survival analysis, the expression levels of two genes, EMP1 and RAB11FIP3, are related to prognosis. Conclusion We identified several genes critical for PDAC based on a meta-analysis and systems biology approach. These genes may serve as potential targets for the treatment and prognosis of PDAC.
Introduction
Pancreatic cancer (PC) is one of the most lethal types of cancer, with exocrine cells accounting for approximately 95 percent of cases.This type of PC is commonly known as PDAC [1], and is one of the most common malignant tumors of the gastrointestinal tract, as well as the seventh leading cause of cancer death worldwide [2], with a five-year survival rate of less than 5%.According to projections, PDAC will overtake breast and colorectal cancer as the second leading cause of cancer death by 2030.It has been observed that there is high mortality and very poor prognosis of PDAC as a result of unclear early symptoms and lack of specific molecular markers for early diagnosis.Most patients with advanced cancer are diagnosed with local invasion or distant metastasis [3].Therefore, identification of the key genes and pathways is necessary to deepen our understanding of the molecular mechanisms of PDAC that can provide reliable biological markers and treatment targets [4].
Gene expression has also become a powerful tool in recent years for predicting the role and activity of genes. Advances in high-throughput measurement technologies and the large amount of gene expression data in public databases provide an opportunity to obtain more reliable and transparent results. In addition, meta-analysis techniques have been increasingly employed to integrate data from different resources; they are especially useful for combining several datasets related to the same disease when those datasets are limited in size [4], thereby increasing statistical power [5]. Furthermore, exploration of interactions among genes can help to better explain the complex mechanisms of biological processes. Co-expression analysis identifies genes that have similar expression patterns; genes with a high degree of co-expression are likely to be involved in a common biological process or metabolic pathway [6]. WGCNA identifies correlation patterns among genes, detecting highly correlated gene modules and summarizing them through hub genes and candidate biomarkers [7]. WGCNA has been applied to several types of cancer with promising results [8].
In this study, we applied large-scale microarray data in a meta-analysis to find DEGs associated with PDAC. Following that, WGCNA was used to identify co-expressed genes. Various bioinformatic methods were also applied to help identify the most important candidate genes that can be considered potential biomarkers and therapeutic targets in PDAC.
Data collection
Microarray-based expression datasets were retrieved from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/) (Fig 1). To investigate transcriptome responses in pancreatic cancer, we used 11 datasets. We selected only datasets with both tumor and normal samples, which included 202 samples of normal tissue and 307 samples of tumor tissue (S1 Table).
Datasets processing and meta-analysis
The preprocessing steps were performed on each platform independently. Affymetrix datasets were preprocessed and normalized with the Robust Multi-Array Average (RMA) approach [9] using Expression Console (Affymetrix, Santa Clara, CA, USA). The expression values of Agilent microarray data were normalized using the Loess algorithm. The probe set IDs were mapped to gene symbols according to the probe annotation files. Next, expression values of the same gene symbols were collapsed based on the mean value of each gene in each dataset. In addition, genes with low expression levels and low variation in expression values were removed. After processing, a meta-analysis was conducted to identify DEGs using the RankProd method in the MetaDE R package [10]. Genes with a False Discovery Rate (FDR) of less than 0.05 were considered significant DEGs.
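For orientation, the rank-product idea behind the meta-analysis can be summarized as: rank each gene's fold change within every dataset, combine the ranks geometrically, and assess significance by permutation. The Python sketch below is a simplified, hypothetical re-implementation of that logic (the study itself used the MetaDE/RankProd R packages); it ranks for up-regulation only and uses a crude permutation estimate of the false-prediction rate.

```python
import numpy as np

def rank_product(logfc):
    """Geometric mean of per-dataset ranks; logfc is genes x datasets, rank 1 = most up-regulated."""
    logfc = np.asarray(logfc, dtype=float)
    ranks = np.argsort(np.argsort(-logfc, axis=0), axis=0) + 1
    return np.exp(np.log(ranks).mean(axis=1))

def permutation_pfp(logfc, n_perm=100, seed=0):
    """Crude percentage-of-false-prediction (FDR-like) estimate for the observed rank products."""
    rng = np.random.default_rng(seed)
    logfc = np.asarray(logfc, dtype=float)
    rp_obs = rank_product(logfc)
    null = np.concatenate([
        rank_product(np.column_stack([rng.permutation(logfc[:, j]) for j in range(logfc.shape[1])]))
        for _ in range(n_perm)
    ])
    e_fp = np.array([(null <= rp).sum() for rp in rp_obs]) / n_perm   # expected false positives
    rank_of_gene = rp_obs.argsort().argsort() + 1
    return np.clip(e_fp / rank_of_gene, 0.0, 1.0)

# Example: 1000 genes x 11 datasets of toy log fold changes
logfc = np.random.default_rng(1).normal(size=(1000, 11))
pfp = permutation_pfp(logfc, n_perm=20)
```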
Gene enrichment analysis
GO enrichment and KEGG pathway analyses were performed for the DEGs using the g:Profiler database (https://biit.cs.ut.ee/gprofiler/gost) with an adjusted P-value significance threshold of ≤ 0.05.
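Over-representation tests of this kind reduce to a hypergeometric (one-sided Fisher) test per term followed by multiple-testing correction. The sketch below illustrates that statistic with SciPy; the term-to-gene mapping and g:Profiler's own correction method are not reproduced, and the function names are illustrative.

```python
import numpy as np
from scipy.stats import hypergeom

def enrichment_p(deg_genes, term_genes, background_size):
    """P(overlap >= k) when drawing len(deg_genes) genes from a background containing the term genes."""
    deg_genes, term_genes = set(deg_genes), set(term_genes)
    k = len(deg_genes & term_genes)
    return hypergeom.sf(k - 1, background_size, len(term_genes), len(deg_genes))

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    order = p.argsort()
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty_like(p)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out
```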
Protein-protein interaction network analysis
To investigate the interactions among the DEGs, a protein-protein interaction (PPI) network was constructed with the STRING database (http://string-db.org) using a minimum required interaction score of > 0.4. The PPI network was visualized using Cytoscape (version 3.7.1) software.
Cis-elements analysis
CREs (cis-regulatory elements) are among the most important factors regulating gene expression in various tissues and diseases. The 1500 bp upstream flanking regions of the DEGs were extracted from Ensembl (https://www.ensembl.org/index.html). The MEME online tool (http://meme-suite.org/tools/meme) was used to discover conserved motifs [12]. The Tomtom tool (http://meme-suite.org/tools/tomtom) was used to assign known CREs based on the JASPAR CORE 2018 motif database [13]. The GOMo tool (http://meme-suite.org/tools/gomo) was also applied to identify possible roles and GO terms for the motifs [14].
Weighted gene co-expression network analysis
To build the DEG co-expression network and to identify highly correlated genes, we used the WGCNA R package [15]. First, the matrix of normalized expression values of the DEGs was used to calculate Pearson's correlation coefficient between gene pairs. Next, this similarity matrix was transformed into a topological overlap measure (TOM). A hierarchical clustering tree was constructed and modules were detected with the cutreeDynamic function using a cut-off of a minimum module size of 30 genes. Subsequently, the network was visualized using Cytoscape software and hub genes were identified using the cytoHubba plug-in [16]. In addition, GO and KEGG pathway analyses of the modules were conducted with DAVID (https://david.ncifcrf.gov/) under a significance threshold of FDR < 0.05.
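The core WGCNA computation can be outlined as follows: soft-thresholded adjacency from Pearson correlations, topological overlap, hierarchical clustering on 1 − TOM, and a tree cut into modules. The Python sketch below mirrors those steps for illustration only; the study used the WGCNA R package, and the simple maxclust cut here stands in for the dynamic tree-cut algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def wgcna_modules(expr, beta=12, n_modules=8):
    """expr: samples x genes matrix; returns one module label per gene (simplified unsigned network)."""
    corr = np.corrcoef(expr, rowvar=False)           # Pearson correlation between gene pairs
    adj = np.abs(corr) ** beta                       # soft-thresholded adjacency
    k = adj.sum(axis=0) - 1.0                        # connectivity, excluding the self-edge
    l = adj @ adj - 2.0 * adj                        # sum_u a_iu * a_uj over u != i, j (diag(adj) = 1)
    tom = (l + adj) / (np.minimum.outer(k, k) + 1.0 - adj)
    np.fill_diagonal(tom, 1.0)
    dissim = 1.0 - tom                               # topological-overlap dissimilarity
    z = linkage(squareform(dissim, checks=False), method="average")
    return fcluster(z, t=n_modules, criterion="maxclust")   # stand-in for the dynamic tree cut

# Example: 50 samples x 200 genes of toy expression values
labels = wgcna_modules(np.random.default_rng(0).normal(size=(50, 200)))
```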
Survival analysis for identifying biomarker genes
To explore the potential prognostic value of the hub genes, we used the Gene Expression Profiling Interactive Analysis (GEPIA) database (http://gepia.cancer-pku.cn/) for PDAC to perform an overall survival analysis using the Mantel-Cox (log-rank) test. A log-rank P < 0.05 was considered statistically significant. Furthermore, the hazard ratio (HR) was calculated based on the Cox proportional-hazards model.
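An equivalent offline analysis of median-dichotomized expression, shown below with the lifelines package, illustrates the log-rank test and Cox hazard ratio used for the prognosis assessment; the clinical table layout and column names are assumptions, since the study performed these tests through the GEPIA web service.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def prognostic_test(df: pd.DataFrame, gene: str):
    """df holds one row per patient with columns [gene, 'time', 'event']; split at the median expression."""
    high = df[gene] >= df[gene].median()
    lr = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                      event_observed_A=df.loc[high, "event"],
                      event_observed_B=df.loc[~high, "event"])
    cph = CoxPHFitter().fit(df[[gene, "time", "event"]], duration_col="time", event_col="event")
    return lr.p_value, float(cph.hazard_ratios_[gene])

# Example with a hypothetical clinical table: p, hr = prognostic_test(clinical_df, "EMP1")
```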
Identification of DEGs
The R package MetaDE was utilized to identify DEGs.DEGs were filtered by the criterion of FDR < 0.05.Consequently, a total of 1074 DEGs, including 535 up-and 539 down-regulated genes, were identified (S2 Table ).GO analysis of the DEGs revealed significant enrichment in the GO terms for 109 Biological Processes (BPs), 38 Cellular Components (CCs), 22 Molecular Functions (MFs), and also 2 KEGG pathways (adjusted P-value < 0.01 as cut-offs) (S3 Table ).BPs include regulation of catalytic activity, metabolism of proteins, cellular response to organic substances, regulation of developmental processes, regulation of cell migration, anatomical structure morphogenesis, regulation of molecular functions, regulation of cellular component movements, and regulation of cell motility.The MFs of DEGs are mainly concentrated on protein binding, enzyme regulator activity, catalytic activity, acting on a protein, protein kinase activity, and protein serine/threonine kinase activity.The top and significant enriched KEGG pathways were HIF-1 signaling pathway and focal adhesion (Table 1).
Identification of the TFs, PKs, and miRNAs
During tumorigenesis, some transcription factors cause overexpression or suppression of target genes, as well as changes in the biology of cancer cells. As a result, targeting transcription factors is a possible strategy for cancer therapy. Among the DEGs, 152 TFs belonging to 23 conserved families were identified, of which two families were the largest groups: an unclassified group containing 73 genes and the C2H2 ZF family with 29 genes (Fig 2). Sixty-four protein kinase genes were identified and classified into 12 families, among which the CAMK family was the largest group (Table 2 and S4 Table). A total of 39 and 25 PKs were up- and down-regulated, respectively.
We investigated potential miRNAs that may be related to the DEGs using the miRTarBase database. A total of 901 miRNAs, belonging to 195 families, were found. Among the detected miRNAs, the hsa-miR-200 family had the highest frequency, with 59 members (Table 3 and S5 Table).
Cis-elements analysis
Conserved motifs and consensus CREs were detected by analyzing the 1500 bp upstream flanking regions.Eleven significant motifs were detected by the MEME database (Table 4) and further motif enrichment was performed by using the GOMO tool (S6 Table ).Gene ontology indicated that these motifs participated in sensory perception of smell, DNA damage checkpoint, regulation of organ growth, translational elongation, and RNA processing.These motifs were also involved in molecular functions such as olfactory receptor activity, structural constituent of ribosome, transcription factor activity, histone binding, proton-transporting ATPase activity, and rotational mechanism (Table 4).
WGCNA and identification of modules
We used the WGCNA approach to examine gene co-expression patterns in pancreatic cancer mRNA expression profiles. Initially, a similarity matrix was calculated based on the Pearson correlation between each DEG pair, which was converted to an adjacency matrix using a power function (β). Then, the topological overlap matrix (TOM) was calculated for hierarchical clustering analysis. Finally, a dynamic tree-cutting algorithm was implemented to identify gene expression modules. The parameters used in this study were a β power of 12 and a minimum module size of 30. Eventually, the DEGs were grouped by the dynamic tree-cutting algorithm into eight modules, which were labelled with different colors (turquoise, brown, blue, black, yellow, red, green, and pink). The modules ranged in size from 80 (pink module) to 256 (turquoise module) genes (Fig 3 and S7 and S8 Tables).
To understand the biological functions associated with modules, the enrichment analysis for the BP category was conducted by using DAVID (FDR< 0.05).The results revealed that modules were more involved in the Ras signaling pathway, p53 signaling pathway, MAPK signaling pathway, protein processing in endoplasmic reticulum, proteoglycans in cancer, focal adhesion, Rap1 signaling pathway, PI3K-Akt signaling pathway, HIF-1 signaling pathway, FoxO signaling pathway, ErbB signaling pathway, and insulin signaling pathway (Fig 4 and S9 Table).Among the modules, the highest number of TFs belonged to the turquoise module, with 22 TFs, which can indicate the regulatory role of this module (Fig 5).Moreover, a total of 11 PKs were identified in the turquoise module (S10 Table ).
Hub genes analysis in modules
To identify genes with central roles in the network, the cytoHubba plugin of Cytoscape was used to select genes with high connectivity within each module, known as hub genes. In the cytoHubba plugin, the top 10 nodes ranked by the maximal clique centrality (MCC) algorithm were taken as the hub genes of each module. Finally, 80 hub genes were selected from the eight modules (S11 Table). The hub genes were found to be significantly enriched in 12 GO terms and 4 pathways (S12 Table), including negative regulation of adiponectin secretion and cell division. The turquoise module yielded the highest number of hub TFs. The EMP1, ELP5, ABCC3, PIGN, LTBP3, PANX1, DERL2 and RPS5 genes were the top-ranking hubs in their respective modules (Table 5).
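The MCC ranking used by cytoHubba scores each node by summing (|C| − 1)! over the maximal cliques C that contain it. The NetworkX sketch below is an illustrative re-implementation of that definition, not the plugin itself.

```python
import math
import networkx as nx

def mcc_scores(graph):
    """Maximal clique centrality: sum of (|C| - 1)! over the maximal cliques C containing each node."""
    scores = {v: 0 for v in graph.nodes}
    for clique in nx.find_cliques(graph):            # enumerates maximal cliques
        w = math.factorial(len(clique) - 1)
        for v in clique:
            scores[v] += w
    return scores

def top_hub_genes(graph, n=10):
    """Return the n nodes with the highest MCC score, as done per module here."""
    scores = mcc_scores(graph)
    return sorted(scores, key=scores.get, reverse=True)[:n]
```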
Discussion
Pancreatic ductal adenocarcinoma is a gastrointestinal malignant tumor that is diagnosed at an advanced stage due to a lack of effective screening tools and biomarkers.As a result, PDCA patients have a low survival rate.Therefore, the identification of reliable biomarkers associated with prognosis and treatment in PDCA is critical.There is a large amount of transcriptome data available, which allows researchers to identify biomarkers and metabolic pathways involved in various cancers [17].In this regard, we collected 509 samples of microarray data from different datasets.Finally, 1074 DEGs were screened out by meta-analysis that had different expression levels in tumor tissues.The pathway enrichment analysis of DEGs showed that some enriched terms were related to the HIF-1 signaling pathway (KEGG:04066) and focal adhesion (KEGG:04510).HIF-1 has been reported to be involved in human cancers such as ovarian, prostate, and breast cancers.Deng et al., found that HIF-1 signaling increases in hepatocellular carcinoma compared to normal tissue and also plays a major role in cancer prognosis [18].HIF-1α is a major factor involved in the regulation of cellular responses in prostate cancer; it is activated and decreased by hypoxia and targets the HIF pathway [19].On the other hand, cancer cells consume oxygen, disrupt oxygen balance, and cause hypoxia, while cell growth and proliferation result in an increase in oxygen [20].In fact, with increasing and decreasing oxygen levels, conditions are created for tumor growth and the survival of cancer cells increases [21].However, oxygen in eukaryotic cells is essential for aerobic metabolism and ATP production.Therefore, it is important to maintain oxygen homeostasis.Focal adhesions play important roles in biological processes such as cell motility, cell proliferation, cell differentiation, regulation of gene expression, and cell survival [22].They are communicators and adhesion between cells and the extracellular matrix (ECM).Focal adhesion kinase (FAK), a cytoplasmic non-receptor tyrosine kinase, is a key regulator in FAs, which leads to FA signals on cell adhesion to the ECM [23].It has been reported that FAK is expressed in pancreatic cancer cell lines at the levels of mRNA, protein, and phosphorylated protein.It has previously been shown that FAK knockdown and FAK kinase inhibition have antitumor activity [24].
To study the regulatory mechanisms, we identified transcription factors among DEGs.The Cys2-His2 zinc finger family (C2H2-ZF) was the largest of the 152 TFs discovered.Cys2-His2 zinc finger (C2H2-ZF) proteins are the largest class of putative human transcription factors.Najafabadi et al. have indicated that the human genome contains an extensive and largely unexplored C2H2-ZF regulatory network that targets various genes and pathways [25].Munro et al. have found that somatic mutations within Cys2His2 zinc finger genes lead to widespread transcriptional dysregulation in cancer cells [26].In the present study, EGR1 was identified as one of the TFs that belong to the family of C2H2-ZF.EGR1 is involved in tumor cell proliferation, invasion and metastasis, and tumor angiogenesis.It has also been reported that δ-tocotrinol in pancreatic cancer cells stimulates the expression of EGR1, which causes apoptosis of pancreatic cancer cells [27].
The results of the meta-analysis were further investigated for protein kinases. Protein kinases are enzymes that phosphorylate target proteins to change their function. Protein kinases have been shown in previous studies to be important cancer regulators [28]. In this study, we identified several protein kinase genes associated with pancreatic cancer: sixty-four protein kinases from 12 families, the largest of which were CAMK, TyR, STE and CMGC. STKs, which are members of the STE family, are enzymes that modulate protein activity by phosphorylating serine and threonine residues [29]. This family is involved in signal transduction pathways, controlling metabolism, cell division, and angiogenesis [30]. They have also been shown to be involved in various types of cancer; for example, STK4 is involved in pancreatic [31] and colorectal cancers [32,33]. In addition, epidermal growth factor-containing fibulin-like extracellular matrix protein 1 (EFEMP1) was detected as a member of the CMGC family.
Considering the importance of miRNAs in cancer, we also identified miRNAs associated with the DEGs. A total of 901 miRNA-related genes belonging to 195 families were found, which may have important roles in pancreatic cancer. miRNAs can affect the expression profiles of genes such as oncogenes and tumor suppressor genes, and thereby contribute to the formation and progression of cancer [42]. Hsa-miR-200 was the largest family, with 59 members. Peng et al. reported that the miR-200 family is involved in the onset and metastasis of cancer [43]. According to Barshack et al., hsa-miR-200 family expression is significantly increased in liver malignancies, which can be used in tumor diagnosis [44]. Moreover, Yu et al. suggested miR-200c as a new marker for the prognosis of pancreatic cancer [45].
We also performed a promoter analysis to identify regulatory elements upstream of the DEGs.Eleven motifs with significant scores were discovered.A large number of olfactory receptor activity-associated motifs were detected in the DEG upstream promoter sequences.Olfactory receptors (ORs) are a large group of G protein-coupled receptors in the olfactory epithelium [46].ORs are expressed ectopically in many tissues, and some evidence points to the role of ORs in several diseases, including cancer [47].They are also involved in various physiological processes such as cell migration, proliferation, and secretion [48].They have also been mentioned in several studies as biomarkers for various cancer tissues such as breast cancer [47,48], bladder cancer [46], and small intestine neuroendocrine carcinomas [49].
WGCNA was performed using the DEGs obtained from the meta-analysis to identify co-expressed and hub genes, and a total of 8 modules were discovered. From each module, ten hub genes were extracted. Based on GO enrichment analysis, the hub genes act as regulators of cell proliferation as well as regulators of apoptosis and cell death. Eleven of the hub genes (EMP1, EVL, ELP5, DEF8, MTERF4, GULP1, CAPN1, IGF1R, HSD17B14, TOM1L2 and RAB11FIP3) were related to apoptosis, suggesting that these genes may be potential targets in PDAC. The survival analysis of these 11 genes revealed that EMP1, EVL, HSD17B14, MTERF4, RAB11FIP3, and TOM1L2 expression are closely related to the prognosis of patients with PDAC. EMP1 (epithelial membrane protein 1) encodes a protein located in the cell membrane. This gene is involved in biological processes such as cell death, epidermal development, and bubble assembly [50]. In a study by Liu et al., EMP1 was reported to be expressed in a large number of tumors, was shown to act as a cellular linkage molecule on cell membranes, and to play an important role in proliferation, invasion, and metastasis of tumor cells, as well as in mesenchymal–epithelial transition. Furthermore, EMP1 has been shown in several studies to be a reliable biomarker in cancers such as gastric [51], colorectal [52], ovarian [53], bladder urothelial carcinoma [54] and non-small cell lung carcinoma [55]. EVL is a member of the Ena/VASP family of proteins involved in the regulation of the actin cytoskeleton. Changes in cytoskeletal composition either stimulate or suppress tumor cell invasion and migration. Mouneimne et al. showed that EVL decreased the migration and invasion of tumor cells. Decreased EVL expression in human tumor cells is also associated with high invasive activity, increased protrusion, and decreased contraction and adhesion [56]. This gene is also involved in cervical cancer [57]. The HSD17B14 gene encodes a 17β-hydroxysteroid dehydrogenase. Sivik et al. found that HSD17B14 is a predictive marker for the tamoxifen response in breast cancer [58]. The MTERF family includes MTERF1, MTERF2, MTERF3, and MTERF4, which have roles in the pathogenesis of various cancer types. In a study by Sun et al., it was indicated that high mRNA expression levels of the MTERF family lead to an improved overall survival (OS) rate in patients with lung adenocarcinoma. Furthermore, in their study, they identified MTERFs as primary biomarkers for predicting non-small cell lung cancer [59]. RAB11FIP3 encodes the FIP3 protein, an interaction partner of the RAB11 GTPase. RAB11 GTPase is a major regulator of vesicle trafficking and belongs to a family of proteins that are susceptible to changes in human cancers [60]. Tong et al. showed that RAB11FIP3 is involved in endocytic recycling in breast cancer and promotes EGFR trafficking [61]. The next gene in the list, TOM1L2, belongs to the Tom1 family, which may be involved in the immune response and tumor suppression [62].
Conclusions
In conclusion, several bioinformatics methods were used in this study to identify novel biomarkers for pancreatic cancer. A total of 1074 DEGs were screened, and the TFs, PKs, miRNAs, and regulatory elements among them were identified. Following that, 11 important hub genes were found among the DEGs that were associated with many pathways of tumor progression. Among them, EMP1 and RAB11FIP3 were identified as candidate biomarkers for the treatment and prognosis of patients with PDAC, and may be promising prognostic indicators. A comprehensive study is required in future research to confirm the prognostic and diagnostic value of the identified biomarkers.
Fig 1 .
Fig 1.Workflow including meta-analysis and bioinformatics pipeline.Gene expression datasets of PDAC and pancreatic cancer were obtained from the GEO.The datasets were normalized and processed to identify differentially expressed genes (DEGs) between normal and tumor tissues.The significantly enriched pathways and Gene Ontology were identified through enrichment analyses.Conserved motifs and consensus cis-regulatory elements (CREs) of DEGs were detected.The WGCNA was used to cluster genes with the highest connection and identification of co-expression modules.https://doi.org/10.1371/journal.pone.0289561.g001
Fig 3 .
Fig 3. Clustering dendrograms and modules identified by weighted gene co-expression network analysis (WGCNA).The dendrogram indicates the gene clustering based on the TOM dissimilarity measure and each line indicated a gene.The colored column below the dendrogram indicates the modules conducted by the static tree cutting method at module size of 30 resulted in 8 color-coded modules.https://doi.org/10.1371/journal.pone.0289561.g003 | 4,963 | 2024-02-07T00:00:00.000 | [
"Medicine",
"Biology"
] |
Tunable cavity coupling of the zero phonon line of a nitrogen-vacancy defect in diamond
We demonstrate the tunable enhancement of the zero phonon line of a single nitrogen-vacancy color center in diamond at cryogenic temperature. An open cavity fabricated using focused ion beam milling provides mode volumes as small as 1.24 $\mu$m$^3$. In-situ tuning of the cavity resonance is achieved with piezoelectric actuators. At optimal coupling of the full open cavity the signal from individual zero phonon line transitions is enhanced by about a factor of 10 and the overall emission rate of the NV$^-$ center is increased by 40% compared with that measured from the same center in the absence of cavity field confinement. This result is important for the realization of efficient spin-photon interfaces and scalable quantum computing using optically addressable solid state spin qubits.
Introduction
Coupling of fluorescence from nanoscale quantum systems to optical microcavities provides a means to control the emission process and can be an essential element of nanophotonic device applications. The negatively charged nitrogen-vacancy (NV − ) defect in diamond is an example of a solid state system that has gained significant attention in recent years as a quantum spin register [1,2] and nanoscale sensor [3,4,5] due to its long spin coherence times and capacity for optical manipulation and readout. The recent demonstration of quantum error correction in an NV − defect [6] provides a sound basis for using these systems in practical quantum information technologies. However a coherent spin-photon interface, necessary for many quantum technologies, cannot be achieved if information is leaked via interaction with phonons, and so in the NV − center only the 'zero phonon line' (ZPL) transition at 1.945 eV (637 nm) may be used. The low Debye Waller factor (DW ≈ 0.04) of the center imposes a significant limitation, since most spontaneously emitted photons cannot be used. The enhancement of this transition and its efficient coupling to external optics are therefore important challenges to which microcavities are well suited. In particular, such enhancement is an essential requirement for generating large-scale entangled states between spatially separated defects connected via photonic networks, for which proof of principle experiments have been achieved [8] but further development is hampered by low entanglement efficiency. Success in this endeavor will provide a route to the realization of scalable quantum computers based on optical networks of electronic and nuclear spins [9,10]. Resonant coupling of the NV − ZPL to micro-ring resonators [11] and photonic crystal cavities [12,13] has been demonstrated to provide effective enhancements, but the monolithic structure of these cavities prevents positioning of the emitter at the heart of the cavity mode in-situ. Tuning of the cavity mode to the ZPL resonance can be achieved in these systems by progressive condensation of gas molecules onto the structure, thus increasing the mode volume and red-shifting the resonance, but it is difficult to optimise tuning using this procedure and it has limited scope for use in device applications. Open cavities [14,15,16,17] provide a flexible approach to cavity-coupled devices that permit full in-situ alignment and tuning, combined with efficient coupling to external optics. Here we demonstrate control over the emission from a single NV − center by coupling its ZPL to the resonant mode of an open microcavity. The open cavity geometry allows direct comparison between the emission properties of the same defect in and out of the cavity, thus providing unambiguous evidence of the effect of the cavity mode. We demonstrate clear tunable enhancement of the ZPL emission and reduction of the fluorescence lifetime of the defect in a controlled manner, and analyze our results in terms of the Purcell effect acting on the ZPL and the phonon sideband (PSB). Our work builds on recent demonstrations of room temperature coupling of NV − defects to open cavities [18,19,20].
Experimental Method
The open microcavities are of a plano-concave design, the concave features produced by focused ion beam milling of a fused silica substrate [16]. These cavities support a Hermite-Gauss mode structure with TEM 00 modes that can be effectively mode-matched to an external Gaussian beam. The radius of curvature of the concave mirror used here is 7.6 µm. The concave and planar dielectric mirrors have reflectivities of >99.99% and 99.7% respectively at the design wavelength of 637 nm, and reflection bands extending from 550 nm to 720 nm. The planar mirror is terminated with a low index layer to provide a field anti-node at its surface. The shortest cavity length achieved here has a mirror spacing of 1.11 µm providing an additional 3 field intensity maxima of the TEM 00 mode between the mirror surfaces with λ = 637 nm, as shown in figure 1(d). We label this mode with the longitudinal index q = 4, or the set of indices (q, m, n) = (4, 0, 0). The NV − defects are located in nanodiamonds of average diameter 100 nm produced using high pressure high temperature methods, which are spin cast onto the planar mirror. Registration marks on the planar mirror created using a focused ion beam allow individual defects to be identified and characterised both in and out of the cavity. The two experimental geometries used in this report are displayed in figure 1. The 'out-ofcavity' fluorescence is measured in the absence of a concave mirror and with the planar mirror as a substrate behind the nanodiamonds (fig 1a). For the 'in-cavity' experiments the planar mirror is inverted and both excitation of the NV − centers (at λ = 532 nm) and collection of fluorescence is carried out through the planar mirror. The microcavity assembly comprises a set of piezoelectric actuators that provide full control of the cavity length and relative position of the planar and concave mirrors at cryogenic temperature ( fig 1b). All measurements are carried out at 77K in a dry He exchange gas environment with the container supported in a liquid nitrogen bath cryostat [21]. Further details of the experimental apparatus can be found in section A of the supplementary information.
Results and Discussion
A low temperature fluorescence image recorded in the out-of-cavity configuration reveals well-dispersed nanodiamonds (fig 2a) with some instances of single centers recorded by the Hanbury Brown and Twiss method. The nanodiamond used in this study is circled, and its uncorrected photon intensity correlation data shown in fig 2b, revealing g (2) (0) = 0.38. Subtraction of background due to autofluorescence from the mirrors reduces this value to g (2) (0) = 0.05, implying that ≈ 97% of the fluorescence is from a single emitter. Fluorescence spectroscopy of this NV − center reveals a ZPL spectrum that is dominated by a linearly polarised doublet (fitted peaks 2 and 3 in figure 2d) indicating a high level of strain that lifts the degeneracy of the orthogonal 3 E x and 3 E y excited state dipoles of the defect (fig 2c). The line widths of the individual ZPL components are approximately 0.4 nm, with Gaussian line shapes suggesting that they are dominated by inhomogeneous broadening due to spectral diffusion. Weak additional lines are seen, potentially due to an additional NV center in close proximity. The relative intensities of peaks 2 and 3 and the angle between their polarisations (fig 2e) indicate that the NV − defect axis is oriented at 49° relative to the optical axis of the cavity, and the two orthogonal transition dipoles for lines 2 and 3 are at angles of 39° and 24° to the plane of the mirror respectively (see supplementary information section B). The ZPL measured has DW = 0.044, as is typical for NV − centers. Figure 3 shows the measured fluorescence spectrum for the in-cavity configuration as a TEM 00 cavity mode is tuned through resonance with the ZPL. Clear enhancement of the ZPL emission is observed, providing a fully saturated photon count rate of 15 kc/s. This represents an experimental enhancement of the total ZPL signal by a factor of 2.5 compared with that recorded from the defect with the planar mirror alone (see supplementary information section C). Figure 4 shows a side-by-side comparison of the optical properties of the NV − center out of the cavity, with that observed in the cavity at optimal tuning to the ZPL. Figure 4a shows the spectra over the full range of NV − emission, illustrating the extent to which the coupled ZPL dominates the measured fluorescence, a result of the cavity having no other modes in this range that couple efficiently into the objective lens. Figure 4b shows a comparison between the fluorescence decay characteristics. The lifetime of the out-of-cavity defect is measured as (30.8 ± 0.6) ns while that in the cavity is (22.1 ± 0.4) ns, corresponding to a (39.5 ± 0.7) % increase in the emission rate. The slight deviation from a single exponential decay in the in-cavity data at longer delay time is suspected to be due to spectral instability of the cavity mode leading to inhomogeneous broadening. Figure 4c shows the photon autocorrelation data, which reveal a reduced g (2) (0) correlation value of 0.28, suggesting a slight improvement in the isolation of the single emitter when only the cavity-coupled ZPL is measured.
We separate the analysis of our results into two parts -a semi-analytic treatment of the coupling of the ZPL to the TEM 00 mode and a numerical treatment of the coupling of the PSB to other cavity modes present. We begin with the ZPL coupling, for which we express the wavelength-dependent enhancement of the emission of a given dipole by a single cavity mode as
F µ (λ) = F max ξ µ δλ cav 2 / [δλ cav 2 + 4(λ − λ cav ) 2 ], (1)
where F max = 3Qλ cav 3 /(4π 2 V mode ) is the maximum rate enhancement assuming perfect spatial alignment and orientation, ξ µ = (|µ·E| / |µ||E max |) 2 is the spatial overlap and orientation factor between the emitting dipole µ and the cavity electric field E, λ cav is the cavity wavelength and Q is the cavity quality factor. As we are able to position the emitter at the electric field maximum we assume that E = E max so that ξ µ = cos 2 (θ) where θ is the angle between the dipole and the plane of the mirror.
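A small numerical sketch of equation 1 is given below; the mode volume, line widths, and dipole angle plugged in are taken from values quoted elsewhere in the text and should be read as approximate assumptions for this particular cavity configuration.

```python
import numpy as np

def f_max(lam_um, q, v_mode_um3):
    """Maximum Purcell enhancement 3 * Q * lambda^3 / (4 * pi^2 * V_mode) at a field antinode."""
    return 3.0 * q * lam_um ** 3 / (4.0 * np.pi ** 2 * v_mode_um3)

def f_mu(lam_um, lam_cav_um, dlam_cav_um, q, v_mode_um3, theta_deg):
    """Equation 1: Lorentzian cavity line shape times the cos^2 orientation factor."""
    xi = np.cos(np.deg2rad(theta_deg)) ** 2
    lorentz = dlam_cav_um ** 2 / (dlam_cav_um ** 2 + 4.0 * (lam_um - lam_cav_um) ** 2)
    return f_max(lam_cav_um, q, v_mode_um3) * xi * lorentz

# Assumed numbers: the quoted minimum mode volume (1.24 um^3) and an effective Q built from the
# 0.7 nm cavity and 0.4 nm emitter line widths given in the text.
q_eff = 0.637 / (0.0007 + 0.0004)
print(round(f_max(0.637, q_eff, 1.24), 1))   # ~9, of the same order as the 9.2 quoted for peak 3
```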
The fractional increase in the total emission rate arising from the coupling with the ZPL is given by
F ZPL = Σ µ n µ ∫ F µ (λ) S µ (λ) dλ, (2)
where n µ are the branching factors of the two dipoles µ in the excited state and S µ are their normalized free emission spectra collected over all directions. For peaks 2 and 3 of the ZPL doublet n µ = 0.44 and 0.56 respectively (see supplementary information section B). F ZPL as defined includes the Debye Waller factor through the relative weight of S µ corresponding to the ZPL. We rewrite equation 2 as
F ZPL = F max ∫ [δλ cav 2 / (δλ cav 2 + 4(λ − λ cav ) 2 )] S ax (λ) dλ, (3)
where S ax (λ) = Σ µ n µ ξ µ S µ (λ) is the normalized spectrum emitted along the cavity axis, to which the spectrum shown in figure 2(c) is a good approximation when appropriately scaled. The spectrum emitted into the cavity mode now reads
S cav (λ) ∝ [δλ cav 2 / (δλ cav 2 + 4(λ − λ cav ) 2 )] S ax (λ), (4)
which, taking δλ cav = 0.7 nm, reproduces well the measured tuning spectra in fig 3a. The same value of δλ cav used in equation 3 gives F ZPL = 0.25. This quantity is equal to the ratio of the overall ZPL emission rate into the cavity mode with the total emission rate in free space.
Based on the measured Debye Waller factor of 0.044 and the branching factor for peak 3 of 0.56, the cavity-induced enhancement of emission from peak 3 is found to be about a factor of 11. This compares with a value of 9.2 obtained using equation 1 with the effective Q factor Q eff = λ/(δλ cav + δλ emitter ), in which δλ emitter = 0.4 nm, the fitted line width for peak 3.
Modification of the optical density of states experienced by the phonon sideband was determined using numerical Finite Difference Time Domain (FDTD) calculations that reflect the full dielectric environment of the NV − center (see supplementary information section D for details). These calculations allow confirmation of the cavity parameters by matching the simulated mode spectrum to that measured, and a direct prediction of the change in emission rate that will occur between the in-cavity and out-of-cavity experimental configurations. Figure 5 shows semi-logarithmic plots of three cavity-coupled spectra with the (5,0,0) mode tuned to the ZPL. This measured spectrum (fig. 5a) is chosen as it reveals the positions of other modes coupling to the PSB and therefore provides a good reference by which to verify the geometrical parameters of the cavity. The relative Purcell factor (fig. 5b) and the resultant semi-empirical prediction for the emission power density spectrum of the NV − center in the cavity (fig 5c) are also shown. The PSB emission couples to cavity modes with longitudinal index q = 4 and transverse indices 0, 2, and 4. The absence of observed coupling to modes with odd transverse indices suggests that the NV − center is well positioned on the cavity axis of symmetry, where the electric field intensities of these modes drop to zero.
Integrating the emitted power density spectrum between 640 nm and 740 nm reveals the Purcell enhancement of the NV − center PSB emission to be F PSB = 0.93 (i.e., a 7% suppression of emission relative to the out-of-cavity geometry). The total change in emission rate for the NV − center is thus predicted to be F ZPL + F PSB = 1.14, significantly lower than the measured value of 1.395. Figure 6a shows the calculated Purcell enhancement factors of the ZPL and PSB for different TEM 00 modes tuned into optimal resonance with the ZPL. The ZPL data points correspond to the enhancement of the emission rate corresponding to the entire ZPL: an estimate of the enhancement of peak 3 alone can be obtained by multiplying these values by 1/n 3 = 1.8. Figure 6b reveals that the theory above underestimates the experimentally observed lifetime changes for each of these cavity lengths. The total emission rate is predicted to remain approximately the same for q = 4, 5 and 6 because the increase in ZPL emission as the cavity length is reduced is compensated by a reduction in PSB emission.
We attribute the difference between the predicted and measured behavior to the presence of inhomogeneous broadening, both in the NV − center ZPL due to spectral drift and in the cavity linewidth due to mechanical instability in our apparatus. Under such conditions the fastest component of the measured emission decay curve corresponds to near-resonant alignment and will primarily reflect the homogeneous line widths. We introduce inhomogeneous broadening of the cavity mode into our analytic calculations above by convoluting the mode line width with a Gaussian function g(λ cav ). The change in the NV − emission rate due to the ZPL coupling is then given by (supplementary information section E)
F ZPL,inhom = ∫ g(λ cav ) F ZPL (λ cav ) dλ cav .
A cavity line width of δλ = 0.2 nm, corresponding to the value measured by transmission spectroscopy in a nominally identical cavity, combined with an inhomogeneous broadening of 0.5 nm, gives F ZPL,inhom = 0.364, whilst leaving the detuning spectra in figure 3 relatively unchanged. The overall changes in the NV emission rates are plotted in figure 6b, and are seen to agree more closely with, although still consistently underestimate, the measured values. We attribute the remaining discrepancy between the measured and modelled lifetimes to inhomogeneous broadening of the ZPL, which is more difficult to quantify and which the semi-empirical calculation method described above cannot easily accommodate. A simple indication of the potential effect on the lifetimes in this experiment can be obtained from equation 1, however, in which a ZPL homogeneous line width of order 0.1 nm, consistent with values measured at this temperature in bulk materials [24], and a cavity line width of 0.2 nm, give F µ = 33.6 when resonantly tuned to peak 3. The resultant emission rate increase is F tot = 1.71, suggesting that the measured rate increase can indeed be accounted for by a combination of cavity and ZPL inhomogeneities. For completeness we note that an unknown parameter for our NV − center is its quantum efficiency (QE), which the above treatment assumes to be unity. Recent reports have demonstrated that nanodiamonds of the size used here can have QE as small as 0.3 [25]. The effect of reduced QE would be to require higher still Purcell factors to achieve the measured lifetime change, since the enhancement acts only on the radiative term.
In conclusion we have shown the controlled coupling of the ZPL of a single NV − center in nanodiamond to an open cavity at cryogenic temperatures. This cavity system shows significant potential for interfacing the NV − center with photonic networks and performing quantum operations and measurements. The degree of enhancement is currently limited by the line width of the NV − centers in the nanodiamond, and by the cavity line width as determined by scattering and instability of the low temperature cavity assembly. Improvements in these areas are readily achievable and are expected to produce much larger enhancements than those reported here. Open cavity quality factors exceeding 10 6 have been demonstrated [17], and modest further reductions in cavity mode volume are clearly also possible. Single NV − defects implanted into bulk diamond at depths of 100 nm can offer ZPL line widths as narrow as 27 MHz [26] and electron spin coherence times > 100 µs. Resonant ZPL coupling of NV − centers situated in diamond membranes to open cavities can thus result in enhancement of the emission rate into the ZPL by a factor of > 10 3 , leading to an effective Debye Waller factor approaching unity and indistinguishable photons with lifetimes of a few hundred picoseconds. Such projections suggest that the experimental configuration demonstrated here is an attractive route towards an efficient spin/photon interface and to the construction of scalable quantum processors.
A) Experimental apparatus
(Newport FSM300), outlined in figure 7. 532 nm CW excitation is used for imaging and spectroscopy, with 532 nm pulsed excitation available for PL lifetime experiments (Teem Photonics-SNG-20F-1SO, repetition rate = 20 kHz). Excitation is coupled through a single mode fiber (Thorlabs SM460HP) with polarisation control (Thorlabs FPC030). The cavity apparatus is situated in a dry He-exchange gas environment (50 mbar), immersed in a liquid nitrogen bath cryostat with optical access. The apparatus consists of a custom sample stage, allowing independent piezo actuation of the mirror substrates. The planar mirror substrate has all the translational degrees of freedom (Attocube: 2 x ANPx100, 1 x ANPz100), whilst the cavity substrate can only move vertically (Attocube ANPz30). All stepper motors are driven with an Attocube ANC300 control module. A low temperature compatible achromatic objective (Attocube ASWDO x50 0.82NA) is used for optical excitation and collection. Spectral filters can be placed in the collection path. Cavity-coupled HBT, lifetime and power saturation measurements are taken in the 633-647 nm spectral window using an additional band pass filter (Semrock FF01-640/14-25). Fluorescence is coupled into single photon avalanche detectors (Perkin-Elmer SPCM-AQRH) via a single mode fiber (Thorlabs SM600). A 1x2 50:50 fiber splitter (Thorlabs FCMM50-50A) with a second single photon detector is used for HBT measurements. An Edinburgh Instruments TCC900 card provides the correlation electronics for the HBT and lifetime measurements. The output fiber can also be coupled
B) Calculation of the ZPL dipole orientations
The orientations of the dipoles for the two strain-split ZPL transitions are calculated by determining the polar angle of the NV axis relative to the optical axis (θ) and the rotation of the dipoles about the NV axis (β), which together determine the projections onto the observation plane.
A thermal distribution between the populations of the two excited states comprising the ZPL doublet is also assumed, consistent with the findings of Fu et al [24]. The energy splitting of peaks 2 and 3 is 1.5 meV, so that the population ratio for the levels responsible for peaks 2 and 3 at 77K is 0.8:1. The measured intensity ratio between peaks 2 and 3 due to the projection of this thermally distributed dipole pair onto the measurement plane is 0.58:1, so the equivalent projection of a circle formed by two perpendicular dipoles of equal strength would result in a ratio of R = 0.73.
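For concreteness, the thermal weighting and the corrected ratio quoted above are consistent with the following back-of-the-envelope check (using k_B · 77 K ≈ 6.6 meV); this is one reading of the numbers in the text rather than the authors' derivation.

```latex
\frac{n_2}{n_3} = \exp\!\left(-\frac{\Delta E}{k_B T}\right)
               = \exp\!\left(-\frac{1.5\,\mathrm{meV}}{6.6\,\mathrm{meV}}\right) \approx 0.8,
\qquad
R \approx \frac{I_2/I_3}{n_2/n_3} = \frac{0.58}{0.8} \approx 0.73 .
```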
The polar angle θ is obtained from the sum of the polar intensity distributions of peaks 2 and 3, whereby the extrema have the following dependence on θ.
The combined rotation matrix for the axial, then polar, rotation is

\[
R(\theta,\beta) =
\begin{pmatrix}
\cos\beta & -\sin\beta & 0 \\
\cos\theta\,\sin\beta & \cos\theta\,\cos\beta & -\sin\theta \\
\sin\theta\,\sin\beta & \sin\theta\,\cos\beta & \cos\theta
\end{pmatrix} \tag{8}
\]

Applying this rotation to the unit vectors X and Y, and projecting the resultant vectors onto the measurement plane, gives the dipole intensities, where the squares of the dipole vectors have been taken to obtain the intensities. Knowing θ and the ratio R between these intensities allows β to be determined. Finally, the angles φ_X and φ_Y between the rotated dipole vectors and the observation plane are found by taking the dot product of the rotated dipole vectors and their projections, leading to
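The projection step can be made concrete with a short numerical sketch: it applies the rotation matrix of equation (8) to the two orthogonal unit dipoles, drops the component along the optical (z) axis to project onto the observation plane, and forms the intensity ratio. The angle values used here are placeholders, not fitted results from this work.

```python
import numpy as np

def rotation(theta, beta):
    """Axial rotation by beta about the NV axis followed by a polar rotation by theta,
    as in equation (8)."""
    return np.array([
        [np.cos(beta),                -np.sin(beta),                 0.0],
        [np.cos(theta) * np.sin(beta), np.cos(theta) * np.cos(beta), -np.sin(theta)],
        [np.sin(theta) * np.sin(beta), np.sin(theta) * np.cos(beta),  np.cos(theta)],
    ])

def projected_intensities(theta, beta):
    """Project the two orthogonal ZPL dipoles onto the observation (x-y) plane."""
    R = rotation(theta, beta)
    dX = R @ np.array([1.0, 0.0, 0.0])
    dY = R @ np.array([0.0, 1.0, 0.0])
    # intensity ~ squared magnitude of the in-plane projection (z component dropped)
    return np.sum(dX[:2] ** 2), np.sum(dY[:2] ** 2)

# Placeholder angles purely for illustration
I_X, I_Y = projected_intensities(theta=np.deg2rad(30), beta=np.deg2rad(20))
print(I_X / I_Y)   # intensity ratio R; with theta known, this fixes beta
```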
C) Comparison of excitation conditions for the two experimental geometries
For comparison of the emission intensities in the out-of-cavity and in-cavity geometries it is necessary to establish equivalent excitation conditions in the two cases. To do this we measured the dependence of the emission intensity I on excitation power P to record the emission saturation curve of the color center in each case. These data are shown in figure 8. We then fitted these curves to the saturation function I = I_sat P / (P_sat + P). The fitting parameters I_sat and P_sat are the fully saturated photon count rate and the characteristic saturation power for excitation, respectively. In the out-of-cavity geometry we find that I_sat = 154 kc/s for the whole NV centre (6.78 kc/s for the ZPL only) and P_sat = 1.02 mW, whilst the fit in the q = 4 cavity (data in figures 3 and 4) yields I_sat = 15.1 kc/s for the ZPL only and P_sat = 1.89 mW. The spectra shown in figure 4 use an excitation power of P_sat in each case.
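As an illustration, the saturation fit can be reproduced with a standard nonlinear least-squares routine; the data arrays below are hypothetical placeholders standing in for the measured count rates of figure 8, not the actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(P, I_sat, P_sat):
    """Saturation model used in the text: I = I_sat * P / (P_sat + P)."""
    return I_sat * P / (P_sat + P)

# Hypothetical excitation powers (mW) and count rates (kc/s), stand-ins for figure 8
P_data = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
I_data = np.array([13.0, 28.0, 47.0, 72.0, 100.0, 120.0])

(I_sat, P_sat), _ = curve_fit(saturation, P_data, I_data, p0=[150.0, 1.0])
print(f"I_sat = {I_sat:.1f} kc/s, P_sat = {P_sat:.2f} mW")
```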
D) Finite Difference Time Domain simulations
We used Lumerical Solutions FDTD software to perform numerical simulations of a dipole emitting into the cavity. Firstly, by reproducing the measured mode structure using these simulations we confirm the cavity geometry. The concave mirror is modelled as vertically stacked films of λ/4 vertical height, based on cross-sectional SEM measurements such as the one shown in figure 1(a). Refractive indices of the SiO2 and Ta2O5 layers are 1.52 and 2.10 respectively. The concave mirror is simulated as having 15 Bragg pairs rather than the 20 in the real device, as this allows faster calculations with negligible effect on the results. The dipolar source is positioned on the axis of symmetry, 20 nm from the planar mirror surface, and within a dielectric sphere of diameter 100 nm and refractive index 2.4 in contact with the planar mirror to simulate the nanodiamond. Separate calculations are performed with dipoles at angles of 18° and 39° to the planar mirror, representing peaks 3 and 2 respectively. We confirmed that the results of the calculation are insensitive to the position of the source within the nanodiamond and to the exact size of the nanodiamond. The finite-difference Yee cells have a minimum size of 5 nm to accommodate the nanodiamond and the contours of the curved mirror. The electromagnetic field was allowed to propagate and decay for 10 ps in 5 fs increments, corresponding to a simulation time of about 12 hours. Mode volumes were calculated both using the simple analytic formula for a Gaussian beam and numerical integration of the simulated FDTD field intensity. The analytic expression used is V_mode = λ z_R L / 4, where z_R = L√(β/L − 1) is the Rayleigh range, L is the optical length of the cavity and β is the radius of curvature of the concave mirror. The integration used is the standard expression V_mode = ∫ ε(r)|E(r)|² d³r / max[ε(r)|E(r)|²].
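A minimal helper reproducing the Gaussian-beam mode-volume estimate quoted above; the numerical values are placeholders, not the cavity parameters of this work.

```python
import numpy as np

def mode_volume_analytic(wavelength, L, beta):
    """Gaussian-beam estimate V = lambda * z_R * L / 4 for a plano-concave cavity,
    with Rayleigh range z_R = L * sqrt(beta / L - 1) = sqrt(L * (beta - L))."""
    z_R = np.sqrt(L * (beta - L))
    return wavelength * z_R * L / 4.0

# Placeholder values in micrometres: lambda = 0.64, optical length 2, radius of curvature 8
print(mode_volume_analytic(0.64, 2.0, 8.0), "um^3")
```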
"Physics",
"Materials Science"
] |
Quantum Bounds on Detector Efficiencies for Violating Bell Inequalities Using Semidefinite Programming
Loophole-free violations of Bell inequalities are crucial for fundamental tests of quantum nonlocality. They are also important for future applications in quantum information processing, such as device-independent quantum key distribution. Based on a detector model which includes detector inefficiencies and dark counts, we estimate the minimal requirements on detectors needed for performing loophole-free bipartite and tripartite Bell tests. Our numerical investigation is based on a hierarchy of semidefinite programs for characterizing possible quantum correlations. We find that for bipartite setups with two measurement choices and our detector model, the optimal inequality for a Bell test is equivalent to the Clauser–Horne inequality.
Introduction
Bell inequalities characterizing all possible classical correlations of local realistic theories, and their violations by quantum theory, have been an active field of research since Bell's original paper published in 1964 [1]. Apart from revealing characteristic quantum properties, which have recently also been demonstrated experimentally in loophole-free ways [2,3], violations of Bell inequalities are used in quantum physics as a measure for the secrecy of quantum key distribution schemes [4,5]. Such violations can only be achieved as long as losses and inefficiencies involved in an experimental setup under investigation are sufficiently small. It is a main intention of this paper to investigate the bounds on efficiencies required for observing characteristic quantum correlations by exploring Bell inequalities in the presence of imperfect detectors.
To find the minimal requirements on these efficiencies, we treat the experimental setup as a black box, only requiring some input and yielding an output for every single measurement. For such device-independent treatments, results have already been published for several possible scenarios with different numbers of involved parties, inputs and outputs, see e.g., [6][7][8]. In this paper, we introduce an extension of these investigations to an error model which includes dark counts of the imperfect detectors involved.
To demonstrate the possibility of a violation of a Bell inequality there are two problems which must be solved. First, an adequate inequality must be found. To this end, one could compute and subsequently test all facets of the corresponding Bell polytope. For larger systems this is, however, infeasible, as the Bell polytope is typically given in its vertex representation (see Section 2.1) and finding all facets is a computationally hard problem [9]. However, one can use linear programs [10] to test if given measurement correlations lie outside the Bell polytope, thus violating some Bell inequality. The second problem is finding such correlations, which must be compatible with quantum theory. Nevertheless, despite repeated efforts, conceptual restrictions as clear as those known for Bell polytopes are not yet known for quantum correlations; see e.g., [11,12]. There are, however, mathematical restrictions, which can be put into a hierarchy of optimization programs. The resulting correlations are conjectured to represent the possible quantum correlations exactly in the limit of infinite order of the hierarchy [13]. In the following we use a combination of these techniques to find the detector parameters which allow for a successful violation of Bell inequalities in bipartite and tripartite scenarios. In this way we will not only show that, in the absence of dark counts, our approach reproduces previous results [8], but we will also explore the detector efficiencies required for loophole-free violations of local realism in these scenarios.
Bell Polytopes, Realistic Theories and Quantum Correlations
We summarize basic aspects concerning the theoretical description of those statistical correlations, which are typically explored by Bell experiments, within the theoretical framework of (classical) local realistic theories. The constraints imposed on these correlations within the framework of local realistic theories can be described in terms of (generalized) Bell inequalities. Even beyond the scenarios originally considered by J. S. Bell [1], for each multipartite measurement setup the structure of these correlations is described by a corresponding polytope, the so-called Bell polytope. Characteristic features of these polytopes have been explored, for example, in Refs. [9,[14][15][16].
Let us consider a typical N-partite Bell-type experiment in which N parties perform a joint measurement on a physical system shared between them. For this purpose, each of these parties selects a physical variable (observable) from a finite set of possible physical variables (observables), say O. This selection can be described formally by an N-tuple (input) x := (x_1, · · · , x_N) with x_i ∈ O, i ∈ {1, · · · , N}. Thus, in an N-partite spin experiment, for example, each party measures the spin of a many-particle system consisting of at least N spin-1/2 particles, generally in a different direction. Depending on the choices of these physical variables each of these parties observes a measured value. All these measured values define an N-tuple of measured values (output) a := (a_1(x), · · · , a_N(x)). In the following we assume that these measured values form a finite discrete set A, i.e., a_i(x) ∈ A with i ∈ {1, · · · , N}.
If we describe such an N-partite Bell-type experiment within the general framework of a classical realistic theory without any further constraint, such as a locality requirement, we can distinguish between two types of theoretical descriptions, namely a deterministic one and a statistical one. In a deterministic realistic description, there exists a transfer function F (or state) which relates each value of the input variable (choice of physical variable) with a unique value of the output variable (result of the measurement), i.e., a = F(x). Within the framework of a statistical realistic theory the state of the physical system, i.e., F, is described by a random variable characterized by a normalized probability distribution P(F) ≥ 0 with ∑_F P(F) = 1. This probability distribution P(F) generates conditional probability distributions P(a|x) by the relation P(a|x) = ∑_F P(F) δ(a, F(x)), with δ(a, F(x)) = 1 for a = F(x) and zero otherwise. These conditional probabilities describe the probability that the choice of the physical variables x ∈ O^N yields a joint measurement result a ∈ A^N of all the N parties involved. They define a probability polytope with vertices F [10] characterizing a (classical) statistical realistic theory. If the parties involved in such a Bell-type correlation experiment are pairwise space-like separated, the input of any party cannot influence the output of any other party due to the finite speed of light in vacuum. This locality constraint is equivalent to the requirement that each party has its own transfer function, i.e., a_i = F_i(x_i), so that for this locality constraint the conditional probabilities characterizing a statistical theory are of the form P(a|x) = ∑_{F_1,…,F_N} P(F_1, …, F_N) ∏_{i=1}^{N} δ(a_i, F_i(x_i)). These conditional probabilities define a probability polytope with vertices (F_1, · · · , F_N) characterizing a (classical) statistical local realistic theory [10].
There are two different ways to describe the probability polytope of a (classical) statistical realistic theory mathematically, either by a vertex representation or by a half-space representation. For any given experimental setup, the vertex representation of the associated probability polytope is defined in terms of the relevant transfer functions [9,10,15]. The half-space representation of such a probability polytope is defined by inequalities, which in the case of a locality constraint are called generalized Bell inequalities. Whereas the vertex representation can be constructed in a systematic way, in general its conversion to the corresponding half-space representation constitutes a hard mathematical problem which can typically be solved only for very small numbers of parties, numbers of observables and measurement results. In view of these problems, the exploration of generalized Bell inequalities for multipartite scenarios is still an active field of research. In the following we use the term 'Bell inequality' for any inequality, which divides the probability space in such a way that the full probability polytope of a statistical local realistic theory, i.e., its Bell polytope, is inside the half-space defined by this inequality. Of particular interest will be those inequalities, which are also facets of the relevant Bell polytope. Clearly, a violation of such a Bell inequality by a quantum system shared between N parties proves that these statistical correlations cannot be described within the framework of a statistical local realistic theory. In particular, entangled quantum states may exhibit such characteristic quantum features. Apart from very simple physical systems it is still a largely open question which violations of Bell inequalities describing local realistic correlations are possible by quantum theory.
Bell Experiments with Imperfect Detection
In the following we explore correlations of local realistic theories in the presence of imperfections concerning the measurements involved. In particular, generalizing recent investigations we concentrate on imperfect detection of measurements in the additional presence of dark counts of detectors in bipartite and tripartite scenarios.
Putting the problem into perspective, let us concentrate on one of the simplest possible generalizations with imperfect detection, namely a bipartite scenario, i.e., N = 2, in which each party selects one of two inputs randomly, i.e., |O| = 2, and in which for each of the parties four different outcomes are possible, i.e., |A| = 4. Physically speaking, these four exclusive measurement results (outputs), namely A = {∅, 1, 2, DC}, are supposed to represent the no-detection event (∅), the possibly ideal measurement results 1 and 2, and the dark count event (DC). To realize such a scenario, it is necessary that each party has two detectors available which can detect the exclusive measurement results 1 and 2. In a typical Bell experiment measuring polarization states of photons, for example, this would require that each party uses two photon detectors which detect two orthogonal photon polarizations. If only one photon has been sent to a party and none of the two photon detectors of this party clicks, the event ∅, i.e., no photon present, is registered. If both detectors click, a dark count (DC) is registered.
We model each of the two imperfect detectors of each party by two (statistically) independent parameters, namely η and δ. These parameters are assumed to be identical for all detectors involved. The parameter η is the probability that a particle is detected provided a particle has been sent to the party. The parameter δ is the probability that a dark count is produced. Correspondingly, for each party the conditional probabilities P_{η,δ}(a|a^{(id)}) transforming ideal outputs with potential measurement results a^{(id)} ∈ A_ideal = {1, 2} into the actually observed imperfect outputs a ∈ A are built from the four exclusive events A, B, C, D defined below. Here, P_{η,δ}(a|a^{(id)}) is the probability to measure the result a in the system with imperfect detectors, if the measurement would have resulted in the value a^{(id)} in the ideal system with perfect detection. The four exclusive events A, B, C, D have the following physical meanings: • A: no particle is detected, and no dark count takes place in either detector, • B: no particle is detected by the detector which should have registered it in the ideal case and a dark count takes place in the other detector, • C: either the particle is detected by one of the detectors and a dark count takes place in the other detector, or the particle is not detected and dark counts take place in both detectors, • D: either the particle is detected and no dark count takes place, or the particle is not detected and a dark count takes place in the detector in which the particle should have been registered.
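The verbal description of the events A-D translates into a small stochastic response matrix. The sketch below encodes one consistent reading of that description (it is not taken verbatim from the paper) and checks that each column is normalised.

```python
import numpy as np

def detector_response(eta, delta):
    """Conditional probabilities P(a | a_id) for observed outputs (no-click, 1, 2, DC)
    given the ideal outcome a_id in {1, 2}, with detection efficiency eta and
    dark-count probability delta per detector (one reading of events A-D)."""
    A = (1 - eta) * (1 - delta) ** 2                          # no click at all
    B = (1 - eta) * delta * (1 - delta)                       # only the 'wrong' detector fires
    C = eta * delta + (1 - eta) * delta ** 2                  # both detectors fire -> DC
    D = eta * (1 - delta) + (1 - eta) * delta * (1 - delta)   # correct single click
    # rows: observed output (no-click, 1, 2, DC); columns: ideal outcome (1, 2)
    M = np.array([[A, A],
                  [D, B],
                  [B, D],
                  [C, C]])
    assert np.allclose(M.sum(axis=0), 1.0)                    # columns are distributions
    return M

print(detector_response(eta=0.9, delta=0.01))
```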
Within this detection model the observed bipartite conditional probabilities P(a, b|x, y) are related to the corresponding ideal (unobserved) bipartite conditional probabilities P(a^{(id)}, b^{(id)}|x, y) by P(a, b|x, y) = ∑_{a^{(id)}, b^{(id)}} P_{η,δ}(a|a^{(id)}) P_{η,δ}(b|b^{(id)}) P(a^{(id)}, b^{(id)}|x, y), with a, b ∈ A and a^{(id)}, b^{(id)} ∈ A^{(id)}. This model for imperfect detection can be viewed as the subsequent application of a perfectly working inner box, which creates ideal correlations without errors, and of an outer box, so that losses and dark counts are introduced for each party separately. Thus, a party (observer) has only access to the inputs and outputs of the outer box, while classical or quantum physics restricts the possible correlations created by the inner box.
Required Detector Parameters for Loophole-Free Bipartite Bell Tests
To perform a Bell test successfully, a Bell inequality must be violated using some quantum setup. Thus, we need to find a probability distribution P, which (a) lies outside the Bell polytope, and (b) can be achieved by quantum correlations. Here, P denotes the vector of all conditional probabilities P(a, b|x, y) characterizing the experimental setup under investigation.
To check condition (a), we use a linear program as in [10]. Given all vertices P_V^{(i)} of the Bell polytope, obtained by evaluating all the possible combinations of transfer functions for the parties, the probabilities P_B obtained by a convex combination of these vertices, i.e., P_B = ∑_i w_i P_V^{(i)} with w_i ≥ 0 and ∑_i w_i = 1, are also part of the Bell polytope. In the bipartite case considered in this section and for ideal conditional probabilities P(a^{(id)}, b^{(id)}|x, y), the conditional probabilities P(a, b|x, y) observed with imperfect detectors are computed using the detector model. Using a convex sum of the vertices P_V^{(i)}, the linear program then finds the point in the Bell polytope which is closest to P. It returns the distance between that point and P with respect to the 1-norm. For probabilities P(a^{(id)}, b^{(id)}|x, y) which lie outside the Bell polytope for ideal detectors, i.e., η = 1 and δ = 0, we can now find critical detector parameters. To do this, δ is fixed and the detection probability η is decreased (or vice versa) until the distance of P to the Bell polytope reaches 0. Once this happens, no violation of any Bell inequality is possible with these detector parameters and the given P(a^{(id)}, b^{(id)}|x, y). However, there might be some other P(a^{(id)}, b^{(id)}|x, y) that, for the same detector parameters, results in a probability distribution P which still is well outside the Bell polytope. This brings us back to condition (b), namely which P(a^{(id)}, b^{(id)}|x, y) are we allowed to use when trying to find the P^{(id)} that stays outside of the Bell polytope for the worst possible detector parameters? To answer this, we use a hierarchy of semidefinite programs (SDP) [13] as described in Materials and Methods. In short, the program maximizes a linear objective function on the probabilities while obeying mathematical restrictions arising from quantum theoretical principles. In our case, the objective function is the left-hand side of a Bell inequality of the form ∑_{a,b,x,y} c^{xy}_{ab} P(a, b|x, y) ≤ C. In the following, we will choose C = 0, which is always possible using ∑_{a,b} P(a, b|x, y) = 1. For each order of the hierarchy, the SDP returns an upper bound for the quantum value of this inequality, as probabilities might be feasible which lie outside of the possible quantum correlations. The results of the SDP are conjectured to be identical with the possible quantum correlations only in the limit of infinite order of the hierarchy or in special cases, e.g., a finite-dimensional quantum system and a correspondingly high order of the hierarchy, see [13].
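A compact sketch of this membership test for the simplest scenario with two inputs and two outputs per party (the scenario treated in the paper has four outputs per party once the detector model is applied, but the structure of the linear program is the same). The implementation details are illustrative, not the authors' code.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def local_vertices(n_in=2, n_out=2):
    """All deterministic local strategies (vertices of the Bell polytope)."""
    vertices = []
    for fa in itertools.product(range(n_out), repeat=n_in):       # Alice's transfer function
        for fb in itertools.product(range(n_out), repeat=n_in):   # Bob's transfer function
            v = np.zeros((n_out, n_out, n_in, n_in))
            for x in range(n_in):
                for y in range(n_in):
                    v[fa[x], fb[y], x, y] = 1.0
            vertices.append(v.ravel())
    return np.array(vertices)

def distance_to_bell_polytope(p):
    """1-norm distance of the probability vector p to the local (Bell) polytope."""
    V = local_vertices()
    n_vert, dim = V.shape
    # variables: weights w (n_vert) and slacks t (dim); minimise sum(t) = |p - V^T w|_1
    c = np.concatenate([np.zeros(n_vert), np.ones(dim)])
    A_ub = np.block([[ V.T, -np.eye(dim)],
                     [-V.T, -np.eye(dim)]])
    b_ub = np.concatenate([p, -p])
    A_eq = np.concatenate([np.ones(n_vert), np.zeros(dim)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_vert + dim), method="highs")
    return res.fun

# Example: the (non-quantum) PR box lies well outside the local polytope
pr = np.zeros((2, 2, 2, 2))
for a, b, x, y in itertools.product(range(2), repeat=4):
    if (a ^ b) == (x & y):
        pr[a, b, x, y] = 0.5
print(distance_to_bell_polytope(pr.ravel()))   # > 0, hence nonlocal
```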
With the SDP we can find the amount of violation of a Bell inequality by quantum theory for given detector parameters, and we also obtain the responsible conditional probabilities P_SDP = P(a^{(id)}, b^{(id)}|x, y). As the chosen Bell inequality may not be optimal, we use these probabilities to find an optimized inequality. To this end, we compute the Bell inequality which is maximally violated by P(a, b|x, y), obtained from P_SDP and from fixed detector parameters, by solving the linear program max_c ∑_{a,b,x,y} c^{xy}_{ab} P(a, b|x, y), subject to a bound on the coefficients c^{xy}_{ab} and to ∑_{a,b,x,y} c^{xy}_{ab} P_V^{(i)}(a, b|x, y) ≤ 0 for every vertex P_V^{(i)}. The first constraint limits the values of the weights within the Bell inequality, as the optimal value would go to infinity otherwise. The second constraint makes sure that only inequalities are considered which leave every vertex, and thus the whole Bell polytope, in the feasible half-space. In this sense, it is still a Bell inequality, although this algorithm typically does not return a facet of the polytope. To get a facet with this method, P must be chosen very close to the surface of the polytope, by increasing the detector inefficiency or the dark count probability.
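The second linear program can be sketched along the same lines, reusing local_vertices and the PR-box example from the previous snippet; the box constraint -1 <= c <= 1 is one simple way of bounding the coefficients, since the text only states that the weights are limited.

```python
def best_inequality(p, V):
    """Coefficients c maximising c.p subject to c.v <= 0 for every local vertex v
    and a box constraint on c (to keep the optimum finite)."""
    n_vert, dim = V.shape
    res = linprog(-p,                           # linprog minimises, so negate the objective
                  A_ub=V, b_ub=np.zeros(n_vert),
                  bounds=[(-1.0, 1.0)] * dim, method="highs")
    c = res.x
    return c, float(c @ p)                      # a value c.p > 0 signals a violation

c, violation = best_inequality(pr.ravel(), local_vertices())
print(violation)
```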
The inequalities obtained in this way turn out to be equivalent to the Clauser-Horne (CH) inequality. They are equivalent in the sense that they only differ by a relabeling of the parties, of the inputs or outputs, or by instances of ∑_a P(a|x) = 1. Here, the no-signaling condition is used, making it possible to write P(b|y_1) = ∑_a P(a, b|x_1, y_1) = ∑_a P(a, b|x_2, y_1), as the result of one party cannot depend on the input chosen by the other party. The inequality is also equivalent to the well-known Clauser-Horne-Shimony-Holt (CHSH) inequality [18] if the additional outputs created by detector errors are subsumed in one of the original outputs. This implies a lifting of the original inequality to a scenario with more possible results per measurement (here from 2 outputs to 4), without additional changes to the inequality [19].
The hierarchy of semidefinite programs yields improving lower bounds for the necessary detector efficiencies by giving better and better upper estimates for the possible quantum correlations with each order of the hierarchy. However, this method does not tell us if and how these quantum correlations can be created experimentally. Nevertheless, using the CH inequality, we find that the step from 2nd to 3rd order SDP does not change the results for the necessary detector parameters. This implies that the obtained probabilities, and thus detector parameters, are identical to the ones possible by quantum physics. These lower bounds on the detector parameters necessary for a successful Bell test are shown as the red crosses in Figure 1.
To find an upper bound on the detector parameters, we take the quantum states and measurements proposed by Eberhard [7] for bipartite scenarios. These define several probability distributions P_E(a^{(id)}, b^{(id)}|x, y). As previously described, we vary η and δ to obtain P_E(a, b|x, y) and use our linear program to check at which point these correlations are no longer outside the Bell polytope. As there are specific instructions on how to create the needed probabilities experimentally, they can definitely be reached by quantum systems, and the resulting upper bounds on the detector efficiencies are shown as the blue circles in Figure 1.
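To make the upper-bound procedure concrete, the fragment below builds ideal quantum probabilities for a partially entangled two-qubit state with projective measurements in the Z-X plane. The state and angles are placeholders rather than Eberhard's optimised settings; the detector model and the polytope test from the previous snippets would then be applied to the resulting distribution while scanning eta and delta.

```python
import numpy as np

def projectors(angle):
    """Projectors onto the eigenstates of cos(angle) Z + sin(angle) X."""
    v_plus = np.array([np.cos(angle / 2), np.sin(angle / 2)])
    v_minus = np.array([-np.sin(angle / 2), np.cos(angle / 2)])
    return [np.outer(v, v) for v in (v_plus, v_minus)]

def ideal_probabilities(theta, alice_angles, bob_angles):
    """P(a,b|x,y) for |psi> = cos(theta)|00> + sin(theta)|11> and the given settings."""
    psi = np.cos(theta) * np.kron([1.0, 0.0], [1.0, 0.0]) \
        + np.sin(theta) * np.kron([0.0, 1.0], [0.0, 1.0])
    p = np.zeros((2, 2, 2, 2))
    for x, ax in enumerate(alice_angles):
        for y, by in enumerate(bob_angles):
            for a, Pa in enumerate(projectors(ax)):
                for b, Pb in enumerate(projectors(by)):
                    p[a, b, x, y] = psi @ np.kron(Pa, Pb) @ psi
    return p

# Placeholder settings: a weakly entangled state and four measurement angles (radians)
p_id = ideal_probabilities(theta=0.2, alice_angles=[0.0, 0.8], bob_angles=[0.4, -0.4])
print(p_id.sum(axis=(0, 1)))   # each setting pair sums to 1
```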
As the upper and lower bounds are identical up to numerical precision, they represent the critical detector parameters for a successful Bell test, i.e., a violation of a Bell inequality, in this bipartite scenario. In the area below the curve, no violation of any Bell inequality is possible. If the detector efficiency η is higher or the dark count probability δ is lower than specified by the shown critical values, quantum systems and measurements exist, which result in a violation of a Bell inequality. As the lower bounds obtained via SDP and the CH inequality (red crosses) are identical to the values accessible with the states and measurements given by Eberhard [7] (blue circles) up to numerical precision of the method, this bound is optimal.
Extension to Tripartite Scenarios
To find the critical detector parameters for a Bell experiment with three parties, we create Bell inequalities for the scenario with imperfect detectors and four outputs by lifting known inequalities for the tripartite case with dichotomic measurements. These inequalities were taken from [20], which gives a full list of the different classes of Bell inequalities for this scenario. Here, lifting means that the additional outputs are summed into one of the previously existing outputs, as in the bipartite case with the CH inequality, thus making minimal changes to the Bell inequalities in order to be able to use them in a higher-dimensional scenario. This step is motivated by the previous bipartite result, where the similarly lifted CHSH inequality was optimal. We have also searched for better inequalities by iterative combination of a linear program and SDP as described earlier; however, this did not improve the results for the necessary detector efficiencies. In this case of three parties, there is no further change in the retrieved critical parameters beyond 3rd order SDP, while in the scenario with 2 parties this point was reached with 2nd order SDP already. In accordance with [8], lower detector efficiencies of η = 0.6 with δ = 0, compared to 2/3 for 2 parties, are sufficient to measure the violation of a Bell inequality. In contrast to the states and measurements given by Eberhard, the states used in Larsson's proof for minimal detector efficiencies are only optimal for δ = 0 or η = 1. These states are of the general form α|0⟩ + β|W⟩, with |W⟩ being the multipartite W state [21]. For δ > 0 or η < 1, they require up to 1% better detector parameters, compared to the lower bound found with SDP, to achieve a violation of a Bell inequality, even if the best possible Larsson state is chosen. The required detector efficiencies for a given dark count rate, obtained via SDP, are given in Figure 2.
Discussion
We have been able to reproduce and extend previous results on required detector parameters to observe violations of Bell inequalities by using semidefinite programming. While the Bell polytope can easily be created for any given scenario, finding the corresponding Bell inequalities is a computationally hard problem. The SDP however needs some fixed inequality as a starting point for finding a probability distribution, which maximizes its value while fulfilling all constraints of the chosen order of the hierarchy. Considering imperfect detectors and dichotomic variables as in our model, where the probability of retrieving some output only depends on the original outcomes of a perfect system and not on the chosen measurement or on the outputs of other parties, we found that it is sufficient to check lifted Bell inequalities for violation to find optimal Bell inequalities. As these results are based on our numerical analysis, a rigorous proof or a counterexample, i.e., a different inequality, which allows for worse detection efficiencies in the same general setup, has yet to be found. The SDP did not show any improvement in the results when exceeding 2nd or 3rd order for 2 and 3 parties, respectively. While we do not explicitly restrict the dimension of the underlying quantum systems, we fix the number of measurements and possible outcomes of each measurement. If all possible quantum correlations can be obtained by using low-dimensional quantum systems, e.g., qubits distributed among some parties, a correspondingly low order SDP is sufficient to obtain the possible quantum correlations, as proven in [13].
Our results give lower bounds on the necessary detector efficiencies for violating any Bell inequality. These bounds are general physical features, which form a basis for all two and three party cryptographic protocols that are based on entanglement. The violation can be used as a starting point to create protocols for secure key generation, see e.g., [22]. However, the computation of secure key rates would require additional assumptions about the used protocol as well as a thorough analysis of the mutual information of the involved parties and is not within the scope of this paper.
Semidefinite Programming (SDP)
To restrict our chosen ideal probabilities P to probabilities which are consistent with quantum theory, we use a hierarchy of semidefinite programs. This method has been introduced by Navascués et al. and is described in detail in [13]. Each order of the hierarchy revolves around a positive semidefinite matrix Γ ⪰ 0, which is linked to the relevant conditional probabilities P(a, b|x, y). To construct these links, first consider projective measurements with projection operators E_{a,x} for all outputs of a setup. As the dimension of the quantum system is not fixed, it is always possible to use only projection operators in the model. For any pure state |ψ⟩, for example, we have P(a, b|x, y) = ⟨ψ|E_{a,x} E_{b,y}|ψ⟩, together with the usual relations for the projection operators involved (hermiticity, orthogonality, and completeness ∑_a E_{a,x} = 1). For order n of the hierarchy, consider S_i to be all sequences of up to n of these projection operators, e.g., E_{a,x} E_{a',x'} E_{b,y} for n = 3. For some fixed |ψ⟩, the matrix Γ is defined by its entries Γ_{i,j} = ⟨ψ|S_i† S_j|ψ⟩. Using the relations for the projection operators E_{a,x}, we can now determine all K algebraic dependencies of the form tr((F^{(k)})^T Γ) = g^{(k)}(P), k ∈ {1, · · · , K}, where each g^{(k)}(P) is a linear combination of the probabilities P(a, b|x, y).
Probabilities P which cannot be obtained by a quantum system are not able to fulfill all these conditions while keeping Γ positive semidefinite, in the limit n → ∞. Thus, the obtained F^{(k)} and g^{(k)} are used to restrict P in our optimization problem. To find the maximal violation of a Bell inequality ∑_{a,b,x,y} c^{xy}_{ab} P(a, b|x, y) ≤ C we use the SDP in combination with the detector model. For each step of the hierarchy, the optimization problem

max_{P^{(id)}} ∑_{a,b,x,y} c^{xy}_{ab} P(a, b|x, y)
s.t. P(a, b|x, y) = ∑_{a^{(id)}, b^{(id)}} P_{η,δ}(a|a^{(id)}) P_{η,δ}(b|b^{(id)}) P(a^{(id)}, b^{(id)}|x, y),
     tr((F^{(k)})^T Γ) = g^{(k)}(P^{(id)}),
     Γ ⪰ 0,

is solved using SeDuMi [23], a MATLAB toolbox for semidefinite optimization. The last two conditions make sure that for P^{(id)} only values are considered that are consistent with the chosen order of the hierarchy of the SDP, while the program maximizes the value of the Bell inequality after applying the detector model. This procedure is repeated for higher orders of the SDP until convergence, i.e., until the results are identical up to numerical precision with the results obtained with the previous order of the hierarchy. By construction, this method considers all possible correlations compatible with quantum theory without any further restrictions, such as special states or bipartite entanglement.
Conclusions
We found that a hierarchy of semidefinite programs can be used to retrieve exact minimal requirements on detector efficiencies for loophole-free Bell tests. Although such a hierarchy of semidefinite programs is only conjectured to approach the set of quantum correlations in the limit of the order approaching infinity, in our cases only two and three steps were necessary to obtain convergence for the set of quantum correlations. This implies that these systems can always be described by a corresponding finite-dimensional quantum system. As convergence can be reached in a few steps, the SDP is a useful tool to find the limits of the quantum correlations for a given scenario. In the bipartite case, we could reproduce the necessary detector efficiencies given by Eberhard's results [7]. As this is an upper bound, while the hierarchy of semidefinite programs produces a lower bound, we were able to obtain the exact bounds on detector efficiency and dark count probability in this scenario. For the tripartite scenario with two input settings per party, we found that slightly less ideal parameters are sufficient to violate a Bell inequality. In the special case of a vanishing dark count probability, our method reproduces the lower bound which has been proven to be minimal in [8]. These bounds on detector parameters can be used in future tripartite Bell tests, as they need to be fulfilled in an experiment free of the detection loophole. For example, such an experiment might be a hub distributing quantum systems to multiple parties for quantum key distribution.
"Physics"
] |
New ATLAS results in inclusive searches for supersymmetric squarks and gluinos
Abstract. Despite the absence of experimental evidence, weak-scale supersymmetry remains one of the best motivated and most studied Standard Model extensions. These proceedings summarise recent results from the ATLAS experiment at the LHC on inclusive searches for supersymmetric squarks and gluinos in events containing jets, missing transverse momentum, and possibly isolated leptons in R-parity conserving scenarios.
SUSY in strong production
If present at TeV scale, squarks and gluinos may be copiously produced at the LHC.
Gluinos and squarks decay either directly or via a cascade into: jets, coming from gluino and squark decays; the LSP (the lightest supersymmetric particle), which escapes the detector and results in E_T^miss (R-parity conservation is assumed in this talk); and possibly lepton(s), coming from chargino, neutralino or slepton decays.

0-lepton + multi-jets: signal regions (arXiv:1308.1841, submitted to JHEP). This is a dedicated analysis looking for longer SUSY decay chains, with up to 10 jets in the final state. The use of multi-jet triggers without E_T^miss requirements allows low cuts on E_T^miss. Events with isolated electrons or muons are vetoed in order to suppress the W+jets and tt backgrounds. Three sets of signal regions are defined: 8, 9 or at least 10 jets with p_T > 50 GeV and zero, one or at least two b-tagged jets; 7 or at least 8 jets with p_T > 80 GeV and zero, one or at least two b-tagged jets; and signal regions with "mega-jets", requiring at least 8, 9 or 10 jets with p_T > 50 GeV and M_J > 340 or 420 GeV. All signal regions also impose E_T^miss/√H_T > 4 GeV^{1/2}, where H_T = Σ p_T^jet is computed using jets with p_T > 40 GeV and |η| < 2.8.
Mega-jets:
Anti-k_T jets with a cone radius of 0.4 are re-clustered into "mega"-jets: anti-k_T jets with a cone radius of 1.0. The discriminating variable M_J = Σ m_jet^{R=1.0} is constructed using mega-jets with p_T > 100 GeV and |η| < 1.5, and separates SUSY signal from background. The backgrounds of the 0-lepton + multi-jets analysis are multi-jet production (strong production) and fully hadronic decays of tt, W and Z bosons.

At least one isolated lepton + 2-6 jets (ATLAS-CONF-2013-062): the large QCD multi-jet background is reduced by requiring at least one lepton (electron or muon). A powerful discriminant between background and signal is employed. The analysis targets decays of gluinos or squarks via one, two or more steps into the LSP, and also includes signal regions requiring at least one soft lepton (6 (µ) / 10 (e) < p_T < 25 GeV). [The table of jet p_T thresholds for the individual signal regions and the corresponding distributions are not reproduced here.]

The analysis also exploits the longitudinal information of an event. The idea is the following: both sparticles are pair-produced, and the initial heavy sparticles are assumed here to have the same masses or to be at the same mass scale; both decay chains are therefore symmetric when going into the frame where the initial heavy sparticle is at rest, so all visible particles of one decay chain can be grouped into a mega-jet, and both mega-jets should have the same energy. Two variables benefit from this configuration. The characteristic mass of the event is M_R = √[(j_{1,E} + j_{2,E})² − (j_{1,L} + j_{2,L})²] (E: energy, L: longitudinal component, j_1 and j_2: four-vectors of the two mega-jets). The transverse information of the event is encoded in the razor ratio R: background events tend to have lower R, while SUSY-like signal events tend to be uniformly distributed between 0 and 1 in R. Good agreement is observed between the background estimation and the data. The tightest limit on the visible cross section is obtained in the 1-lepton + 6 jets SRs: 0.15 fb.

Production of same-sign lepton pairs (the focus here is on ee, eµ and µµ) is rare in the SM, while there are several production possibilities in SUSY decay chains, e.g. g̃ → t t̄ χ̃_1^0. Due to the rareness of same-sign events in SM processes the background expectation is low. The backgrounds are: prompt same-sign lepton pairs with two real same-sign leptons (tt + vector boson, dibosons), estimated using the MC prediction and validated in validation regions; the fake-lepton background, where at least one of the selected leptons is misidentified (mostly semi-leptonic tt events), estimated by a matrix method; and charge mis-identification, i.e., emission of hard bremsstrahlung followed by an asymmetric conversion (e± → e±γ → e±e±e∓), relevant only for events with electrons (mostly di-leptonic tt events), estimated by determining the ratio between same-sign and opposite-sign electron pairs in Drell-Yan events and applying this ratio to regions similar to the SRs but containing opposite-sign events.
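The razor variables sketched above can be written compactly. The definitions below follow the standard razor construction (the characteristic mass M_R from the mega-jet energies and longitudinal momenta, and the ratio R = M_T^R / M_R built from the missing transverse momentum), which the description in the text appears to use; the four-vectors are placeholder values, not event data.

```python
import numpy as np

def razor_variables(j1, j2, met):
    """Razor variables from two mega-jet four-vectors (E, px, py, pz) and the missing
    transverse momentum vector (mex, mey), following the standard definitions."""
    E1, p1 = j1[0], np.array(j1[1:])
    E2, p2 = j2[0], np.array(j2[1:])
    # longitudinal "characteristic mass" of the event
    M_R = np.sqrt((E1 + E2) ** 2 - (p1[2] + p2[2]) ** 2)
    # transverse analogue built from the missing transverse momentum
    met = np.array(met)
    pt1, pt2 = p1[:2], p2[:2]
    M_T_R = np.sqrt((np.linalg.norm(met) * (np.linalg.norm(pt1) + np.linalg.norm(pt2))
                     - met @ (pt1 + pt2)) / 2.0)
    return M_R, M_T_R / M_R          # (M_R, R)

# Placeholder mega-jets and MET in GeV
M_R, R = razor_variables((300, 120, 80, 230), (280, -100, -60, -200), (-20, -15))
print(M_R, R)
```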
Interpretation summary
The various analyses, including the 2-lepton Razor selection, are interpreted in multiple models.
Limits are placed on the gluino mass (up to ∼ 1.3 TeV, depending on model) and on the squark masses (up to ∼ 700 GeV, depending on model).
Model-independent upper limits on the visible cross section have been derived and can be as low as 0.15 fb (depending on the analysis).
All results can be found on the ATLAS public results webpage, which summarises all public ATLAS SUSY results.
[Summary table of mass limits (columns: Model, Mass limit, Reference; sections including Inclusive Searches and 3rd-generation gluino-mediated production) not reproduced here.]
| 1,194.2 | 2014-01-01T00:00:00.000 | ["Physics"] |
Transmembrane Chloride Intracellular Channel 1 (tmCLIC1) as a Potential Biomarker for Personalized Medicine
Identification of potential pathological biomarkers has proved to be essential for understanding complex and fatal diseases, such as cancer and neurodegenerative diseases. Ion channels are involved in the maintenance of cellular homeostasis. Moreover, loss of function and aberrant expression of ion channels and transporters have been linked to various cancers and to neurodegeneration. Chloride Intracellular Channel 1 (CLIC1) is a metamorphic protein belonging to a partially unexplored protein superfamily, the CLICs. In homeostatic conditions, the CLIC1 protein is expressed in cells as a cytosolic monomer. In pathological states, CLIC1 is specifically expressed as a transmembrane chloride channel. In the following review, we trace the involvement of CLIC1 protein functions in physiological and in pathological conditions and assess its functionally active isoform as a potential target for future therapeutic strategies.
CLIC Proteins, a Focus on CLIC1
Chloride channels exert a variety of functions at every stage of life. They are involved in ion homeostasis, fluid transport, regulation of cell volume, cytoskeletal rearrangement, and cellular motility. An example is the interplay between swelling-activated K+ and Cl- channels during volume regulation following cellular hypertonicity. In this case, both channels cooperate, allowing a net efflux of ions. Chloride Intracellular Channel proteins (CLIC) are a class of proteins existing in both soluble and integral membrane forms. Littler et al. showed CLIC proteins to be highly conserved in chordates, with six vertebrate paralogues [1]. CLIC proteins diverge from the canonical ion channel structure. Using phylogenetic and structural techniques, they found that CLIC proteins exert enzymatic functions. CLIC proteins show a roughly 240 amino acid sequence belonging to the glutathione S-transferase (GST) fold superfamily. This suggests a well-conserved intracellular activity playing a role in redox balance. In particular, the enzymatic active site was shown to be Cys24, which, in contrast to canonical GSTs, exhibits a reactive thiol group [2]. Like a variety of metamorphic proteins, CLIC proteins undergo important changes in structure, switching from a globular hydrophilic form to a transmembrane hydrophobic structure. It was found that the membrane insertion is elicited by oxidizing conditions and pH changes [2][3][4][5]. Oxidation-dependent insertion is likely to be mediated by the active residue Cys24, as the C24A mutation alters the redox sensitivity of the channel and, for CLIC1, its electrophysiological characteristics. CLIC channel activity at physiological pH has been shown to be minimal, while the amount of membrane-expressed protein rises with a change in pH [4]. To date, little is known about the function of CLIC proteins in native tissues. An example is the enrichment of CLIC4 and CLIC5 in the human placenta, in particular in the apical section of microvilli in trophoblasts. Jentsch et al. propose this phenomenon to be secretion dependent [6]. Given that secretory vesicles are known to establish an acidic pH for correct assembly, a chloride conductance may be needed to balance the proton transport coupled to vesicle acidification. At the same time, cytoplasmic acidification may be a switch for their membrane insertion. Significant expression of mRNAs encoding CLIC1, 2, 4, and 5 was found in human hepatocellular carcinoma and metastatic colorectal cancer in the liver. CLIC2, in particular, was predominantly expressed in non-cancer tissues surrounding cancer masses, where it may be involved in the formation or maintenance of tight junctions, which allow the intravasation of cancer cells to form metastases [7]. Moreover, the protein was found to be a promising biomarker for the efficacy of treatment of advanced-stage breast cancer. It was found to be co-expressed with PD-1 and PD-L1, and its increased expression was associated with a favorable prognosis and enrichment of multiple tumor-infiltrating lymphocyte types, particularly CD8+ T cells [8]. CLIC2 mutations were also reported to be associated with intellectual disability. Large-scale next-generation resequencing of X chromosome genes identified a missense mutation in the CLIC2 gene on Xq28 in a male with X-linked intellectual disability; the mutation was not found in healthy individuals.
In particular, the point mutation p.H101Q seems to be fundamental for membrane insertion of the CLIC2 protein, suggesting that p.H101Q may be a disease-causing mutation, the first correlated with the CLIC superfamily [9]. Like CLIC1 and CLIC2, CLIC3 was also found to be overexpressed in bladder cancer [10], pancreatic cancer [11], and metastatic breast cancer [12]. Its role in carcinogenesis was demonstrated to be correlated with the activity of CLIC1, as a glutathione-dependent oxidoreductase activity that drives angiogenesis and, together with transglutaminase-2, increases the invasiveness of cancer cells in ovarian cancer [13]. The CLIC4 protein has been associated with several normal cellular functions, such as the recruitment of the NLRP3 complex during the formation of the inflammasome [14]; together with CLIC1 and ezrin, it bridges the plasma membrane and actin cytoskeleton at the polar cortex and cleavage furrow to promote cortical stability and successful completion of cytokinesis in mammalian cells [15]. CLIC proteins were also found to be involved in endothelium architecture. Tavasoli and colleagues have demonstrated that both CLIC1 and CLIC5A activate ezrin, radixin, and moesin proteins for the formation of the lumen of vessels. As a result, dual CLIC4/CLIC5-deficient mice developed spontaneous proteinuria, glomerular cell proliferation, and matrix deposition [16]. Among CLIC proteins, although CLIC1 is considered a chloride channel, its RNA transcripts lack the specific sequences for membrane insertion. Thus, it does not follow the secretory pathways, through the endoplasmic reticulum and Golgi apparatus, typical of canonical ion channels [17]. For this reason, to demonstrate that the CLIC1 protein is a channel, rather than a modulatory element, recombinant protein inserted in artificial lipid bilayers has been used. These experiments evidenced a conductance of 30 pS in symmetric (internal-external) 140 mM KCl, with a variety of substates, suggesting that the integral membrane form of CLIC1 comprises four single monomers [2,3,5,17,18]. Interestingly, each monomer is able to allow an anionic flux. This feature is supported by experiments on lipid bilayers performed by Tulk et al., showing that at low protein concentration there is a linear relationship between protein concentration and channel activity, while saturation is reached at higher concentrations [4]. These experiments show that the assembly of single CLIC1 monomers occurs at the membrane level and not prior to insertion. A similar behavior was observed by Warton et al. using the tip-dip technique. In particular, the authors show that an initial small conductance with slow kinetics is replaced by a high conductance with fast kinetics, four times the initial conductance recorded, matching the WT CLIC1 channel [5]. The authors concluded that the high-conductance fast kinetics represent the final step of membrane insertion of single monomers and their subsequent aggregation and cooperation. Using the patch clamp technique on CHO-K1 cells overexpressing CLIC1, Tonini et al. were able to measure CLIC1 conductance in both the cellular and nuclear membranes. They showed an evident sensitivity and selectivity to chloride concentrations. Furthermore, taking advantage of inside-out and outside-out patch clamp configurations, they concluded that the N-terminal domain projects outside of the cell, while the C-terminus is directed inwardly [19].
More recently, taking advantage of tethered bilayer lipid membranes combined with impedance spectroscopy, it has been shown not only that the redox environment regulates CLIC1 membrane insertion, but also that cholesterol represents an initial membrane binding site, which would favor subsequent transmembrane protein rearrangements or oligomerization [20].
CLIC1 in Non-Pathological States
In 2009, Qiu et al. reported that, although the deletion of the Clic1 gene in mouse did not cause embryonic lethality, mice developed a mild bleeding disorder with a higher platelet number and longer bleeding times compared to the control group [21]. The authors propose this mechanism to be related to the platelet P2Y12 receptor. Although the mechanism of action relating CLIC1 to P2Y12R is still unknown, the authors suggest that redox balance could be involved in the pathway. This conclusion is supported by the observation that P2Y12R requires free thiol groups (Cys17 and Cys270) to elicit its function. In this picture, CLIC1 would participate in the control of the redox environment, which, in turn, may induce platelet activation [21]. More recently, relatively abundant levels of CLIC proteins have been found in primary cultures of human bronchial epithelial cells. In particular, CLIC1 has been shown to be able to modulate cAMP-induced chloride currents [22]. The authors show that PPQ-102, one of the well-known CFTR antagonists on the market, abolished the currents elicited by isoproterenol or forskolin. Although this effect is usually attributed to the activation of CFTR, RNA-Seq datasets show the absence of this transcript in the analyzed cells. As a matter of fact, knockdown of CLIC1 significantly inhibits cAMP-induced chloride currents, suggesting a direct contribution of CLIC1 to this chloride conductance. The authors concluded that manipulating CLIC1 properties may enhance Cl- conductance, thus proposing the potentiation of CLIC1 functional activity as a possible therapeutic strategy in conditions of impaired chloride homeostasis such as cystic fibrosis [22].
Activation of Transitory Allostasis through CLIC1 Function
In tissues, CLIC1 is found mainly as a cytoplasmic protein able to shuttle to the plasma membrane following a remodeling elicited by perturbation of cellular homeostasis. The two major players are oxidative stress and pH alterations. Overproduction of reactive oxygen species (ROS) modifies the structure of the protein through the formation of a disulphide bridge between Cys24 and Cys59, considered the essential oxidoreductase residues, which promotes membrane docking [3,20,23]. Transient oxidative stress is characteristic of physiological functions such as cell cycle progression [24] and innate immune responses [25]. As a matter of fact, CLIC1 functional activity was demonstrated to be increased in actively dividing cells, or in cells that have completed mitotic processes [5,26]. Recently, a correlation has been demonstrated between CLIC1 activity and Peroxiredoxin 6 (Prdx6), a protein that influences the redox environment. Downregulation of Prdx6 causes an alteration of the oxidation of the CLIC1 GST domain, causing cell swelling and cell cycle arrest [27]. Redox modifications are also involved in immune functions. Valenzuela and colleagues cloned CLIC1 for the first time to investigate its role in activated macrophages, where it was found to be overexpressed compared to resting immune cells [17]. In resting macrophages CLIC1 is cytoplasmic, while after stimulation with pro-inflammatory factors, CLIC1 migrates to the plasma membrane of cells and colocalizes with the NADPH oxidase complex subunit Rac2 [28]. The transient translocation of CLIC1 to the plasma membrane of macrophages promotes phagosome acidification from pH 4.4 to 3.4. Clic1−/− macrophages display impaired phagosome proteolysis and an altered ability to kill microorganisms and to expose antigens to T cells [28]. Moreover, it has been demonstrated that tmCLIC1 activity contributes to the formation of the inflammasome by modulating the NLRP3 complex [14,25], a multiprotein complex crucial for the immune system, which regulates the activation of Caspase 1 and the release of cytokines such as Interleukin 6 (IL-6) and Interleukin 8 (IL-8), promoting pyroptosis (inflammatory programmed cell death). In addition, Domingo-Fernandez and colleagues have found the CLIC1-mediated current to be essential in the enzymatic cascade for activation of the NLRP3 inflammasome. The NLRP3 agonist (NEK7) causes a potassium efflux that provokes mitochondrial damage and release of ROS, with a consequent activation of Caspase 1 and release of IL-6 and IL-8 [14,25]. Recent studies have revealed a role of the CLIC1 protein in angiogenesis, supporting ROS production for proliferation and migration of endothelial cells [29][30][31]. Tung and Kitajewski demonstrated that CLIC1 plays a role in endothelial cell growth, sprouting, branching, and migration, regulating the expression of the integrin subunits β1 and α3 [32]. Moreover, CLIC1 contributes to the endothelial damage response cascade, translocating to the plasma membrane of human umbilical vein endothelial cells (HUVEC), where it enhances the expression of tumor necrosis factor α (TNF-α), IL-1β, intracellular adhesion molecule 1 (ICAM1), and vascular cell adhesion protein 1 (VCAM1) [33]. The mechanism of action by which CLIC1 could support the activation of transitory allostatic processes was investigated by Milton and colleagues, who postulated a feed-forward mechanism involving CLIC1 protein functional activity. NADPH oxidation causes an extrusion of electrons across the plasma membrane, and the resulting depolarization downregulates the bioenergetics of enzymatic activity.
CLIC1 conductance during ROS overproduction supports the setting of the resting membrane potential, allowing the electrogenic activity of enzymes and ensuring further ROS production. CLIC1 translocation to the plasma membrane could be considered a compensatory mechanism to support cellular physiological functions in the presence of oxidative stress [34]. Considering this evidence, it has been postulated that CLIC1 acts as a second messenger which can translocate transiently, as in macrophage activation and cell cycle phases, or chronically to the plasma membrane of hyperactivated systems [26]. Chronic oxidative stress and pH alkalization are hallmarks of different pathologies. Therefore, chronic expression of CLIC1 in the plasma membrane could be considered a promising target to identify pathological conditions.
CLIC1 during Chronic Allostasis
The literature reports CLIC1 to be linked to cancer development and neurodegenerative processes. Both share a critical modification of the overall oxidative state towards an imbalance of reactive oxygen species, despite being associated with opposite outcomes.
CLIC1 in Solid Tumors
The CLIC1 protein was found to be overexpressed in several solid tumors. According to "The Human Protein Atlas" RNA database, gliomas, colorectal cancer, lung, ovarian, pancreatic, prostate, breast, and melanoma cancers show higher levels of CLIC1 RNA. Although the role of the CLIC1 protein in tumorigenesis is still unclear, in recent years different studies have elucidated the possible involvement of CLIC1 in tumor formation and progression. Lu and colleagues have postulated that CLIC1 acts as an oncogene in pancreatic cancer. Here, patients with CLIC1-positive tumors have demonstrated worse overall survival compared to those with CLIC1-negative tumors [35]. CLIC1 expression is correlated with a poor prognosis not only in pancreatic cancer, but also in tumors such as lung cancer [36], ovarian cancer, where CLIC1 upregulation was correlated with chemotherapy resistance [37], and gallbladder and gastric cancers [38], where it was found to promote cell proliferation via MAPK/AKT regulation [39] and to facilitate the formation of tumor-associated fibroblasts [40]. In addition, CLIC1 protein upregulation correlates with the level of aggressiveness and metastatic potential of colorectal cancer cells [41], where it was demonstrated to regulate cell volume and ROS levels. One of the aspects of cancer is its heterogeneity: it is composed of differentiated cells, representing the major component of the tumor mass, and cancer stem cells (CSCs) [42]. Although cancer stem cells constitute a small percentage of tumor cells (0.05-1%), they are the prime sources of tumor recurrence and metastasis, as they confer resistance to chemo- and radiotherapies [43]. Cancer stem cells are able to take advantage of the aberrant redox system. In particular, it was demonstrated that oxidative stress and gene-environment interactions support the development of a huge variety of solid tumors, such as glioblastoma, breast, prostate, pancreatic, and colon cancer [44]. CLIC1 expression in the plasma membrane is strictly dependent on redox homeostasis and pH levels, supporting tumorigenesis and the development of cancer [26]. Therefore, considering the role of oxidative stress in CSCs, a hypothesis could be that the tmCLIC1 protein has an important role in CSC physiology. In particular, CLIC1 functional activity was assessed in glioblastoma stem cells (GSCs). Electrophysiological experiments have revealed a significant increase in CLIC1-mediated current in cells positive for stem/progenitor cell markers (Sox2, Nestin), demonstrating that tmCLIC1 is chronically expressed in the GSC compartment compared to the differentiated one [45]. In addition, Setti and colleagues have shown that tmCLIC1 functional activity has a pivotal role in glioblastoma stem cells, supporting self-renewal. Moreover, the tumorigenic capability of GSCs in which CLIC1 is inhibited or silenced was significantly reduced [42,46]. The tmCLIC1 protein was demonstrated to have a pivotal role in cell cycle progression and cellular proliferation, promoting the G1/S cell cycle transition in GSCs. Peretti and colleagues proposed that tmCLIC1 contributes to a feed-forward mechanism together with the NHE1 proton pump and NADPH oxidase, promoting ROS overproduction [26]. According to this evidence, tmCLIC1 could represent an important target to sensitize CSCs toward chemo- and radiotherapies, increasing tumor response to anticancer treatments.
CLIC1 in Neurodegenerative Processes
The first evidence of tmCLIC1 involvement in neurodegenerative processes dates to 2004 [47]. In the paper, the authors show an increase in protein expression and enhanced functional activity in response to Aβ peptide incubation in microglial cells. In addition, the Aβ-induced release of nitric oxide (NO), as well as of TNF-α, is prevented by impairing tmCLIC1 function through its pharmacological inhibition and through RNA interference, supporting an active role of the protein during neurodegenerative processes. A possible mechanism by which tmCLIC1 would support the neurodegenerative process was suggested in a following work [34]. In particular, its ion channel activity would sustain the Aβ-induced ROS production by microglial cells, acting as a charge compensator which balances the electrogenic activity of the NADPH oxidase (NOX2). The correlation between tmCLIC1 functional activity and neurodegeneration was further confirmed by the observation of high protein levels in sections from 3xTg-AD mouse brain, characterized by a progressive deposition of Aβ during life. The authors concluded that tmCLIC1 inhibition could represent a promising anti-inflammatory target for Alzheimer's Disease therapy. This was further strengthened by the work of Paradisi et al. [48], where the authors have shown that the inhibition of tmCLIC1 activity by pharmacological blockers and its suppression by RNA interference do not alter the activation of resident microglia. In particular, they show that affecting the tmCLIC1 ion channel produces a reduction in some toxic aspects of activated microglia (i.e., NO production) while leaving its phagocytic ability unaltered. Moreover, in a following work it was shown that, when co-culturing neurons and microglia in a two-chamber transwell, IAA94 treatment was able to prevent the neurotoxicity induced by treatment with the Aβ1-42 peptide [49]. In addition, they show that the tmCLIC1 ion channel blocker IAA94 failed to protect cortical neurons when directly exposed to Aβ1-42, suggesting the phenomenon to be dependent on the lack of CLIC1 expression by these cells, or on the lack of a pro-oxidant environment to provoke CLIC1 membrane insertion. Since soluble and fibrillar Aβ are both able to induce ROS production in microglia, it is reasonable to think that this is the cellular subtype where tmCLIC1 inserts, forming a functional channel. In a recent study it was shown that peripheral blood mononuclear cells (PBMC), and in particular monocytes, isolated from Alzheimer's Disease patients show an overexpression of CLIC1 mRNA that is accompanied by a significant increase in transmembrane protein [50]. The authors concluded that the study could pave the way for future strategies aimed at discriminating between healthy individuals and patients with ongoing neurodegenerative processes. A similar result was achieved by Miller et al. [51] by means of blood microarray data and a machine learning approach to predict cognitive status using three groups: cognitively normal, mild cognitive impairment, and probable Alzheimer's Disease. Blood RNA levels showed that CLIC1 was the only significant probe to change among groups, although, to date, it is not sufficient on its own to predict neurodegenerative progression toward Alzheimer's Disease.
Conclusions
CLIC1 protein has been described in several papers as being associated with different pathological states, from neurodegenerative processes to solid tumors. However, CLIC1 is better described as a determinant of adaptation in different biological contexts, rather than as an inducer of disease development. CLIC1 is one of several proteins that react to stress conditions by promoting cell survival. The CLIC1 protein is conserved from yeast to humans and expressed in the cytoplasm of every cell type. A peculiar feature of CLIC1 is that it is present on the cell membrane only in hyperactivated systems. This is particularly important for cell targeting in the diagnosis of a pathological state or as a possible pharmacological target in therapeutic protocols. In all living organisms, cells usually maintain a dynamic state of equilibrium defined as the homeostatic state. In the case of profound and persistent changes in the surrounding environment, there are two main possible cell reactions. Cells involved in defense mechanisms, such as cells of the immune system, become persistently activated. On the contrary, most somatic cells are not able to cope with the new conditions and are negatively selected. However, a small percentage of somatic cells are able to activate stable allostatic mechanisms and survive. It is important to underline that allostatic cells are not similar to the original ones, but form a new population with completely different physiological characteristics. The majority of these new cell populations represent an adaptation (Figure 1).
Figure 1.
Graphic representation of the dual behavior of cells under chronic stress. All cells normally exist in a dynamic state of equilibrium defined as homeostasis, in which they respond to stress stimuli with a transient activation of allostatic mechanisms. When the stimuli become persistent, the hyperactivation is irreversible. In most cases, this is not compatible with cell life, leading to cell death. In rare cases, the hyperactivation results in the establishment of a new steady state far removed from that of the initial cell.
From a physiological point of view, tumorigenesis can be defined as an adaptation. Uncontrolled proliferation can be seen as an extreme survival mechanism in response to a chronic stress state generated by a hostile environment. In this scenario, among other proteins, CLIC1 is instrumental for this transformation. In particular, as a metamorphic protein, its translocation to the plasma membrane could be a first-line response to diverse stimuli. Transmembrane insertion and the consequent increased ionic permeability can represent a fast reaction to counteract cytoplasmic imbalance. As mentioned before, cytoplasmic CLIC1 protein binds GSH. In adverse conditions, the rearrangement of the protein structure releases GSH, contributing to buffering cytoplasmic oxidation. At the same time, the association of CLIC1 with the membrane as an ionic channel guarantees the possibility of dispersing any excess charge that could accumulate during a persistent condition of oxidative stress. In the central nervous system, the presence of high concentrations of β-amyloid generates a state of oxidative stress that triggers microglia activation aimed at phagocytosing oligomers and fibrils. The release of a huge amount of amyloid causes microglia hyperactivation that becomes harmful for neurons and glial cells, generating the process of neurodegeneration. In solid tumors, the process may be different. Chronic conditions of oxidative stress due to environmental causes such as pollution, smoke, poor or excessive diet, or hormonal dysregulation could induce cell "adaptation", giving rise to a neoplastic process. Both these conditions show a direct correlation between the presence of membrane CLIC1 and the state of microglia activation or tumor aggressiveness. The initial mechanism of neurodegeneration and tumorigenesis in sporadic cases is still obscure. However, CLIC1 functional expression as a membrane protein could be symptomatic of hyperactivated states such as neurodegeneration or neoplastic transformation. In this perspective, tmCLIC1 could represent a promising element as a diagnostic tool, as well as a therapeutic target.
Author Contributions: F.C. and I.V. contributed equally to the present work. All authors have read and agreed to the published version of the manuscript.
Funding:
The APC was funded by an AIRC grant to Michele Mazzanti, Grant no. 24758; I.V. was supported by an AIRC fellowship for Italy. | 5,815.6 | 2021-07-01T00:00:00.000 | [
"Biology"
] |
Nephroprotective Activities of Ethanolic Roots Extract of Pseudocedrela kotschyi against Oxidative Stress and Nephrotoxicity in Alloxan-induced Diabetic Albino Rats
Introduction: The present study was designed to evaluate the nephroprotective activities of the ethanolic roots extract of Pseudocedrela kotschyi against oxidative stress and nephrotoxicity in alloxan-induced diabetic albino rats. Methodology: Diabetes was induced in albino rats by administration of alloxan monohydrate (150 mg/kg, i.p.). The ethanolic roots extract of Pseudocedrela kotschyi, at doses of 250 and 500 mg/kg body weight, was administered once daily to the diabetic rats for a period of 28 days. The effects of the extract on blood glucose, urea, creatinine, renal oxidative stress markers, and lipid peroxidation were measured in the diabetic rats. Results: The ethanolic roots extract of Pseudocedrela kotschyi produced a significant reduction of blood glucose (p<0.05) at doses of 250 and 500 mg/kg when compared with the standard drug glibenclamide (10 mg/kg). Urea and creatinine levels were significantly increased (p<0.05) in the untreated diabetic group compared to the control. In addition, the levels of oxidative stress markers such as Superoxide Dismutase (SOD), Catalase (CAT), Glutathione Peroxidase (GPx), and Glutathione (GSH) were significantly decreased (p<0.05) in diabetic rats compared to normal rats, while lipid peroxidation (MDA) was significantly increased (p<0.05) in the untreated diabetic group compared to the control (normal) rats. Apart from these, histopathological changes also revealed the cytoprotective nature of the ethanolic roots extract of Pseudocedrela kotschyi against alloxan-induced necrotic damage of renal tissues. Conclusion: From the above results, we conclude that the ethanolic roots extract of Pseudocedrela kotschyi can prevent renal damage from alloxan-induced nephrotoxicity in rats, and that this effect is likely mediated through its antioxidant activities.
INTRODUCTION
Diabetes Mellitus (DM) is a clinical condition in which disturbances of carbohydrate, lipid, and protein metabolism contribute to several kinds of complications, including diabetic nephropathy. Diabetic nephropathy is one of the major complications of diabetes mellitus and the most important cause of death in diabetics, of whom 30-40% eventually develop end-stage renal failure (Giorgino et al., 2004).
It has been reported that diabetic complications are associated with overproduction of Reactive Oxygen Species (ROS) and accumulation of lipid peroxidation by-products (Palanduz et al., 2001). These complications are considered the leading causes of death among these patients. Oxidative stress is generally considered an imbalance between pro-oxidants and antioxidants (Lieber, 1997). Oxidation plays a major role in diabetes: an increase in free radical release accompanied by a decrease in antioxidants is a major cause of diabetes (Mohamed et al., 1999). In diabetes mellitus, there are usually alterations in the endogenous free radical scavenging defenses, which lead to ineffective scavenging of reactive oxygen species and result in oxidative damage (Oberley, 1988).
Experimental diabetes induced by alloxan selectively destroys the β-cells of the pancreas by generating excess reactive oxygen species and produces kidney lesions that are similar to human diabetic nephropathy (Boukhris et al., 2012).
Thus, early control of DM is recommended as one of the main strategies to prevent these complications and increase the life span of these patients. The use of herbal medicine for the treatment of DM has gained prominence because of the undesirable side effects of oral anti-diabetic drugs, coupled with the frequent failure of beta cells to respond to treatment (Earl, 2005). The herbal approach received a boost following the WHO recommendation for research on the beneficial uses of medicinal plants in the treatment of DM (WHO, 1980). Herbal remedies are also considered more potent than most Western medicines, which are often made of single chemical compounds effective for direct relief of the symptoms (Li et al., 2004).
Pseudocedrela kotschyi (PK) is a member of the family Meliaceae. The plant is widespread in savannah woodland (Hutchinson and Dalziel, 1958). It is a tree of up to 20 m high with a wide crown and fragrant white flowers (Shahina, 1989). It is commonly found in West and Tropical Africa, and in abundance particularly in North Central Nigeria; it is commonly known as Emi gbegi among the Yoruba and Tuna among the Hausa. In Togo, the bark is used not only as a febrifuge but also in the treatment of gastrointestinal diseases and rheumatism (Hutchinson and Dalziel, 1958). The plant has also been reported to be used traditionally in the treatment of dysentery (Shahina, 1989). The analgesic and anti-inflammatory activities of the plant (Musa et al., 2005), its antiepileptic activity (Anuka et al., 1999), and its dental cleansing effect have also been reported (Akande and Hayashi, 1998; Okunade et al., 2007). This plant has demonstrated a wide range of biological effects such as antimalarial (Asase et al., 2005), anticonvulsant (Odugbemi, 2006), antibacterial (Koné et al., 2004), antipyretic (Akuodor et al., 2013), haematinic (Ojewale et al., 2013a) and antidiabetic activities (Georgewill and Georgewill, 2009; Bothon et al., 2013).
However, despite the widespread use of Pseudocedrela kotschyi as a folk medicine to manage DM and other ailments, its protective effects on the renal system have not been established.
Therefore, the present study was designed to evaluate the nephroprotective effects of the ethanolic roots extract of Pseudocedrela kotschyi against nephrotoxicity and oxidative stress in alloxan-induced diabetic rats.
MATERIALS AND METHODS
Collection of the Plant material: Pseudocedrela kotschyi (PK) roots were collected from cultivated farmland at Kulende, Ilorin, Kwara State, Nigeria. The plant was identified and authenticated at the Forestry Research Institute of Nigeria (FRIN), where a voucher specimen was deposited in the herbarium (FHI 108280).
Preparation of the plant extract:
The roots of the plant were shade-dried at room temperature for 7 days and then powdered using a mortar and pestle. 850 g of the root powder was extracted with 96% ethyl alcohol in three cycles using a Soxhlet extractor. The crude extract was filtered through filter paper (Whatman No 4), and the filtrate was concentrated and dried in a rotary vacuum evaporator under reduced pressure at 30°C to obtain 105.2 g of dry residue (12.3% yield), a viscous brownish-coloured extract that was stored in an airtight bottle kept in a refrigerator at 4°C until used.
Experimental animals: Twenty-five healthy albino rats weighing between 160 and 180 g were obtained from the Laboratory Animal Center of the College of Medicine, University of Lagos, Idi-Araba, Lagos, Nigeria. The rats were housed in clean metallic cages in a well-ventilated room at 24±2°C with a 12 h light/dark cycle throughout the experimental period and were allowed to acclimatize to the laboratory conditions for one week before being used. They were fed standard animal pellets (Livestock Feeds Plc., Nigeria) and had access to water ad libitum.
The animals were carefully checked and monitored every day for any changes. The experiments complied with the guidelines of our animal ethics committee which was established in accordance with the internationally accepted principles for laboratory animal use and care.
Acute toxicity studies: The acute toxicity of the ethanolic roots extract of Pseudocedrela kotschyi was determined using 35 male Swiss albino mice (20-22.5 g) maintained under standard conditions. The animals were randomly distributed into a control group and six treated groups of five animals each. After the animals had been deprived of food for 12 h prior to the experiment, with access to water only, the control group received a single oral dose of 0.3 mL of 2% acacia solution, while each treated group received a single oral dose of the ethanolic roots extract of Pseudocedrela kotschyi in 2% acacia solution at 1.0, 2.5, 5.0, 10, 15, and 20.0 g/kg body weight, respectively. The animals were closely observed during the first 4 h, then at hourly intervals for the next 12 h, and thereafter periodically for the next 56 h and for a further 2 weeks after administration, to record any death or changes in behavioural, neurological, or other physiological activities (Ecobichon, 1997; Burger et al., 2005).
Experimental design:
To induce diabetes, rats were first anesthetized by inhalation of nitrous oxide gas. ALX (alloxan monohydrate) was purchased from a representative of Sigma in Nigeria and was prepared in freshly prepared normal saline. Diabetes was induced by intraperitoneal (i.p.) injection of alloxan monohydrate (150 mg/kg bwt) in a volume of 3 mL (Ojewale et al., 2013a). After 72 h, blood was withdrawn for blood glucose estimation with a glucometer (ACCU-CHEK, Roche Diagnostics). Animals with a blood glucose level ≥250 mg/dl were considered diabetic and included in the experiment (Ojewale et al., 2013b).
The diabetic animals were randomly distributed into four groups of five animals each, while a further group of five normal rats served as the positive control. Treatments were as follows:
Group I: normal rats that received only the vehicle (0.5 mL/kg body weight) and served as the positive control.
Group II: alloxan-diabetic rats that received only the vehicle (0.5 mL/kg body weight) (negative control).
Group III: alloxan-diabetic rats treated with glibenclamide at a dose of 10 mg/kg bwt.
Group IV: alloxan-diabetic rats treated with Pseudocedrela kotschyi at a dose of 250 mg/kg bwt.
Group V: alloxan-diabetic rats treated with Pseudocedrela kotschyi at a dose of 500 mg/kg bwt.
Treatments were administered every day by intragastric gavage. Rats were maintained on these treatment regimens for four weeks with free access to food and water ad libitum. Body weight was recorded weekly over the 4 weeks.
Sample collection: Blood sample was collected every week from each animal and was used for glucose analysis. The remaining blood sample was put into sterile tubes and allowed to clot for 30 min and centrifuged at 4000 rpm for 10 min using a bench top centrifuge.
At the end of the experimental period, each rat was reweighed and fasted for 24 h. Then, a blood sample was collected from each animal by cardiac puncture, and the rats were sacrificed under diethyl ether anesthesia. The kidneys were carefully dissected out, rinsed in cold saline solution, weighed, and processed immediately as described below.
Histological analysis: This was done as described by Ogunmodede et al. (2012). Briefly, the kidneys were cut into slabs about 0.5 cm thick and fixed in Bouin's fluid for a day, after which they were transferred to 70% alcohol for dehydration. The tissues were passed through 90% alcohol and chloroform for different durations before being transferred into two changes of molten paraffin wax for 20 min each in an oven at 57°C. Serial sections 5 µm thick were obtained from a solid block of tissue and stained with haematoxylin and eosin, after which they were passed through a mixture of equal concentrations of xylene and alcohol. Following clearance in xylene, the tissues were oven-dried and photomicrographs were taken. Evaluation of biochemical parameters: Glucose determination: Glucose was measured by the glucose oxidase method using a commercially available kit (ACCU-CHEK, Roche Diagnostics).
Creatinine and urea: Creatinine and Urea were determined using colorimetric assay kits from sigma (Lab-kit, Spain).
Determination of renal enzymatic antioxidants:
Assay of Catalase (CAT) activity: Catalase activity was evaluated according to the method described by Aebi (1984). Activity of catalase was expressed as units/mg protein.
Assay of Superoxide Dismutase (SOD) activity:
Superoxide dismutase activity was evaluated according to the method described by Winterbourn et al. (1975). It was expressed as units/mg protein.
Assay of Glutathione Peroxidase (GPx) activity:
Glutathione peroxidase activity was determined by the method described by Rotruck et al. (1973). The absorbance of the product was read at 430 nm, and the activity was expressed as nmol/mg protein.
Determination of renal non-enzymatic antioxidants:
Assay of renal reduced Glutathione (GSH) concentration: Reduced Glutathione (GSH) was measured according to the method described by Ellman (1995). The absorbance was read at 412 nm, and the concentration was expressed as nmol/mg protein.
Assay of lipid peroxidation (Malondialdehyde):
Lipid peroxidation in the renal tissue was measured colorimetrically by the Thiobarbituric Acid Reactive Substances (TBARS) method described by Ohkawa et al. (1979). The concentration was estimated using the molar absorptivity of malondialdehyde, 1.56×10⁵ M⁻¹ cm⁻¹, and was expressed as nmol/mg protein.
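The conversion from measured absorbance to MDA concentration follows directly from the Beer-Lambert law with the molar absorptivity quoted above. A minimal sketch of this arithmetic is given below; the 1 cm path length and the protein value in the example are illustrative assumptions, not parameters reported in this study.

```python
# Minimal sketch of the TBARS-to-MDA conversion via the Beer-Lambert law.
# Path length and the protein value in the example are assumptions for illustration.
EPSILON_MDA = 1.56e5      # molar absorptivity of the MDA-TBA adduct, M^-1 cm^-1
PATH_LENGTH_CM = 1.0      # assumed cuvette path length

def mda_nmol_per_mg_protein(absorbance: float, protein_mg_per_ml: float) -> float:
    """Convert a TBARS absorbance reading to MDA content in nmol per mg protein."""
    molar_conc = absorbance / (EPSILON_MDA * PATH_LENGTH_CM)   # mol/L
    nmol_per_ml = molar_conc * 1e6                             # 1 mol/L = 1e6 nmol/mL
    return nmol_per_ml / protein_mg_per_ml

# Example: absorbance 0.078 with 2 mg protein per mL of homogenate -> ~0.25 nmol/mg protein.
print(round(mda_nmol_per_mg_protein(0.078, 2.0), 3))
```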
Statistical analysis: Data are presented as means±SD. Student's t-test analysis was applied to test the significance of differences between the results of the treated, untreated and control groups. The difference was considered significant at the conventional level of significance (p<0.05).
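As a sketch of the group comparison described above (Student's t-test at the 5% significance level), the snippet below uses purely illustrative numbers, not data from this study.

```python
# Illustrative Student's t-test between two groups (values are made up, not study data).
from scipy import stats

control_glucose  = [92, 88, 95, 90, 89]          # hypothetical mg/dl, normal control
diabetic_glucose = [310, 295, 330, 305, 320]     # hypothetical mg/dl, diabetic untreated

t_stat, p_value = stats.ttest_ind(control_glucose, diabetic_glucose)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}, significant at p<0.05: {p_value < 0.05}")
```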
Acute toxicity:
The acute toxicity study result (Table 1) showed that five out of the five animals that received 20.0 g/kg bwt of the extract died within 4 h (100% mortality), while the animals that received 5 g/kg body weight survived beyond 24 h. The LD50 of the extract was therefore calculated to be 8.85 g/kg bwt. Effect on body weight of rats: The control group (I) gained weight over the four-week experimental period, with the mean body weight increasing by 22.2 g after 4 weeks (Table 2). In contrast, the untreated diabetic group (II) lost an average of 21.1 g after 4 weeks (p<0.05). Treatment with glibenclamide and Pseudocedrela kotschyi resulted in significant weight gain to levels approaching those of the control group (Groups III, IV and V versus Group I). Mean kidney weight in the diabetic untreated group decreased significantly compared to that of the control group, whereas in the diabetic groups treated with Pseudocedrela kotschyi and glibenclamide this decrease was attenuated, reflecting the ability of the extract to restore the weight lost due to alloxan administration.
Effect of the ethanolic roots extract of Pk on blood glucose level:
The blood glucose level in the diabetic group was significantly higher (p<0.05) than that of the control group (Table 3). On the other hand, administration of the ethanolic roots extract of P. kotschyi for 28 days was found to lower blood glucose significantly and in a dose-dependent manner in the treated diabetic groups (p<0.05) when compared with the diabetic untreated (negative) group. The anti-hyperglycemic effect of the ethanolic extract (250 and 500 mg/kg) was even more pronounced than that of the reference drug glibenclamide, which also produced a significant reduction in blood glucose compared to the diabetic control.
Effect of the ethanolic roots extract of Pk on Urea and Creatinine:
The study also indicates that serum urea and creatinine levels were significantly (p<0.05) increased in the untreated diabetic group; in the diabetic groups treated with the extract, the serum urea and creatinine levels were reduced significantly (p<0.05) when compared to those of the untreated diabetic group (Table 4).
Effect on renal enzymatic and non-enzymatic antioxidants:
Diabetic rats showed a significant lowering (p<0.05) of Superoxide Dismutase (SOD), Catalase (CAT), and Glutathione Peroxidase (GPx) activities compared to control animals. Diabetic rats treated with Pseudocedrela kotschyi showed significantly higher (p<0.05) kidney SOD, CAT, and GPx activities compared to diabetic rats without treatment. Along the same lines, for the kidney contents of Glutathione (GSH) and Malondialdehyde (MDA), the GSH level in diabetic rats was significantly low (p<0.05) compared to normal rats, whereas diabetic rats treated with Pseudocedrela kotschyi showed a significant increase (p<0.05) in kidney GSH content compared to normal rats. On the other hand, the MDA level was significantly increased (p<0.05) in untreated diabetic rats compared to normal rats, while the MDA levels in the diabetic groups treated with Pseudocedrela kotschyi (Table 4) were significantly lower (p<0.05) than in untreated diabetic rats.
RENAL MORPHOLOGY
The biochemical results were also confirmed by the histological findings: the normal kidney showed normal tubular brush borders and intact glomeruli and Bowman's capsules (Fig. 1). The diabetic untreated group showed severe tubular necrosis and degeneration in the renal tissue (Fig. 2). The diabetic group treated with glibenclamide (10 mg/kg body weight) showed a normal tubular pattern with a mild degree of swelling, necrosis, and degranulation (Fig. 3). In the diabetic group treated with the ethanolic roots extract of Pseudocedrela kotschyi (500 mg/kg body weight), the toxic manifestations in the kidney caused by alloxan induction were attenuated (Fig. 4).
DISCUSSION
The present study aimed to evaluate the nephroprotective effects of the ethanolic roots extract of Pseudocedrela kotschyi against nephrotoxicity and oxidative stress in alloxan-induced diabetic male albino rats.
Diabetic nephropathy is one of the major complications of diabetes and is associated with the excretion of albumin in urine (Harcycy, 2002). Microalbuminuria and proteinuria typically reflect the presence of moderate and severe lesions, respectively, in kidney disease (Zipp and Schelling, 2003). The development of diabetic nephropathy is characterised by a progressive increase in urinary protein, particularly albumin, and a decline in glomerular filtration rate, which eventually leads to end-stage renal failure (Remuzzi et al., 2006). It can be interpreted from the results that the median acute toxicity (LD50) value of the extract was 8.85 g/kg bwt. The extract can be classified as non-toxic, since the LD50 by the oral route was found to be much higher than the WHO toxicity threshold of 2 g/kg.
Our current data indicate that the blood glucose level significantly increased, while body weight gain decreased, after injection of ALX in albino rats.
The ethanolic roots extract of Pseudocedrela kotschyi normalized the high blood glucose levels in diabetic rats. Glibenclamide was used as the reference drug in the diabetic model. It is interesting to note that the extract was more effective than the reference drug.
However, the higher concentration of the extract required compared with the reference drug glibenclamide could be because only a small amount of the active substance is present in the extract. Since good activity was seen in diabetic rats with damaged glomeruli, it is likely that the ethanolic roots extract of Pseudocedrela kotschyi has some direct effect by increasing the tissue utilization of glucose (Ali et al., 1993).
In the present study, a single dose of alloxan injection affected kidney function and produced a marked increase in glucose, urea, creatinine, and lipid peroxidation levels and a decrease in oxidative stress markers over a period of 28 days. Treatment with both the ethanolic roots extract of Pseudocedrela kotschyi and glibenclamide prevented urinary excretion of glucose. Importantly, the marked reduction in glucose, urea, creatinine, and lipid peroxidation levels and the increase in oxidative stress markers over the 28-day period brought about by the ethanolic roots extract of Pseudocedrela kotschyi indicate its protective effect on renal function (De Zeeuw, 2007; Boukhris et al., 2012).
Urea and creatinine are waste products formed during the metabolism of proteins. An increase in serum urea and creatinine levels in ALX-induced diabetic rats may indicate a diminished ability of the kidneys to filter these waste products from the blood and excrete them in the urine, which is another characteristic change in diabetes. The main function of the kidneys is to excrete the waste products of metabolism and to regulate the body's concentrations of water and salt.
Furthermore, the results indicate that treatment of the diabetic groups with the ethanolic roots extract of Pseudocedrela kotschyi significantly reduced serum urea and creatinine levels. Based on these findings, the extract of this plant may enhance the ability of the kidneys to remove these waste products from the blood, as indicated by the reduction in serum urea and creatinine levels, and thus confer a protective effect on the kidneys of diabetic rats.
In addition, administration of nephrotoxic doses of alloxan to rats resulted in the development of oxidative stress damage in renal tissue. In this study, alloxan-induced nephrotoxicity was reflected in a significant (p<0.05) increase in the serum urea and creatinine concentrations in the Group II (diabetic untreated) rats when compared to the normal group (Group I). Moreover, oral administration of the ethanolic roots extract of Pseudocedrela kotschyi significantly (p<0.05) decreased the levels of urea and creatinine in Groups IV and V when compared to Group II, whereas the levels of urea and creatinine remained significantly increased (p<0.05) in the Group II rats when compared to Group I.
Thus, oxidative stress and lipid peroxidation are early events related to the radicals generated during the renal metabolism of alloxan, and the generation of reactive oxygen species has been proposed as a mechanism by which many chemicals can induce nephrotoxicity (Roy et al., 2005; Boukhris et al., 2012).
Evaluation of SOD, CAT, GPx, lipid peroxidation, as well as GSH content and other antioxidant enzyme activities in biological tissue has long been used as a marker of tissue injury and oxidative stress (Roy et al., 2005; Boukhris et al., 2012).
In view of the established role of oxidative stress and altered antioxidant levels in the pathogenesis of diabetic complications, we have evaluated the effect of the ethanolic roots extract of Pseudocedrela kotschyi on the levels of SOD, CAT, GPx, GSH and MDA in the kidney tissue of the rats. Like the ethanolic roots extract of Pseudocedrela kotschyi, glibenclamide also afforded antioxidant protection to the kidney.
Previous studies have clearly demonstrated that alloxan induction increases the lipid peroxidation and suppresses the antioxidant defense mechanisms in renal tissue (Boukhris et al., 2012).
During kidney damage, superoxide radicals are generated at the site of derangement and attenuate SOD and CAT, resulting in a loss of activity and accumulation of superoxide radicals, which damage the kidney. SOD and CAT are the most important enzymes involved in ameliorating the effects of oxygen metabolism (Roy et al., 2005; Boukhris et al., 2012).
The present study also demonstrated that alloxan induction resulted in a decrease in SOD, CAT, GPx, and GSH activities when compared to normal control rats, which may be due to enhanced lipid peroxidation or inactivation of the antioxidative enzymes. When diabetic rats were treated with Pseudocedrela kotschyi, this reduction in SOD, CAT, GPx, and GSH activities was significantly (p<0.05) reversed compared with the diabetic untreated group. In the diabetic untreated animals, the MDA levels increased significantly compared to normal control rats, whereas on administration of the ethanolic extract of Pseudocedrela kotschyi the levels of MDA decreased significantly compared to untreated diabetic rats.
The histopathological studies of the kidney in this investigation provided additional evidence that damaged renal cells recovered with the treatment of the ethanolic roots extract of Pseudocedrela kotschyi. The photomicrographs revealed severe degeneration of tubules and glomeruli, focal necrosis of tubules, cystic dilatation of tubules, and fatty infiltration in diabetic control rats. These pathological conditions might be associated with increased diuresis and renal hypertrophy in the diabetic rats. Here, we demonstrated that the injury to cells in the diabetic rats recovered upon treatment with the ethanolic roots extract of Pseudocedrela kotschyi over 28 days. In this study, the nephroprotective effects of the extract were evident in the photomicrographs, in which the glomeruli appeared to be restored and the tubules appeared to be regenerated.
CONCLUSION
We conclude that the ethanolic roots extract of Pseudocedrela kotschyi improved renal function more effectively than the reference drug glibenclamide and ameliorated the lesions associated with diabetic nephropathy in alloxan-induced nephrotoxicity in rats. This was shown by the improved activities of metabolic enzymes and the recovery of renal cells from injury following treatment with the extract in the diabetic rats. These results further suggest the possible use of the ethanolic roots extract of Pseudocedrela kotschyi as a nutraceutical supplement to cope with diabetes-induced detrimental effects and to protect renal cells from damage.
"Biology",
"Medicine"
] |
Oxidation Behavior of Cu Doped CrAlN Coating Deposited by Magnetron Sputtering at 800°C
A CrAlN coating with a moderate Cu content was deposited on the surface of 1Cr18Ni9Ti stainless steel by DC reactive magnetron sputtering. The oxidation behavior of the coatings at 800°C was investigated in this research. The phase constitution, microstructure, and chemical composition were analyzed by X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), and energy dispersive spectroscopy (EDS), and the mechanical properties of the coatings were measured with a microhardness tester and a multi-function scratch tester. As a comparison, the properties of the undoped CrAlN coating were also analyzed. The results showed that the microstructure of the CrAlCuN coating became smoother and denser, with a phase structure of CrN and Cu. With Cu doped into the CrAlN coating, a higher hardness of 48.37 GPa was obtained, while the bonding strength of the coating decreased, because a higher microhardness usually releases more internal stress. Neither coating failed completely after oxidation, and the CrN diffraction peak was still detected. The oxidation weight-gain rate decreased significantly with Cu doping, which improved the resistance to high-temperature oxidation to a certain extent.
Introduction
Conventional binary nitride coatings have received extensive attention over the past decades because of their high hardness and excellent wear resistance. TiN, an early hard coating, has excellent mechanical properties, but its moderate thermal stability limits its application areas [1]. In comparison, CrN coatings exhibit enhanced performance under high-temperature oxidation because of the formation of a dense oxide film. However, traditional binary coatings cannot satisfy the demands of use in increasingly harsh working environments [2]. Alloying transition metal elements into these binary coatings can improve the mechanical properties and thermal stability, which may expand their application scope. The element Al has been studied the most. CrAlN coatings are developed from binary Cr-based coatings and may attain higher hardness and better high-temperature oxidation resistance through the addition of Al [3-5]. The main reason for the improved performance is the formation of a dense and adherent (Cr,Al)2O3 mixed oxide film after high-temperature annealing [6]. Previous studies indicated that the presence of Al in the coating forms a metastable phase in the cubic lattice of the c-CrN coating. In addition, numerous works have pointed out that CrAlN coatings have advanced resistance to high-temperature oxidation, whereas their thermal stability is lower because of the presence of w-AlN during the oxidation process [7]. Other studies have pointed out that N loss may be further aggravated by the annealing process, which deteriorates the overall practical properties and restricts the serviceable range. Several studies have indicated that mixing a fourth element into CrAlN can constitute a further improvement; in this work, Cu was chosen as the doping element and the oxidation behavior of the resulting CrAlCuN coating at 800°C was investigated.
Results and discussion
The phases of CrAlN and CrAlCuN coatings. Figure 2 shows the XRD patterns of the CrAlN and CrAlCuN coatings before oxidation. The diffraction patterns revealed a crystal structure of the CrN phase in the CrAlN coating, while a faint diffraction peak of Cu was additionally detected in the CrAlCuN coating. Both coatings presented a strong (200) preferred orientation with minor (220) and (222) orientations, and the preferential growth was not changed between the coatings [12]. However, the diffraction peaks of the CrN phase shifted toward smaller angles, which may be attributed mainly to the Cu doping: Cu, as a 3d transition metal element, possibly changes the defect type and thereby causes lattice distortion [13]. Moreover, the intensity of the diffraction peak was enhanced in the CrAlCuN coating; according to the Scherrer formula, the decreased full width at half maximum of the preferred-orientation diffraction peak indicates a larger grain size. The surface morphology of the CrAlN and CrAlCuN coatings (Figure 4) indicated that the grains had refined and the surface became smoother. Some research has pointed out that the Cu content in such coatings has a limit of about 1%-5%; Cu as a soft phase plays the leading role when this limit is exceeded. In this work, the Cu content was 1.41 at%, in agreement with Pan's research [14-17]. Compared with the increasing trend of the hardness, the bonding strength usually presents the reverse tendency. Table 1 lists the chemical composition of the coatings after oxidation annealing, obtained by surface scanning. A variety of elements were detected, including the matrix elements Fe, Mn, and Ni as well as O, whereas no N was found, in agreement with the phase analysis in Figure 3. This shows that outward diffusion of the matrix elements occurred, which may produce brittle oxides and further cause the film to crack or spall. In Table 2 the same situation occurred, except that N was still preserved in the coating, indicating that although the coating was oxidized, it still kept good thermal stability and played a protective role to a certain degree. Figure 6 shows the oxidation kinetic curves of the coatings. The general trend after oxidation annealing is that the oxidation rate first increases and then decreases with oxidation time; the main reason is that the initial phase of oxidation is dominated by the diffusion of oxygen, before a sufficiently dense metal-oxide protective film has formed. As the oxidation time lengthened, the coating surfaces became covered with a (Cr,Al)2O3 oxide film, which could postpone the oxidation process [18]. However, the continuing diffusion of oxygen weakened the bonding force of the coatings, and cracking or spallation followed. Matrix elements such as Fe and Ni diffused outward during this process, which is consistent with the EDS analysis results. The oxidation weight-gain curve of the CrAlCuN coating is much lower than that of the CrAlN coating.
The main reason is that the primary channels for oxygen diffusion are the grain boundaries and defects [19]. Cu can refine the grain size and reduce the boundary contribution, and it can also change the defect type in the coatings to a certain extent, which reduces the diffusion rate of oxygen in the coatings and slows the oxidation process [20,21].
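The grain-size argument above rests on the Scherrer relation between XRD peak broadening and crystallite size. The sketch below shows that relation; the shape factor K = 0.9, the Cu-Kα wavelength, and the example peak position are common assumptions, not values reported in this work.

```python
# Minimal sketch of the Scherrer crystallite-size estimate from an XRD peak width.
import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)            # peak FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Example: a (200) peak near 2-theta = 43.7 deg with FWHM = 0.4 deg gives ~21 nm.
print(round(scherrer_size_nm(0.4, 43.7), 1))
```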
Conclusion
The phase structure of the CrAlN and CrAlCuN coatings deposited by DC reactive magnetron sputtering consisted basically of CrN and Cu, with a preferred orientation of CrN (200), before oxidation. After oxidation annealing, diffraction peaks of metal oxides appeared in both coatings.
The grain size in the CrAlCuN coating was smaller than in the CrAlN coating, and the CrAlN coating oxidized more severely than the CrAlCuN coating.
The hardness of the coatings increased from 30.17 GPa to 48.37 GPa with the addition of Cu, while the bonding force of the CrAlCuN coating was 8 N, lower than the 16 N of the CrAlN coating.
The oxidation process of the CrAlCuN coating was slower than that of the CrAlN coating, and for both coatings the trend after oxidation annealing was that the oxidation rate first increased and then decreased with oxidation time.
"Materials Science"
] |
The GRAVITY fringe tracker
The GRAVITY instrument has been commissioned on the VLTI during 2016 and is now available to the astronomical community. It is the first optical interferometer capable of observing sources as faint as magnitude 19 in K-band. This is possible thanks to the fringe tracker which compensates the differential piston based on measurements of a brighter off-axis astronomical reference source. The goal of this paper is to document the main developments made in the context of the GRAVITY fringe tracker. This could serve as a basis for future fringe tracking systems. The paper therefore covers all aspects of the fringe tracker, from hardware, to control software and on-sky observations. Special emphasis is placed on the interaction between the group delay controller and the phase delay controller. The group delay control loop is a simple but robust integrator. The phase delay controller is a state-space control loop based on an auto-regressive representation of the atmospheric and vibrational perturbations. A Kalman filter provides optimal determination of the state of the system. The fringe tracker shows good tracking performance on sources with coherent K magnitudes of 11 on the UTs and 9.5 on the ATs. It can track fringes with an SNR level of 1.5 per DIT, limited by photon and background noises. On the ATs, during good seeing conditions, the optical path delay residuals can be as low as 75 nm root mean square. On the UTs, the performance is limited to around 250 nm because of structural vibrations.
Introduction
GRAVITY is an instrument used on the Very Large Telescope Interferometer (VLTI) situated at the Cerro Paranal Observatory. It can combine the light from four telescopes. These telescopes can either be the four Auxiliary Telescopes (ATs, with a primary mirror diameter of 1.8 m) or the four Unit Telescopes (UTs, with a diameter of 8 m). The specifications of the instrument were derived from the most demanding science case, which was to observe microarcsecond displacements of the light source causing the flares of the supermassive black hole Sgr A* (Genzel et al. 2010, and references therein). Such astrometric measurements are possible with 100 m baselines (Shao & Colavita 1992; Lacour et al. 2014) and were recently demonstrated on-sky by Gravity Collaboration (2018a,b,c).
In its quiescent state, Sgr A* can become fainter than K = 18 mag. Therefore, measuring its position reliably requires an integration time on the order of minutes. To enable such long integration times, it is important to correct the atmosphere effects in real time. The higher-order atmospheric wavefront distortions are compensated for by an adaptive optics (AO) system. However, the AO systems do not sense, and therefore cannot correct, the differential phase between the telescopes. This is the role of the fringe tracker: a phase-referencing target (IRS 16C in the case of Sgr A*) is used as a guide star. In real time, the optical path differences (OPD) between each pair of telescopes are computed, and are used to control the displacement of mirrors on piezoelectric systems. This is the counterpart of the AO system, but at interferometric scale.
To push the comparison a little farther: without fringe tracking, interferometry requires short integration times and deconvolution techniques. This was the time of speckle imaging (Labeyrie 1970;Weigelt 1977), when using bispectrum and closure phases was a good but not really sensitive technique. With fringe tracking, optical interferometry enters a new age: long detector integration times (DIT up to 300 s) give access to faint sources (Kmag of 19) and to the possibility of combining spectral resolution (up to 4000) with milliarcsecond spatial resolution. This is the historical equivalent to the emergence of adaptive optics: it enables a new level of science.
Fringe tracking is not new, however. Small observatories demonstrated the concept, with PTI and CHARA (Berger & Monnier 2006). On the Keck Interferometer (Colavita et al. 2013), comparable astrophysical objectives (Woillez et al. 2012) pushed a similar development for phase referencing (Colavita et al. 2010). Previous projects also existed at the VLTI: at first, the FINITO fringe tracker (Le Bouquin et al. 2008) was used in combination with the AMBER instrument (Petrov et al. 2007). More recently, ESO developed the PRIMA fringe tracker (Delplancke et al. 2006).
However, unlike AO, fringe tracking was not yet mature, and many complex problems had to be investigated for GRAVITY. A first problem was how to deal with limited degrees of freedom (the piston actuators) while many more optical path differences are measured (Menu et al. 2012). A second problem, which does not exist in AO, is that the phase signals are known only modulo 2π. A third problem, partially addressed by the AO community (Petit et al. 2008;Poyneer & Véran 2010), is how to set a correct state space control system that optimally uses a Kalman filter to cancel the vibrations (Choquet et al. 2014). A fourth difficulty is that both group-delay (GD) and phase-delay (PD) tracking need to be used in a control system to keep the best of both. The last hurdle of the project consisted of dealing with multiple closing baselines, some of them resolved. The GRAVITY fringe tracker is now the best and most modern instrument in the field of fringe tracking for optical interferometry. Below we describe the algorithms and mechanisms.
This paper builds upon earlier works from Cassaing et al. (2008), Houairi et al. (2008), Lozi et al. (2011), Menu et al. (2012), and Choquet et al. (2014). The Menu et al. (2012) paper theoretically describes modal control of the phase delay. The Choquet et al. (2014) paper simulates the expected performance of the Kalman controller. The present paper wraps up the series by presenting the final implementation on the VLTI. Section 2 is an overview of the technical implementation of the fringe tracker and is followed by Sect. 3, where the basis of the fringe sensing is defined by the observables. The control algorithm is presented in three sections: Sect. 4 defines the operational modes, Sect. 5 presents the group-delay controller, and Sect. 6 presents the phase-delay controller. Section 7 gives examples and statistics of on-sky observations. Last, Sect. 8 concludes the paper with a discussion of possible improvements to increase the sensitivity and accuracy of the fringe tracker.
Hardware
GRAVITY is equipped with two beam combiners (Perraut et al. 2018) that perform fringe tracking and scientific observations in the K band. GRAVITY has two main operational modes. In the on-axis mode, the light of one star is split 50:50, that is, equal portions of the flux go to the fringe tracking and to the scientific channel. In the off-axis mode, the field is split into two with the help of a roof mirror: one of the two objects serves as fringe-tracking reference, while the scientific channel carries out longer integrations on the typically fainter science target (Pfuhl et al. 2010, 2012, 2014).

Fig. 1. RTD of the SAPHIRA detector. The pixels in the green boxes are read by the fringe tracker and are used for tracking. They correspond to the six baselines, four ABCD outputs, two polarizations, and six wavelength channels. The names GV1 to GV4 correspond to the input beams. The values in yellow to the right correspond to the phase shift in degrees between the different ABCD outputs.

The two beam combiners are based on silica-on-silicon integrated optics (Malbet et al. 1999), optimized for 2 µm observations (Jocou et al. 2010). The beam combination scheme is pair-wise. Each telescope pair is combined using a static ABCD phase modulation. This means that for each of the six baselines, there are four outputs corresponding to a phase shift between the two beams of 0, π/2, π, and 3π/2 radians. The total of 24 outputs of the beam combiner can be seen on the real time display (RTD) of the instrument. The relevant pixels are delimited by the green rectangles in Fig. 1. Within each rectangle, the flux is dispersed in six spectral channels. The 24 lines of rectangles correspond to the 24 outputs, while the two columns of rectangles correspond to the two linear polarizations.
The detector is a HgCdTe avalanche photodiode array called SAPHIRA (Finger et al. 2016). It is running at 909 Hz, 303 Hz, or 97 Hz. It can also run at either low or high gain. The high gain corresponds to a gain of γ = 7 ADU per photodetection with a typical readout noise below σ RON = 5 ADU (≈0.7 e − ) per pixel. The low gain does not amplify the photodetections (γ = 0.5 ADU/e − ) and is only used for very bright targets (K magnitudes below 5 on the UTs).
The flux is processed by a first local control unit (LCU) that yields values of the observables. The LCU is an Artesyn MVME6100 using an MPC7457 PowerPC processor (Kiekebusch et al. 2010). The data are then transmitted to a second LCU by means of a distributed memory system called the reflective memory network (RMN). This second LCU processes the observables to control four tip-tilt-piston mirrors on piezoelectric actuators from Physik Instrumente. Each actuator has its own position sensor and is driven in closed loop. The cutoff frequency of the piezoelectric delay lines is therefore higher than 300 Hz, with a maximum optical path delay of 60 µm (Pfuhl et al. 2014).
Real-time monitoring of the fringe tracking, including live display, is done on a separate Linux workstation connected to the LCUs through the RMN. This workstation processes the data, computes the best-fit control parameters (including the Kalman parameters), and updates the control parameters of the second LCU (Abuter et al. 2016).
Software
The two LCUs use the VxWorks operating system. The computation is done using the so-called tools for advanced control (TAC) framework, which uses the standard C language environment. The TAC processing is triggered synchronously with the SAPHIRA detector, following the predefined frequency of the detector.
The first LCU computes the necessary estimators for the controller: the four fluxes F_i, the six phase delays Φ_i,j, the six group delays Ψ_i,j and their variances (see Sect. 3), as well as four closure phases (Θ_i,j,k); see Sect. 3.6. Inside this LCU, only a few parameters can be changed: the number of DITs over which each of these quantities is averaged.
The default values are presented in Sect. 3.
The second LCU is in charge of controlling the piezo-mirrors for adequate fringe tracking. Figure 2 is a block diagram of the controller algorithm:
- The group-delay control loop, based on an integrator controller, with a direct command to the piezo-actuator (in blue).
- A feed-forward predictor, based on the action to the actuators, to increase the gain of the closed-loop system (in green).
- The phase-delay Kalman controller (in red).
- Two peripheral blocks for searching the fringes and adding a π modulation to the phase delay.
The dual architecture of the controllers is made to obtain both sensitivity and accuracy. In case of high signal-to-noise ratio (S/N), the Kalman filter can determine and predict the state of both atmospheric and vibrational perturbations for the best possible correction. In case of low S/N, the Kalman filter relies on its predictive model, which in the worst case can be as simple as a constant value. In this case, the group-delay controller is still working efficiently and provides coherence instead of fringe tracking.
The third and last software element is on the Linux workstation. It is a python script that runs every 5 s on the last 40 s of data calculated on the first LCU. It computes the parameters that are then used by the second LCU, which does the real time control. It includes the parameters for the predictive control and the best gain for the Kalman filter.
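To illustrate the interplay between the auto-regressive perturbation model and the Kalman update mentioned above, the sketch below implements a deliberately simplified, single-baseline version: a scalar AR(2) disturbance with hand-picked coefficients and noise levels. The real controller is multi-baseline, and its model coefficients and gains are identified from the 40 s data buffer rather than fixed by hand.

```python
# Deliberately simplified, single-baseline sketch of a Kalman predictor for the OPD
# disturbance, modeled as an AR(2) process. All numerical values are assumptions.
import numpy as np

a1, a2 = 1.8, -0.81           # assumed AR(2) coefficients of the disturbance
Q, R = 1e-4, 1e-2             # assumed process and measurement noise variances

A = np.array([[a1, a2], [1.0, 0.0]])   # state transition for (x_t, x_{t-1})
H = np.array([[1.0, 0.0]])             # only the current disturbance is measured

x = np.zeros((2, 1))                   # state estimate
P = np.eye(2)                          # state covariance

def kalman_step(y_measured: float) -> float:
    """One predict/update cycle; returns the predicted disturbance for the next DIT."""
    global x, P
    # Prediction step
    x = A @ x
    P = A @ P @ A.T + Q * np.eye(2)
    # Update step with the measured (pseudo-open-loop) phase residual
    S = H @ P @ H.T + R
    K = P @ H.T / S
    innovation = y_measured - (H @ x)[0, 0]
    x = x + K * innovation
    P = (np.eye(2) - K @ H) @ P
    # The piezo command would be derived from the one-step-ahead prediction
    return float((A @ x)[0, 0])
```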
Visibility extraction
The real-time processing load consists mostly of matrix multiplications. The pixel-to-visibility (P2VM) principle was first used for data reduction (Tatulli et al. 2007) on AMBER. AMBER used spatial modulation for fringe coding, but the formalism was subsequently adapted to work with ABCD beam combiners (Lacour et al. 2008). Using a similar notation, the relation between the incoming electric fields E_i and the outgoing electric fields S_k can be expressed as S_k = Σ_i T_{k,i} E_i, where T_{k,i} is the complex transfer function from the input i of the beam combiner to its output k. In the case of GRAVITY, i_max = 4 and k_max = 24. Averaged over the DIT, the flux on a given output is equal to the average of its instantaneous intensity, q_k = <|S_k|²>, which decomposes into photometric terms in |T_{k,i}|² <|E_i|²> and interferometric cross terms in T_{k,i} T*_{k,j} <E_i E*_j>. The temporal average of the electric field leads to two types of coherence losses. One depends on the optical path inside the beam combiner, the other on the spatial brightness distribution of the astrophysical object. The first, C_{k,i,j}, is intrinsic to the device and has to be calibrated. The second, V_{i,j}, is the reason why we built the interferometer. With this approximation, the pixel fluxes can be written as a matrix product between the visibility-to-pixel matrix (V2PM), whose columns are built from the |T_{k,i}|² and |T_{k,i} T*_{k,j}| C_{k,i,j} terms, and a vector containing the fluxes and the complex coherent fluxes.
Everything related to the transfer function of the instrument is contained in the 10 columns and 24 rows of the V2PM matrix. The V2PM is calibrated during daytime on the internal source of the instrument. It is regularly computed for verification, but has proven to be very stable over several months. It is part of the calibration files needed by the first LCU (Sect. 2.2). The P2VM is the pseudo-inverse matrix of the V2PM: it is obtained by splitting the V2PM into real and imaginary parts and taking the inverse using a singular-value decomposition (SVD).

Fig. 2. Block diagram of the controller algorithm. The group-delay integrator controller is shown in blue, the phase-delay state controller in red, and the actuators predictive model is plotted in green. The group-delay controller is the main controller: it continues to track the fringes even if the instantaneous S/N is too low for phase-delay tracking. The phase-delay state controller is a closed-loop system that determines the atmospheric perturbations X_V. A proportional controller (K) corrects for the effect of the atmosphere. The last block of the group-delay controller, the quantization step, ensures that the group-delay control signal is always a multiple of 2π: the change in OPD caused by the group-delay controller is not seen by the phase-delay controller.

The P2VM is used to retrieve the astrophysical information from the flux observed on the pixels. This information consists of the flux F and the coherent flux Γ; both are computed by multiplying the vector of pixel fluxes q_k,λ by the P2VM. Here we use a slightly different nomenclature, which is kept hereafter: q_k,λ = <|S_k,λ|²> is the number of photons observed over one DIT of the fringe tracker on a given pixel, and F_i,λ = <|E_i,λ|²> is the energy per DIT of the incoming beam i at wavelength λ. Last, the complex coherent flux Γ_i,j,λ corresponds to the visibility before normalization by the flux: it is obtained from the fluxes F_i,λ and the visibilities V_i,j,λ as Γ_i,j,λ = √(F_i,λ F_j,λ) V_i,j,λ, using √(F_i,λ F_j,λ) = √(<|E_i|²><|E_j|²>). All these values are computed in real time for each of the six wavelengths in the K band. In case of split polarization (when the Wollaston is inserted), the calculation is also done independently for both polarizations.
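Numerically, the pixel-to-visibility step amounts to one pseudo-inverse (computed once from the calibration) and one matrix product per DIT and spectral channel. The sketch below uses a random placeholder V2PM with the dimensions quoted in the text (24 outputs, 4 flux columns and 6 complex coherence columns); all names and values are illustrative, not GRAVITY calibration data.

```python
# Minimal sketch of the P2VM extraction for one spectral channel.
# The V2PM here is a random placeholder; in GRAVITY it is calibrated on the internal source.
import numpy as np

rng = np.random.default_rng(1)
v2pm_flux = rng.normal(size=(24, 4))                                   # |T_k,i|^2 terms
v2pm_coh = rng.normal(size=(24, 6)) + 1j * rng.normal(size=(24, 6))    # T_k,i T*_k,j C_k,i,j terms

# Split the complex coherence columns into real and imaginary parts:
# q = V2PM_real @ [F_1..F_4, Re(Gamma_12..34), Im(Gamma_12..34)]
v2pm_real = np.hstack([v2pm_flux, v2pm_coh.real, -v2pm_coh.imag])      # shape (24, 16)

# The P2VM is the pseudo-inverse, obtained through an SVD as described in the text.
p2vm = np.linalg.pinv(v2pm_real)                                       # shape (16, 24)

def pixels_to_visibilities(q_pixels: np.ndarray):
    """Return the four fluxes F_i and six complex coherent fluxes Gamma_ij for one DIT."""
    x = p2vm @ q_pixels
    return x[:4], x[4:10] + 1j * x[10:16]

# Consistency check with a synthetic observation.
f_true = 1.0 + np.abs(rng.normal(size=4))
g_true = rng.normal(size=6) + 1j * rng.normal(size=6)
q = v2pm_real @ np.concatenate([f_true, g_true.real, g_true.imag])
f_rec, g_rec = pixels_to_visibilities(q)
print(np.allclose(f_rec, f_true), np.allclose(g_rec, g_true))   # True True
```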
Flux estimator
The flux F_i is the value extracted after each DIT from the P2VM product, summed over the N_λ = 6 spectral channels: F_i = Σ_λ F_i,λ, where i corresponds to the input beam number.
Phase delay estimator
The phase delay Φ_i,j is derived from the complex coherent flux, after a first step that corrects for the phase curvature caused by dispersion; this correction is only a first-order approximation of the dispersion, which is caused both by the atmosphere and by the fibered differential delay lines (FDDL). Therefore, the corrective term D is a time-variable parameter that depends on the position of the star and the position of the FDDLs. The phase delay is then extracted by coherent addition of the six spectral channels, that is, by taking the argument of the sum of the dispersion-corrected Γ_i,j,λ. It is worth noting that the phase delay Φ_i,j is wrapped: it lies between −π and π. No unwrapping effort is made at this stage.
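A minimal sketch of the phase-delay estimator follows: the dispersion correction is modeled here as a quadratic phase in wavenumber, which is an assumption for illustration; the coherent sum over the six spectral channels and the wrapped argument follow the description above.

```python
# Minimal sketch of the phase-delay estimator for one baseline and one DIT.
# The quadratic form of the dispersion correction is an assumption for illustration.
import numpy as np

wavelengths = np.linspace(2.0e-6, 2.4e-6, 6)      # six K-band channels (approximate)
sigma = 1.0 / wavelengths                          # wavenumbers

def phase_delay(gamma_lambda: np.ndarray, dispersion_d: float) -> float:
    """gamma_lambda: complex coherent flux per spectral channel; returns radians."""
    # Remove the (assumed quadratic) phase curvature caused by dispersion.
    correction = np.exp(-1j * dispersion_d * (sigma - sigma.mean()) ** 2)
    # Coherent addition of the six spectral channels, then the wrapped argument.
    return float(np.angle(np.sum(gamma_lambda * correction)))   # lies in (-pi, pi]
```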
Phase variance estimator
Computing the S/N of the fringes is essential for the fringe tracker. It ensures that the controller does not track on noise. It is also needed for the state machine to know whether the fringes are locked or whether it must start looking for the fringes elsewhere. The S/N is calculated from the variance of the phase delay: S/N_i,j = 1/√Var(Φ_i,j). For each DIT, the variance on each pixel is estimated from the photon and background noise, where σ_sky and q_i,λ,sky are the noise and flux, respectively, observed during sky observations, and γ is the detector gain in ADU per detected photon. The covariance matrix of the real and imaginary parts of the Γ terms is then obtained by propagating the pixel variances through the P2VM. To save processing time, only the diagonal values of the covariance matrix are calculated. They correspond to the variances of the real and imaginary parts of Γ_i,j,λ. This assumes that the covariance between the real and imaginary parts is negligible (a good assumption for an ABCD with phase shifts of 0, π/2, π, and 3π/2). In the end, for simplicity, we estimated the variance of the phase from the variance of the amplitude of the coherent flux averaged over five DITs. The five-DIT average is a way to increase the precision of the calculation. However, this is done at the expense of accuracy: the coherent averaging of the complex coherent flux can add a negative bias to the phase variance estimator.
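A sketch of the fringe S/N estimate is given below. It assumes that the phase variance is approximated by the propagated noise variance of the coherent flux divided by the squared modulus of its five-DIT average; this normalisation is a plausible reading of the description above, not a verbatim transcription of the on-board formula.

```python
# Sketch of the fringe S/N estimate used for tracking decisions (assumed normalisation).
import numpy as np

def fringe_snr(gamma_last5: np.ndarray, var_re: float, var_im: float) -> float:
    """gamma_last5: complex coherent flux of the last five DITs for one baseline;
    var_re, var_im: propagated noise variances of Re(Gamma) and Im(Gamma)."""
    gamma_mean = np.mean(gamma_last5)                       # coherent five-DIT average
    var_phase = (var_re + var_im) / max(abs(gamma_mean) ** 2, 1e-12)
    return 1.0 / np.sqrt(var_phase)                         # S/N = 1 / sqrt(Var(phase))
```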
Group delay estimator
The group delay, Ψ i, j , is also obtained from the complex coherent flux. Because it consists of a differential measure of the phase as a function of wavelength, this estimator is more noisy than the phase delay (Lawson et al. 2000). To increase its S/N, for each one of the N λ = 6 spectral channels, the Γ i, j,λ is first corrected for dispersion, cophased, and averaged over 40 DITs. The result is then used to derive the group delay by multiplying the phasor of consecutive spectral channels: As for the phase delay, the group delay is estimated modulo 2π. In terms of optical path, however, Ψ i, j corresponds to a value R times smaller than Φ i, j , with R = 23, the spectral resolution of the GRAVITY fringe tracker. This is explicit in open-loop operation where the phase delay and group delay are compared for a response to top-hat piezo commands. Because both estimators wrap at 2π, this means that the estimator is valid over a long range equal to 23 times the wavelength. This is the main advantage of the group-delay estimator: to be able to find fringes far away from the central white-light fringe (the fringe of highest contrast). However, because it uses individual spectral channels, the group delay calculated on a single DIT would be extremely noisy. Thus the 40 DIT summation is a way to increase the S/N of the group delay, at the cost of losing response time. In the end, the group-delay estimator is a reliable, but slow, estimator of the optical path difference.
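As an illustration of this estimator, the sketch below computes a group delay from a dispersion-corrected, cophased, 40-DIT-averaged coherent flux by multiplying the phasors of consecutive spectral channels. The exact normalisation and sign conventions of the on-line implementation may differ; all names are illustrative.

```python
import numpy as np

def group_delay(gamma_avg):
    """Group delay (radians, modulo 2*pi) from the 40-DIT-averaged coherent flux.

    gamma_avg : complex array of shape (6, N_lambda) -- one row per baseline,
                one column per spectral channel (N_lambda = 6 in the K band).
    The phasors of consecutive spectral channels are multiplied and summed,
    so the argument of the result measures the phase slope with wavelength.
    """
    cross = gamma_avg[:, 1:] * np.conj(gamma_avg[:, :-1])  # consecutive-channel products
    return np.angle(np.sum(cross, axis=1))                  # wrapped in [-pi, pi]

# Per the text, the group delay corresponds to an optical path R ~ 23 times
# larger than the phase delay for the same wrapped value of 2*pi.
```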
Closure-phase estimator
The closure phases, Θ i, j,k , are calculated on all four triangles from the coherent flux. Before taking the argument, the bispectra are averaged over 300 DIT (corresponding to 330 ms at the fastest 909 Hz sampling rate). Closure phases are estimated both from the phase delay and from the group delay. They are observables modulo 2π; no unwrapping is attempted.
OPD-state space
A main difficulty for the fringe tracker is dealing with the different dimensions of the vectors involved. The number of phase observables is six, the number of delay lines is four, and the number of degrees of freedom is three. In Menu et al. (2012), we proposed an R 3 modal-state space orthogonal to the piston space. However, as also mentioned in Menu et al. (2012), this modal control has an important drawback: it cannot work in a degraded mode where one or more telescopes are missing. Instead, the GRAVITY controller uses the OPD-state space.
In some sense, the implemented fringe tracker is a downgraded version of the state controller proposed in Menu et al. (2012). However, it facilitates managing flux drop-out as well as working with a reduced number of baselines.
Reference vectors
For the system to work properly, the OPD-state space must be colinear to the piston space. However, because the astronomical object is not necessarily a point source, the closure phases are not necessarily zero. Therefore, the OPD component orthogonal to the piston space must be removed from the measurement. This is done by subtracting a reference position, or set point, which is computed from the closure-phase estimators Θ PD i, j,k and Θ GD i, j,k . Then, the error terms, that is, the differences between the measured OPD and the set points, are colinear to the piston space.
However, the devil is in the details. There are four closure phases, and only three can be used. The noisiest closure phase is therefore discarded. The three other closure phases are applied on the three edges of the triangle forming the noisiest closure phase. The reference vector is therefore defined as in Eq. (19), depending on which triangle has the lowest S/N (from left to right, the 123, 124, 134, or 234 triangle). The reference values for the group delay are calculated similarly (Eq. (20)). The closure phases change as a function of time, so the reference position adapts to any change in the phase closures. The closure phase is smoothed over a long enough time (300 DIT) to avoid adding additional noise. However, the choice of which triangle is the noisiest is made only once between each scientific frame, to avoid sharp jumps in the reference vector over the integration time of the science detector.
This reference scheme works most of the time. However, problems arise in two specific instances. First, when the object is so highly resolved that the closure phases used contain a baseline with zero visibility: if that happens, an undefined reference value is applied to a perfectly sane baseline and the system can diverge. Second, if the fringe tracker is tracking on two unconnected baselines (e.g., between telescopes 1-2 and 3-4), the closure phases are undefined, and using their values would mean losing one of the two locked baselines. To resolve these two problems, the closure phases are modified as follows: if any of the baselines ij, jk, or ik has an S/N below the value S/N GD threshold , then the Θ PD i, j,k used in Eq. (19) is a fixed value, and Θ GD i, j,k = 0 is used in Eq. (20). The difference in treatment between the group and phase delay arises because the default group-delay tracking must be zero, while the default phase-delay tracking can be any constant value.
Transfer matrices
After the reference values are subtracted from the OPD, we can freely project the data into piston-state space as well as back to the OPD-state space. Hereafter, we use the same nomenclature as in Menu et al. (2012). P corresponds to the four-dimension piston-state vector, while OPD corresponds to the six-dimension OPD-state vector. The matrix M corresponds to the conversion between piston and optical path difference, OPD = M P. The conversion OPD → P is ill-constrained, however: the rank of matrix M is 3, not 4. This is because the global piston cannot be obtained from the differences in the optical path. Nevertheless, we can define a pseudo-inverse matrix M † , where † denotes the pseudo-inverse operator.
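The piston-to-OPD conversion and its pseudo-inverse can be made concrete with a few lines of code. The sketch below assumes the sign convention OPD ij = P j − P i and the baseline ordering 12, 13, 14, 23, 24, 34; both are illustrative choices, not necessarily the instrument conventions.

```python
import numpy as np

# Baselines in the order 12, 13, 14, 23, 24, 34 (assumed ordering).
BASELINES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

# M converts the 4-component piston vector P into the 6-component OPD vector.
M = np.zeros((6, 4))
for b, (i, j) in enumerate(BASELINES):
    M[b, j] = 1.0    # assumed sign convention OPD_ij = P_j - P_i
    M[b, i] = -1.0

# M has rank 3: the global piston is invisible in the OPDs.
print(np.linalg.matrix_rank(M))        # -> 3

# Pseudo-inverse used for the (ill-constrained) conversion OPD -> P.
M_dagger = np.linalg.pinv(M)
```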
Thresholds and S/N management
The system uses two distinct thresholds. A first threshold, S/N GD threshold , disables the baselines where the S/N is too low to make the baseline useful. This is the case when the fringes are not yet found, when the fringes are suddenly lost, or when the astronomical target is so highly resolved that the spatial coherence is close to zero. To detect these events, the S/N GD threshold is compared to a moving average of the phase-delay variance estimator, Var(Φ i, j ) 40 DIT . This group-delay threshold must be adapted to the target: it must be high enough to ensure that the fringe tracker does not track on side-lobes, but low enough to detect the fringes.
A second threshold, S/N PD threshold , is used solely for the phase-tracking controller. It disables the tracking on a given telescope in case of a rapid S/N drop-off. The main purpose of this threshold is to catch a drop in flux injection (caused by external tip-tilt), with the expectation that the S/N will increase again soon thereafter. Typically, S/N PD threshold = 1.5 and S/N PD threshold < S/N GD threshold . Hence, we defined two matrices I that convert the OPD-space error into an error vector of the same dimension that is colinear to the piston space. The only difference between the two definitions is in the pseudo-inverse operator † . In both equations, the 6 × 6 weighting matrix W distributes the weights among the different OPDs. This step is important to remove the risk of tracking on noise: if the variance of the phase reaches the threshold, the pseudo-inverse discards that baseline in its calculation. The pseudo-inversion is done using an SVD. The idea behind this decomposition is that U and V are two invertible orthonormal matrices (the left-singular and right-singular eigenvectors, respectively): I = UU ⊤ and I = VV ⊤ , where I is the identity matrix. S is a square diagonal matrix whose diagonal values correspond to the square roots of the eigenvalues: S = diag(s 1 , s 2 , s 3 , 0). For four-telescope operation, three eigenvalues are non-zero. The number of non-zero eigenvalues decreases when the system cannot track all telescopes. Switching from searching to tracking depends only on the rank of the I 4 GD matrix. If GRAVITY is running in a degraded mode, the TRACKING transition can happen for a rank of 2 or even 1.
The pseudo-inverse of matrix S is calculated differently for the group-delay and the phase-delay control loops. As a result, the two matrices I are calculated for each DIT from a new SVD; the difference between the two is that the eigenvalues in I 6 PD are weighted down when they are below a value equal to |S/N PD threshold | 2 .
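Because the defining equations (Eqs. (30)-(32)) are not reproduced here, the sketch below shows only one plausible construction consistent with the description above: a weighted least-squares projection of the OPD error onto the piston space and back, pseudo-inverted with an SVD whose small eigenvalues are either discarded (group-delay flavour) or weighted down (phase-delay flavour). It should not be read as the actual on-line definition.

```python
import numpy as np

def weighted_projector(M, W, pd_threshold=None):
    """Plausible sketch of the I matrices (not the actual Eqs. (30)-(32)).

    M : (6, 4) piston-to-OPD matrix.
    W : (6, 6) diagonal weighting matrix built from the inverse phase variances.
    pd_threshold : if given, small eigenvalues are weighted down (phase-delay
                   flavour); otherwise they are simply discarded (group-delay flavour).
    """
    A = M.T @ W @ M                           # 4 x 4, rank <= 3 (global piston unseen)
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    for k, val in enumerate(s):
        if pd_threshold is None:
            if val > 1e-10 * s[0]:            # group delay: discard (near-)zero eigenvalues
                s_inv[k] = 1.0 / val
        else:                                 # phase delay: weight down small eigenvalues
            s_inv[k] = val / (val**2 + pd_threshold**2)
    A_dagger = Vt.T @ np.diag(s_inv) @ U.T
    return M @ A_dagger @ M.T @ W             # 6 x 6 projector applied to the OPD error
```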
State machine
The rank of matrix I 6 GD (the number of non-zero eigenvalues) drives the decision-making of the state machine. For four-telescope operation, a rank of 3 means that the positions of the delay lines on all the telescopes are constrained. When only three telescopes are tracked, the rank is 2. The rank is 1 for two linked telescopes.
The state machine (Fig. 3) therefore has only three states: IDLE, SEARCHING, and TRACKING. When the operator starts the fringe tracker, it switches to the SEARCHING state. As soon as the rank of the I 6 GD matrix is 3, the fringe tracker transitions to the TRACKING state. When the rank of the matrix decreases and remains low for a period of 1 s or more, the system automatically switches back to SEARCHING mode. In both states, the group-delay and phase-delay controllers are running. This means that whether in SEARCHING or TRACKING state, the system still tracks the fringes on the baselines with sufficient S/N. The feedback signal Ψ is enhanced by averaging over 40 DITs (Eq. (15)). The control algorithm of the group delay is presented in Fig. 4. It generates a control signal u GD whose unit is the radian of phase delay, and it is made of seven distinct blocks that are described in the following section. The input to the controller is the vector of measured group delays over the six baselines.
Group-delay block diagram
The first block is a comparator that extracts the error between a reference vector (Ref Ψ ) and the measured group delay. The logic would be to have all six setpoints equal to zero to track on the white-light fringes. However, as explained in Sect. 4.2, this is not possible in the presence of non-zero group-delay closure phase Θ GD . The use of a setpoint vector as defined by Eq. (20) therefore ensures that all baselines track the fringes with the highest contrast that do not contradict each other: The second block is there because the phase measurement is only known modulo 2π. Because ε Ψ,n can have any value, this block adds or subtracts an integer number of 2π to ensure that the error is between −π and π.
The third block uses the I 6 GD matrix to weight the errors between the different baselines. After applying that matrix, the OPD error vector is strictly, in a mathematical sense, colinear to the piston space. Moreover, the error on any baseline with no fringes is either estimated from the other baselines or set to zero. The fourth block is a threshold function that quenches the gain of the control loop if the absolute value of ε Ψ,n is lower than 2π/R. This value corresponds to an OPD of one wavelength, meaning the group-delay controller cannot converge to an accuracy better than 2.2 µm. This is necessary to let the phase-delay controller track within a fringe. The fifth block is the integrator. In the time domain, it writes u GD,n = u GD,n−1 + G GD ε Ψ,n .
The same controller gain G GD is applied to all baselines.

Fig. 5. Upper panel: signal from the modulation block, in radians (the Volts-to-radians gain is not accounted for here). At each science exposure (here 5 s), the u modulation values change to make π offsets between the baselines. The signal repeats every four science exposures so that each baseline is observed with as many +π/2 as −π/2 offsets. Lower panel: modified sawtooth signal used to search for fringes when the fringe tracker is in the SEARCHING state. Here the search is made on all telescopes, meaning that the rank of matrix I 6 GD is 0.
The response of the group-delay estimator is mostly a pure delay caused by the moving average over 40 DITs, as stated by Eq. (15) in Sect. 3.5. The −3 dB cutoff frequency therefore depends on the sampling rate of the fringe tracker: it is 13, 4.5, and 1.5 Hz for sampling rates of 909, 303, and 97 Hz, respectively. The sixth block is the matrix M † , which projects the control signal from OPD space to piston space.
The last block is a quantization function. Practically, it means that when the group-delay controller detects a group-delay error larger than a fringe, it only makes 2π phase-delay jumps until it reaches the reference fringe, without disturbing the long-term phase measurement. The command issued by the group-delay controller is therefore, for each piezo-actuator, the closest value that is a multiple of 2π. Hence, when M † u GD,n is between −π and π, the control signal of the group-delay controller is constant, and the phase-delay loop can work without interference from the group-delay loop. We note that the final control signal (as shown in Fig. 2) also includes the modulation function, the fringe-search function, and the phase-delay control signal.
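For illustration, the seven blocks of Fig. 4 can be condensed into a few lines of Python. The function below is a simplified sketch: the matrices I6_GD and M_dagger, the gain G_GD, and the reference vector are assumed to be given, and all names are illustrative rather than taken from the real-time code.

```python
import numpy as np

R = 23                     # spectral resolution of the GRAVITY fringe tracker
TWO_PI = 2.0 * np.pi

def gd_controller_step(psi_meas, ref_psi, u_prev, I6_GD, M_dagger, G_GD):
    """One iteration of the group-delay controller (sketch of the 7 blocks of Fig. 4)."""
    err = psi_meas - ref_psi                              # block 1: comparator
    err = (err + np.pi) % TWO_PI - np.pi                  # block 2: wrap to [-pi, pi]
    err = I6_GD @ err                                     # block 3: baseline weighting
    err = np.where(np.abs(err) < TWO_PI / R, 0.0, err)    # block 4: quench small errors
    u = u_prev + G_GD * err                               # block 5: integrator (OPD space)
    u_piston = M_dagger @ u                               # block 6: OPD -> piston
    u_quant = TWO_PI * np.round(u_piston / TWO_PI)        # block 7: quantize to 2*pi jumps
    return u, u_quant
```

The modulation, fringe-search, and phase-delay signals mentioned above would then be added to the quantized piston command before it is sent to the actuators.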
Modulation function and 2π phase jumps
The rounding of the group-delay command ensures that the same phase is always tracked even though jumps between fringes are required when the white-light fringe is searched for. However, this means that the science detector always records the fringes at the same phase delay, and when the sky is not well subtracted, for example, this can bias the visibility. This can be explained in the case of a perfect ABCD beam combiner. Assuming the recorded fluxes on the four pixels are q A , q B , q C , and q D , the P2VM matrix calculates the raw complex visibility from these four values. When the sky removal on q A is not perfect, an additional flux biases the measurement: q̃ A = q A + ε A . To remove this error term, the solution is to record the fringes with π offsets. This temporal π modulation is added to the group-delay command (Fig. 5). It is not seen by the group-delay control loop because of the quenching block (block 4 in Fig. 4). The command is synchronized with the reset of the science detector and cycles sequentially through the u modulation values. Only with a minimum of 4 DIT for each of the u modulation values can we have, on each baseline, as many exposures with 0 as with π offsets. Therefore the number of science DITs within a GRAVITY exposure is recommended to be a multiple of 4.
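To illustrate how the π offsets cancel a sky-subtraction bias, the toy model below uses an idealised ABCD combiner with the textbook estimator Γ ∝ (q A − q C ) + i(q B − q D ); this is a simplification, not the actual P2VM of the instrument.

```python
import numpy as np

def abcd_visibility(phi, F=100.0, V=0.5, bias_A=0.0):
    """Idealised ABCD combiner: four outputs with 0, pi/2, pi, 3*pi/2 phase shifts."""
    shifts = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    q = F * (1.0 + V * np.cos(phi + shifts))
    q[0] += bias_A                                  # imperfect sky subtraction on pixel A
    return (q[0] - q[2]) + 1j * (q[1] - q[3])       # raw complex visibility

phi0 = 0.7
g0 = abcd_visibility(phi0, bias_A=5.0)              # frame recorded at the 0 offset
gpi = abcd_visibility(phi0 + np.pi, bias_A=5.0)     # frame recorded with the pi offset

# The bias term appears identically in both frames, while the fringe signal flips sign:
# combining the two frames as (g0 - gpi)/2 therefore removes the bias.
print((g0 - gpi) / 2)            # close to the unbiased visibility
print(abcd_visibility(phi0))     # reference without bias
```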
Fringe search
The search is made through a modified sawtooth function of increasing amplitude (u saw ). This function is started when the fringe tracker enters the SEARCHING state and is disabled when the system transitions to the TRACKING state. This is the only difference between the two modes. The same sawtooth function is generated for each piezo-actuator, but with different scaling factors: −2.75, −1.75, 1.25, and 3.25 for the first, second, third, and fourth beam, respectively. The four commands are then multiplied by the kernel of the I 4 GD matrix (its construction involves the 4 × 4 identity matrix I 4 ). This ensures that the baselines with sufficient fringe signal keep tracking and are not modulated by the sawtooth function. When no fringes are found, the kernel is the identity matrix, and each baseline is modulated. This gives the trajectory shown in Fig. 5. An example of fringe search and acquisition is presented in Fig. 6. In this example, at t = 0, the fringes are found and tracked on two baselines: the baseline between telescopes 1 and 3, and the baseline between telescopes 2 and 4. The group delay for the two fringes is zero, and the rank of the I 4 GD matrix is 2. The fringe tracker is therefore in the SEARCHING state. At t = 0.8 s, the fringes are found on baseline 34. The rank of the I 4 GD matrix increases, and the system switches to the TRACKING state. The group delays of baselines 12, 14, 23, and 34 are brought to zero. Signals are later found on all six baselines and the system reaches a nominal tracking state.

Fig. 6. Example of fringe search and acquisition, starting at t = 0 s. From bottom to top, the phases correspond to baselines i, j equal to 12, 13, 14, 23, 24, and 34. At the beginning of this recording, the system only has fringes on two baselines (green and purple). At t = 0.8 s, the system finds fringes on the yellow baseline, and later on all the other baselines. Gray areas correspond to the baselines whose S/N is below the value S/N GD threshold , meaning that no tracking is performed. After t = 1.4 s, the system nominally tracks on all baselines.
Principle
The GRAVITY phase control loop uses both piston-space and OPD-space state vectors. This is the easiest way to properly handle both piezo-actuators and the atmosphere dynamics (Correia et al. 2008). The vibrations and atmospheric perturbations are represented by six OPD-space state vectors labeled together x V . Each vector corresponds to a baseline. The piezo-actuators are characterized by four piston-space state vectors x P , one for each delay line. The phase delay, Φ n , is a vector of measured phases: which results from a linear combination of the state vectors x V,n and x P,n at a time n. The real-time algorithm of the fringe tracker follows a sequence: 1. It predicts the future state of the system from previous state using the equation of state (Sect. 6.2), 2. It uses Kalman filtering to update the state vectors (Sect. 6.5), and 3. It uses the system state to command the piezo-actuators to correct for vibrations and atmospheric effects. (Sect. 6.6). The phase-delay controller is summarized by the block diagram presented in Fig. 7.
Equations of state
The equations of state are given by Eq. (48), where each matrix is three-dimensional. Because we assumed uncorrelated baselines, only the diagonals are populated.

Fig. 7. Block diagram of the phase-delay controller. The two control signals u GD and u PD are summed before they are applied to the actuators. In this control scheme, the state vectors x̂ V are regulated. The control signal u PD is then issued from a matrix multiplication of the state vectors: u PD = K x̂ V .
Here the upper index corresponds to the telescope or baseline numbers. In the block diagram of Fig. 2, the equations of state are written in the frequency domain as transfer functions. The evolution of the system is driven on the one hand by white noise, and on the other hand by the user-controlled voltage u n applied to the piezo-actuators. Because the state is driven by white noise, x V,n has colored noise, as highlighted by Eq. (54). The problem is similar for AO systems, where it has already been mentioned and corrected for (Poyneer & Véran 2010).
In Menu et al. (2012) and Choquet et al. (2014), we have shown that using several autoregressive (AR) models of order 2 in parallel was effective in correcting for both vibration frequencies and atmospheric turbulence. However, practically, two issues made that implementation difficult: i) the determination of the vibration peaks for low S/N data, and ii) the need to change the state-space model when vibrations appear or disappear. Both problems can be technically resolved, but to ensure maximum robustness, we preferred a fixed, well-defined state-space model. The idea is that the states x V do change with time, but the state-space model does not. We therefore used an autoregressive model of order 30. This means that the state-space model corresponds to the 30 last values of the phase delay, with a state-transition matrix of companion form,

A V i, j = ( v i, j 1  v i, j 2  · · ·  v i, j 29  v i, j 30 ; 1 0 · · · 0 0 ; 0 1 · · · 0 0 ; · · · ; 0 0 · · · 1 0 ),

where the first row contains the AR coefficients and the shifted identity below it simply remembers the 29 previous values. The value 30 allows for complexity in the vibrational pattern while characterizing the perturbations with a sufficiently low number of degrees of freedom. It was chosen in light of Kulcsár et al. (2012): they showed that it is a good compromise with respect to other state-space models to correct atmosphere and vibration for the tip-tilt of AO systems. The transfer matrix A i, j V is determined every 5 s on a workstation that is connected to the reflective memory network. The transition matrix A i P is also an autoregressive model, but of order 5. It includes the response function of the full system (image integration time, processing time, inertia of the piezo mirror, etc.), from setting a control voltage u n to measuring a phase delay Φ n . It is measured monthly by injecting a top-hat signal into the system. The matrix of the AR5 model is built similarly from the a i 0 , a i 1 , a i 2 , a i 3 , and a i 4 values, which were obtained experimentally on the fringe tracker and are presented without normalization in Table 1. The same values can be represented in a Bode plot, see Fig. 8. The cutoff frequencies, calculated at a phase of −90°, are around 60 Hz. This is considerably below the bandwidth of the piezo-actuator measured by Pfuhl et al. (2010), who showed a cutoff frequency above 220 Hz. This is caused by the pure delays inside the system: detector integration time, processing time, and data transfer between the different units (LCUs). The static gain of the piezo, G i piezo = Σ 5 k=1 a i k , is also an important property of the piezo-actuators because they are used both in the group- and phase-delay controllers. Practically, to remove the static gain of the control loop, and because u is in units of OPD radians instead of Volts, the a i k values are normalized by the static gain G i piezo .
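The AR(30) disturbance model can be written as a companion matrix acting on the vector of the 30 last phase-delay values. The sketch below is a generic companion-matrix construction and one-step prediction, with placeholder coefficients.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of an AR(p) model x_n = sum_k coeffs[k] * x_{n-1-k} + noise."""
    p = len(coeffs)
    A = np.zeros((p, p))
    A[0, :] = coeffs              # first row: the AR coefficients v_1 ... v_p
    A[1:, :-1] = np.eye(p - 1)    # shifted identity: remember the p-1 previous values
    return A

v = np.random.randn(30) * 0.05    # placeholder AR(30) coefficients for one baseline
A_V = companion(v)                # 30 x 30 state-transition matrix

x = np.random.randn(30)           # state: the 30 last phase-delay values
x_pred = A_V @ x                  # one-step prediction of the disturbance
```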
Observation equation
The observable, Φ n , depends on the state vectors of the piezo-actuators x P and on the state vectors of the atmosphere and vibrations x V . Both contribute to the phase delay. The measurement equation of the phase delay is a linear combination of the two, where w n is a white measurement noise and C V and C P are three-dimensional measurement matrices with two-dimensional matrices on the diagonal for the vibrations and the piezo-actuators, respectively. In the case of a resolved astronomical target, the phase-delay vector Φ n also includes a term that corresponds to the spatial signature of the target on the phase. It appears in the form of a non-zero closure phase (Θ PD ≠ 0) and is included in the phase delay through the term Ref Φ , as defined in Eq. (19).
Parameter identification
The a 1≤k≤5 predictive terms for each of the piezo-actuators are determined offline during dedicated calibration laboratory measurements. This is not the case for the identification of the v 1≤k≤30 AR values. They are calculated on a distinct Linux workstation. The workstation senses in real time the phase delay that passes over the reflective memory network. Every 5 s, it collects the last 40 s of data (36 000 samples at 909 Hz) and computes the pseudo-open-loop phase delay Φ ATM,n by removing the influence of the piezo command and closure phases. This phase delay corresponds to the phase delay produced by the atmosphere and the vibrations only, from which it calculates the differences ∆Φ ATM,n between consecutive values. The ∆Φ ATM,n differential phase values are then processed baseline after baseline to derive the 29 p 1≤i≤29 AR parameters that best represent the data. This uses the Python toolbox "Time Series Analysis". The generated parameters ensure stationarity (Jones 1980) and thus provide stability to the closed-loop algorithm. Last, the v 1≤i≤30 values are determined from the p 1≤i≤29 parameters, after which the matrices A V and A P are sent over the RMN to the real-time LCU to adjust the parameters of the phase-delay controller.
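For reference, a minimal least-squares identification of AR parameters from the pseudo-open-loop phase differences is sketched below. The operational system uses a dedicated Python time-series toolbox and enforces stationarity of the fitted model, which this naive sketch does not.

```python
import numpy as np

def fit_ar(x, order):
    """Naive least-squares fit of an AR(order) model to the series x."""
    # Design matrix of lagged values: x_n ~ sum_k p_k * x_{n-k}
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    p, *_ = np.linalg.lstsq(X, y, rcond=None)
    return p

# delta_phi: 40 s of pseudo-open-loop phase differences on one baseline (placeholder data)
delta_phi = np.random.randn(36000) * 0.1
p = fit_ar(delta_phi, 29)          # the 29 AR parameters mentioned in the text
```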
Asymptotic Kalman filter
The vectors x̂ V,n|n−1 correspond to our best estimate of the state of the vibrations at a moment n from all the measurements available up to time n − 1. They can be estimated from x̂ V,n−1 according to the equation of state in Eq. (48). However, the goal of the Kalman filter is to update this estimate from the error between the new observable and this new estimate. This error, ε Φ,n , can be derived from the measurement process in Eq. (60), after wrapping around 0 (between −π and π) and baseline weighting, where %2π corresponds to modulo 2π. The I 6 PD matrix is here for two purposes. The first is to derive the errors on low S/N baselines from the baselines with higher S/N. The second is to ensure that the low S/N data are either weighted down or discarded through the weighted pseudo-inversion in Eq. (32). The state estimator x̂ V,n is finally updated through an integrator. The gain G PD is not a scalar but a three-dimensional matrix. The correct estimate of G PD is the basis of Kalman filtering. In GRAVITY, it is not identified at each DIT, but every 5 s on the sensing Kalman workstation. It is therefore obtained from the asymptotic Kalman equations, calculated from the two covariance matrices of the measurement noise Σ w and the steady-state error Σ ∞ (Poyneer & Véran 2010; Menu et al. 2012). The steady-state covariance matrix can be obtained from the algebraic Riccati equation, the vibration input noise, and the vibration and atmospheric noise Σ v . The noise characteristics (Σ v , Σ w ) make the Kalman filter optimally adapted to the average noise on the system, but not to instantaneous noise variations. Adaptability is the role of the I 6 PD matrix in Eq. (68).
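The asymptotic Kalman gain can be computed with standard tools, for instance from the discrete algebraic Riccati equation available in SciPy. The model below is a toy AR(2) placeholder (the on-line model is AR(30) per baseline), and the noise covariances are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def asymptotic_kalman_gain(A, C, Sigma_v, Sigma_w):
    """Steady-state Kalman gain for x_{n+1} = A x_n + v_n, y_n = C x_n + w_n."""
    Sigma_inf = solve_discrete_are(A.T, C.T, Sigma_v, Sigma_w)   # steady-state covariance
    return Sigma_inf @ C.T @ np.linalg.inv(C @ Sigma_inf @ C.T + Sigma_w)

# Toy AR(2) disturbance model in companion form, stable (double pole at 0.8).
A = np.array([[1.6, -0.64],
              [1.0,  0.0]])
C = np.array([[1.0, 0.0]])          # the observable sees the newest state value
G_PD = asymptotic_kalman_gain(A, C, Sigma_v=1e-4 * np.eye(2), Sigma_w=1e-2 * np.eye(1))
```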
Determination of the control signal
To use the Kalman filter as a controller, an optimal command is needed. The purpose of this section is to determine the control signal u PD,n from the atmospheric state vectors: u PD,n = Kx V . The task of this signal is twofold. First, it must cancel the atmospheric perturbations on the measured phase delay (Φ ATM,n ). Second, it must ensure that the phase delay converges to the group-delay control signal u GD,n + u modulation + u search,n .
The first task uses the predictive power of the equation of state, where A V raised to the power n DIT propagates the state over n DIT samples. The integer n DIT is there to account for the delay between the control signal and its effect on the phase delay. Ideally, n DIT = 1, but because of the pure delay in the open-loop transfer function, it must be higher. This pure delay depends on the frequency of the fringe tracker. We currently use n DIT = 3 for the 909 Hz frequency and n DIT = 2 for the lower frequencies.
The second task is achieved through the integrator in Eq. (69). This integrator updates the state vector of the atmospheric perturbations to minimize the error between estimation and observation. It causes the quantity ε Φ,n to converge to zero: ε Φ,n→∞ = 0. Under a steady-state assumption, Eq. (67) gives C P x P,n→∞ = u n→∞ because the piezo gains are normalized. From Eq. (72) we also have, under the same assumption, C V · x̂ V,n→∞ = M · u PD,n→∞ because the first row of A V is normalized. Hence we obtain the desired behaviour: the phase delay converges to the group-delay control signal.
Why not a simpler state controller?
The block diagram presented in Fig. 2 shows the complexity of the fringe tracker. A simpler version was used during the first commissioning in 2016. The phase- and group-delay controllers were both different. First, the group-delay control signal was not used to directly command the piezo-actuators; instead, it was used as a setpoint for the phase-delay controller. Second, the phase-delay controller was a proportional-integral controller. It was efficient and robust and a good commissioning tool. However, at low S/N, its performance was not adequate. The reasons were that
-the phase-delay estimator is noisy. The proportional controller (or worse, a derivative term) directly injected the noise back into the control signal. This was especially problematic during flux dropouts, when the S/N can be close to zero; and that
-the phase delay is modulo 2π. It needs to be unwrapped with respect to a prediction. Using the setpoint as the prediction resulted in many fringe jumps during poor atmospheric conditions.
We therefore decided to have the group delay directly command the piezo, skipping the phase-delay controller. In parallel to the group-delay controller, we included a Kalman filter on the phase-delay feedback and used the predictive model to unwrap the phase (the vector ε Φ,n ). This gave the block diagram in Fig. 7.
Operation
The simplicity of operating the fringe tracker lies in the simplicity of the state machine. With only three states, the operator interactions are limited to the transitions between IDLE and SEARCHING, and back to IDLE. When the fringes are detected, the system automatically switches to TRACKING and the instrument then starts recording data. However, there are two free parameters that may require the intervention of the operator: the S/N GD threshold and S/N PD threshold thresholds. An operational error could be, for example, to set S/N GD threshold too low and risk having the fringe tracker track on the second lobe of a fringe packet.
When the thresholds are correctly set, the system is made to be fully autonomous, and able to deal with any glitch of the VLTI. For example, Fig. 9 shows the fringe tracker losing telescope AT2 and how it recovered. The figures are the same plots as those available to the support astronomers during real-time GRAVITY operation. The data were captured and plotted at t = 0 s. At t = −6.39 s, AT2 loses its pointing target, and the injected flux in the fiber drops. Immediately, the S/N level drops below the purple line, and the FT discontinues tracking on the three baselines that include AT2. At this point, the system is still in the TRACKING state. At t = −5.39 s, after a time delay of 1 s, the system switches to the SEARCHING state, and the VLTI delay lines start following the sawtooth function. At t = −3.26 s, the system recovers the fringes on the AT2-AT1 baseline and starts centering them. At this point, the rank of the I 4 GD matrix is back to 3 and the fringe tracker switches back to the TRACKING state. At t = −3.12 s, the fringes are detected on all six baselines, and the system again tracks nominally.

Fig. 9. Observation of star GJ 65 A on 21 November 2018. This example shows the case of a glitch on the AT2 adaptive optics system that resulted in the loss followed by the recovery of the fringes by the fringe tracker. Upper two panels: fringe tracker phase Φ i, j and group delay Ψ i, j estimators. The color lines correspond to each of the six baselines. The π/2 phase jumps at −7 and −1.5 s are normal and correspond to the modulation synchronized with the 5 s DIT science camera (the high spectral resolution detector). The gray areas correspond to detection of low S/N fringes by the controller. Lower left panels: S/N for each of the baselines, calculated as the inverse of the square root of the phase variance (1/√Var(Φ i, j )). The horizontal red lines correspond to the group-delay threshold (S/N GD threshold ). If the black line dips below the red line, the FT stops tracking on that baseline. The horizontal purple lines correspond to the phase-delay threshold (S/N PD threshold ). Lower center panel: flux F i . Similarly, the red line is a threshold made by a moving average of all fluxes, which is used to detect the loss of a telescope. Lower right panels: commands to the piezo-actuators and VLTI delay line actuators. The piezo-actuators take care of the fast control signal, while the VLTI delay lines are used to offload and to search for fringes over large distances.
In summary, the complexity of dealing with multiple baselines is hidden behind the I 6 GD and I 6 PD matrices presented in Sect. 4.4. From the user's point of view, the fringe tracker transitions from SEARCHING to TRACKING state, but the engine behind the scene does not change the way it operates.
Sensitivity
The sensitivity is mostly a question of having enough photons on the detector to generate a feedback signal for the fringe tracker. In Fig. 10 we plot the flux versus magnitude of calibrators observed with GRAVITY during a period covering June 2017 to November 2018. The selection of the files with a tracking ratio above 80% leads to a total of 1117 exposures on 473 distinct calibrators. The orange dots correspond to the 814 AT observations. The others correspond to the MACAO (visible AO) and CIAO (infrared AO) UT observations. The solid lines correspond to a total transmission of 1% from the telescope to the fringe-tracker detector. The dashed lines correspond to the theoretical detector noise, scaled by N p , with N p the total number of pixels divided by the number of telescopes.

Fig. 10. Transmission plot, i.e., photons detected on the GRAVITY FT receiver, per telescope and per second, as a function of the K-band magnitude (tracking ratio >80%). The magnitudes of the targets are obtained from SIMBAD. Most of the observations are made on-axis, meaning that 50% of the flux is lost because of the beam splitter. However, the many K = 9.7 mag CIAO observations are taken off-axis, hence the higher flux. Targets observed with the ATs can be as faint as 10 mag. Based on this plot, it is clear that because the flux observed with the UTs is more than 10 times higher, observations will be possible up to K = 13 mag.
On the ATs, the faintest target observed so far was TYC 5058-927-1, a star with K = 9.4 mag. It was observed in the night of 5 April 2017, when the atmospheric conditions were excellent: seeing down to 0.4 arcsec and coherence time up to 12 ms. The observations were made in single-field mode, where only half the flux is sent to the fringe tracker. The fringe tracker efficiently tracked the fringes throughout the entire exposure time, at a frequency of 97 Hz, with OPD residuals between 250 and 350 nm.
Technically, if we were to extrapolate to the UTs the sensitivity observed on the ATs, we should be able to track stars of K magnitudes up to 12.5. However, the faintest star observed with the fringe tracker on the UTs so far is TYC 7504-160-1, a star with K = 11.1 mag. It was observed in the night of 2 July 2017 during good atmospheric conditions: seeing between 0.4 and 0.6 arcsec, coherence time between 4 and 7 ms. The frequency was 303 Hz, with OPD residuals between 350 and 400 nm. The relatively low UT sensitivity is still not understood. One explanation could be that the 97 Hz integration time cannot be used: the fringe contrast is attenuated by the vibrations of the UT structure. Another explanation could come from moderate AO performance. With a low Strehl ratio, difficulties in injecting the light into the fibers decrease the number of available photons, but also create flux drops that decrease the visibility and complicate the fringe tracking. One last possibility is a selection effect caused by the lower availability of the UTs.
Signal-to-noise ratio
The limit for fringe tracking could in theory be an S/N of 1 per coherence time. Below that number, the phase varies faster than our ability to measure it: our knowledge of the phase decreases with time, hindering the convergence of the fringe tracker. However, in practice, we observed that an S/N of 1.5 per DIT is required to keep the fringes in the coherence envelope. The signal is proportional to the number of coherent photons received. The noise is created by quantum (photon) noise on the one hand and background noise on the other. At 909, 303, and 97 Hz, we measured during sky observations a standard deviation of σ sky = 4, 5, and 8 ADU per pixel, respectively. These values come from the quadratic sum of the noise caused by the scattered metrology light (7 ADU at 97 Hz), the sky and environmental background (6 ADU at 97 Hz), and the read-out noise (4 ADU).
The detector variance observed during sky observations is used to compute the real-time S/N shown in the lower left panel of Fig. 9. In Fig. 11, for a dataset covering the seven months between April and November, we have plotted this S/N as a function of the measured flux. The three solid lines correspond to the theoretical S/N calculated from σ sky and photon noise. The measured values lie below these theoretical lines because of a loss of visibility contrast. This can be caused either by an imbalance of the flux between the different telescopes, or by OPD variations within a DIT of the fringe-tracker detector.
The vertical lines correspond to the flux of a star of K magnitude 8, 12, and 16 observed with the UTs under the assumption of 1% throughput. The drop at low flux of the theoretical S/N curves corresponds to the effect of the observed sky noise. This noise is mostly detector noise at high frequency, and a combination of background and metrology noise at low frequency. The dotted lines are theoretical computations of the noise assuming only photon noise. Under this assumption, 1% throughput, and 100% visibilities, the UT sensitivity could technically reach magnitudes up to K = 16 mag.
OPD residuals and S/N
The fringe tracker performance degrades when the limiting sensitivity is approached. This is partly because the phase-delay control-loop gain decreases at low S/N because of the weighted inversion of matrix S † PD in Eq. (32). It is also partly caused by the difficulty of estimating the correct fringe-tracking state from a noisy measurement. This decrease in performance is shown in Fig. 12 for the same dataset as presented in Fig. 11. Concretely, below an S/N of 4, the residuals are above 300 nm. These residuals include the variance caused by the measurement noise, which is equal to λ/(2π S/N): ≈90 nm for an S/N of 4. The fringe tracker does not correct the phase down to that level, however: the residuals remain a factor of 2 or 3 above that value. At an even lower S/N (< 3), the Kalman filter has problems identifying the state of the atmosphere and sometimes cannot update it fast enough. Fringe tracking is then only possible when the atmosphere varies slowly, that is, when the atmospheric conditions are best.

Fig. 12. OPD residuals as a function of the fringe tracker S/N. Above an S/N of 10, the best fringe-tracking residuals are limited to a constant value around 80 nm, caused by the intrinsic bandpass of the fringe tracker hardware (dashed line). At lower S/N, the fringe tracker performance decreases because it is difficult to predict and track the evolution of the perturbations. The phase error caused by readout noise on the phase goes as λ/(2π S/N) (lower dotted line). The actual performance of the fringe tracker shows that the degradation is more likely a factor of 2 or 3 above that limit.

OPD residuals and τ 0

At high S/N (≥ 10), the accuracy of the fringe tracking depends on several environmental parameters: vibrations, wind speed, seeing conditions, and coherence time (τ 0 ). Among wind speed, seeing, and coherence time, Lacour et al. (2018) showed that the strongest correlation is observed with τ 0 . Using the same dataset as in Fig. 10 and using only the calibrators observed at 909 Hz, we plot in Fig. 13 the OPD residuals as a function of the coherence time as observed by the ESO site monitor at 500 nm. On average, the ATs perform better, with a median residual of 150 nm. Under the best conditions, the OPD residuals can be as low as 75 nm. The UT residuals are higher, with a median value of 250 nm and a minimum at 180 nm.
Depending on the seeing conditions, the fringe tracker shows different limitations. In the worst 20% of atmospheric conditions (τ 0 < 3 ms), the UT and AT performances are limited by the coherence time of the atmosphere. Under these conditions, 20% of all observations have OPD residuals above 380 nm. The consequence is that the jitter of the phase significantly affects the visibility in the science channel with long integration times. This contrast loss can be directly estimated from the variance of the phase according to the relation V residuals = exp(−σ Φ 2 /2). With 380 nm rms jitter (σ Φ ≈ 1.1 rad), the contrast of the fringes on the science detector will only be at V residuals ≈ 55% of its maximum.
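The quoted 55% contrast can be checked numerically from the phase-jitter attenuation relation above:

```python
import numpy as np

opd_rms = 380e-9                        # OPD residuals (m)
lam = 2.2e-6                            # K-band wavelength (m)
sigma_phi = 2 * np.pi * opd_rms / lam   # ~1.1 rad of phase jitter
print(np.exp(-sigma_phi**2 / 2))        # ~0.55: fringe contrast at ~55% of its maximum
```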
During the best 20% of atmospheric conditions (τ 0 > 7 ms), however, the fringe tracker can reach very low residuals. On the ATs, 80% of all observations are below 140 nm (V residuals > 93%). However, on the UTs, the environmental vibrations cause the residuals to remain between 220 and 320 nm. The explanation is the higher level of vibrations observed on the UTs.
Power spectral density
In Figs. 14 and 15 we show the power spectral density of the OPD for two astronomical objects: IRS 16C observed with UTs, and GJ 65 observed with ATs. The observation conditions are listed in Table 2. The IRS 16C galactic center target was observed with the CIAO off-axis AOs. The conditions were good to excellent, and the residual OPD was around 220 nm over the entire duration of the observations. The GJ 65 binary was also observed in good seeing conditions, without AO, and OPD residuals of 140 nm.
To compute the power spectral density, the phase delay Φ n is unwrapped and the π/4 modulation function removed. The pseudo-open-loop phase delay Φ ATM,n is then computed from Eq. (63) using the piezo transfer function estimated during calibration. In Figs. 14 and 15, the six upper plots correspond to the square root of the power spectral density. The black curves correspond to the measured OPD estimated from the phase delay Φ, and the red curves show the power spectrum of the reconstructed atmosphere Φ ATM,n . The atmospheric power spectrum (which also includes the vibrations) shows, as expected, the prevalence of the low frequencies in the atmospheric perturbations. Below 10 Hz, the power spectral density fits the relation PSD Φ ATM,n (f < 10 Hz) = 1 × f −2 µm 2 Hz −1 very well, which is not far from, but different from, the power of −5/3 of a Kolmogorov-type atmosphere. After fringe-tracking correction, the residual OPDs are attenuated below the spectral density PSD Φ n (f < 10 Hz) ≤ 0.001 µm 2 Hz −1 .
The fringe tracker is therefore clearly a high-pass filter. The cutoff frequency of the fringe tracker correction is not well defined because it depends on the efficiency of the Kalman filter and the accuracy with which the equations of state reflect the system. On the UTs, the 3 dB cutoff frequency is on the order of 10 Hz. On the ATs, maybe because the vibrations are less frequent and the predictive model more accurate, the fringe tracker performance has a higher cutoff frequency at 30 Hz. This 30 Hz shows that the system is optimized because it is close to the cutoff frequency of the open-loop system (≈60 Hz in Fig. 8).
Have we reached the ultimate sensitivity?
It is often assumed that the sensitivity of an optical interferometer must decrease as a function of N, the number of telescopes. This is not necessarily true. The limiting sensitivity is reached when each of the degrees of freedom of the fringe tracker reaches a variance of one during one coherence time of the atmosphere, where each of the degrees of freedom corresponds to a non-zero eigenvalue of matrix I (Sect. 4.4). Therefore the threshold on the phase-delay control loop (S/N PD threshold ) is applied on the eigenvalues in Eq. (30). For N ≫ 1, the signal grows like the flux (∝ N), the number of degrees of freedom like N, and the number of pixels like the number of baselines, N 2 . For an increasing number of telescopes, the detector noise therefore becomes ever stronger. However, in the case of a system without background and detector noise, the S/N on the eigenvalues no longer depends on N (Woillez et al. 2017). The ultimate sensitivity of the fringe tracker can then be established at one photon per coherence time and per telescope, regardless of the number of telescopes. For the VLTI, assuming 8 m telescopes, a coherence time of 10 ms, a throughput of 1%, and using the full K band, the ultimate sensitivity is Kmag = 17.5. Even assuming the need for an S/N of 1.5 per baseline and per coherence time, we should be able to reach a Kmag of 16 (Fig. 11).
What can be done? There are several paths forward to reach this magnitude. The first is to decrease the background and detector noise. The sky brightness in the K band at Paranal is about 12.8 mag per square arcsecond. Therefore, the fraction of sky background light entering the 60 mas single-mode fiber is almost negligible (Kmag sky ≤ 19). A more important light emitter is the VLTI thermal background because of the ≈75% absorption of the optical surfaces. All of these surfaces are typically at about T = 283 K to 288 K and contribute the majority of the background light. Regarding the detector noise, even if the SAPHIRA detector has a readout noise below 1 e − , the number of pixels used by the fringe tracker per telescope is 36 (72 in split polarization). For faint objects, this noise can therefore dominate. Solutions to use fewer pixels have been proposed (Petrov et al. 2014, 2016) and could lead to a better sensitivity. The development of infrared detectors could also be a promising path forward. The second path is to increase the coherent throughput. On average, only 1% of the total flux reaches the fringe tracker detector. The 20 mirrors between M1 and the GRAVITY cryostat absorb up to 75% of the light. In addition, 50% of the light is scattered inside the IO beam combiner, 50% is lost by using the beamsplitter in the on-axis mode, and 50% because of the amplification of the detector (the so-called multiplicative noise). The last part (≈60% of the remaining flux) is lost when the light is injected into the single-mode fiber. Throughput improvements could therefore come from using fewer optical elements and possibly switching more of the mirrors involved from aluminium to gold coatings. Focus can also be placed on a more efficient light coupling into the single-mode fiber. This would mean a better AO system for a better Strehl ratio. The advantages of a good Strehl ratio are manifold. It increases the mean throughput. It also maximizes the fringe contrast by giving a better instantaneous flux equilibrium. Last, it avoids flux drop-outs, which are detrimental to good fringe tracking and model prediction.
Third, but not least, special care will be taken in monitoring and removal of the vibrations. The vibrations cause two main problems. Because they are usually at high frequency, they are difficult to predict and affect the fringe-tracking performance (in contrast to the atmospheric perturbations, which are easier to correct because they are at a lower frequency). The main problem, however, is that they limit how slow the fringe tracker can run because the decrease in fringe contrast hurts more than the additional integration time.
Have we reached the ultimate accuracy?
A false assumption is that sensitivity can only be gained by trading away accuracy, on the grounds that coherencing (i.e., keeping the fringes within the coherence length) requires fewer photons per coherence time than fringe tracking. However, by simultaneously using the group delay and the phase delay, it is possible to have the best of both worlds: the group-delay controller does the coherencing, while the phase-delay controller works in parallel as far as the S/N permits (Fig. 12).
Despite the high sensitivity, we routinely track high S/N fringes within 100 nm residual rms with GRAVITY. However, when the coherence time is short (τ 0 < 3 ms at 500 nm), the fringe tracker performance degrades (Fig. 13). This is caused by the open-loop latency of the fringe tracker (of about 4 ms). Observing during these conditions could clearly benefit from a fringe tracker with a shorter response time. Could we still improve the control loop during good atmospheric conditions, however?
The answer is yes. It lies in the proper management of the S/N by the Kalman filter. For convenience, we used an asymptotic estimation of the Kalman gain from the Riccati equation. A better Kalman filter would propagate errors as well as the state by also applying the equation of state to the covariance matrix: and derive the optimum gain each DIT. This could be achieved with additional computing power. An additional amelioration could come from modal control. This was proposed in Menu et al. (2012) and could theoretically be implemented. However, we have currently not been able to find any practical implementation that would make it robust for a realistic environment. | 16,150.4 | 2019-01-10T00:00:00.000 | [
"Physics"
] |
Assessment of surface topography modifications through feature-based registration of areal topography data
Surface topography modifications due to wear or other factors are usually investigated by visual and microscopic inspection, and—when quantitative assessment is required—through the computation of surface texture parameters. However, the current state-of-the-art areal topography measuring instruments produce detailed, areal reconstructions of surface topography which, in principle, may allow accurate comparison of the individual topographic formations before and after the modification event. The main obstacle to such an approach is registration, i.e. being able to accurately relocate the two topography datasets (measured before and after modification) in the same coordinate system. The challenge is related to the measurements being performed in independent coordinate systems, and on a surface which, having undergone modifications, may not feature easily-identifiable landmarks suitable for alignment. In this work, an algorithmic registration solution is proposed, based on the automated identification and alignment of matching topographic features. A shape descriptor (adapted from the scale invariant feature transform) is used to identify landmarks. Pairs of matching landmarks are identified by similarity of shape descriptor values. Registration is implemented by resolving the absolute orientation problem to align matched landmarks. The registration method is validated and discussed through application to simulated and real topographies selected as test cases.
Introduction
The quantitative assessment of surface topography modifications as a consequence of wear and/or other damage events is of paramount importance in many tribological studies related to product, process and material characterisation. Currently, the most common way to quantitatively assess topographic modifications is using profile measurement and consists of computing surface texture parameters on profiles measured before and after the modification event. The most frequently used parameter is the mean arithmetic roughness Ra as defined in the specification standard ISO 4287 [1]. Several other ISO 4287 profile parameters, such as Rz, Rq, and the parameters related to the material ratio curve (also described in ISO 4287), are also often considered when studying surface modifications [2].
The use of state-of-the-art areal surface topography measuring instruments to investigate surface modifications generated as a consequence of wear or other damage has been explored in recent literature [3][4][5][6][7][8]. Areal topography measurement allows the reconstruction of entire portions of the measured surface, and introduces a wide range of opportunities for quantification through the set of areal field parameters, as defined in ISO 25178-2 [9][10][11][12]. Some areal field parameters are a direct evolution of their profile counterparts (e.g. Sa is the areal equivalent of the Ra profile parameter). Other field parameters capture properties which are visible only on areal datasets (e.g. Std, the surface texture direction). However, while providing multiple viewpoints and useful information on how a topography has changed, areal field parameters summarise global properties of a topography, and are typically unable to inform about topographic modifications pertaining to individual features, such as a specific peak being eroded, or a cavity being filled. Such an assessment of localised, quantitative topographic modification could shed further light on wear and other surface modification phenomena. The quantification of shape changes pertaining to individual surface formations is in principle now possible, due to the increasingly accurate metrological rendition of topography achievable by state-of-the-art areal topography measuring instruments. However, one of the biggest obstacles to such a comparison is the need to correctly relocate the two topographies (i.e. before and after the modification event) in the same coordinate system, which is a precondition for the quantitative computation of several local topography differences, e.g. the computation of volumetric modifications.
Accurate repositioning/relocation of the topography datasets in the same coordinate system, often referred to as registration, is required because areal topography measuring instruments return topography information (height maps, point clouds or triangulated meshes) in their own coordinate system. Repeatable sample fixturing can offer limited help, as the sample datum surfaces may change if the sample is physically modified. Fixture-provided positioning accuracy may also be inadequate, considering the small scales of the topography modifications which often need to be assessed.
In the absence of a reliable, external relocation reference, information directly extracted from the measured datasets may be useful for relocation. The general idea is that the registration problem may be solved by aligning topographic regions that have remained invariant during the surface modification process.
In marker-based alignment, the operator is asked to manually place marker points to establish correspondences. Implicitly, while searching for correspondences, the operator tends to select regions that at least look visually similar, i.e. somewhat invariant across measurements. Once multiple correspondences are set (at least three, non-collinear, to solve registration with six degrees of freedom (6 dof)), the alignment transformation (translation and rotation parameters) can be found by solving the absolute orientation problem [13] (also referred to as the Procrustes superimposition problem in statistical literature [14]). The absolute orientation problem has closed-form solution [13], thus it can typically be solved in a fraction of a second on current hardware. The disadvantage is that markers are placed by an operator, relying on subjective visual assessment, resulting in a process which is often not accurate, traceable or reproducible [5]. Marker placement can be facilitated by particularly visible topographic landmarks, which may have been created on purpose (e.g. micro indentations), or may exist naturally (e.g. pores, scratches or any other visible singularities). For a topographic landmark to be suitable for registration, it should be easily recognisable, and conserved across measurements. However, if the surface modification being studied is subtle and spread across the area under observation, suitable landmarks may not be easily found by simple observation.
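For concreteness, one standard closed-form solver for the absolute orientation problem is the SVD-based (Kabsch-type) estimator sketched below; it is given here as an illustration and is not necessarily the specific formulation of reference [13].

```python
import numpy as np

def absolute_orientation(P, Q):
    """Rigid transform (R, t) minimising ||R @ P_i + t - Q_i|| over corresponding points.

    P, Q : (n, 3) arrays of corresponding marker coordinates (n >= 3, non-collinear).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3 x 3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Usage: markers picked on the 'before' dataset (P) and on the 'after' dataset (Q)
# give the transform that relocates the 'before' topography onto the 'after' one.
```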
An approach to address some of the problems associated with manual marker placement has been recently presented elsewhere [5]: the position of manually placed markers is refined by studying the cross-correlation function in the surroundings of the marker position, the position of the maximum cross-correlation value corresponding to the optimal shift to maximise local similarity. Limitations of such an approach are: (a) cross-correlation can only operate on the x, y translation plane, thus any rotational misalignment will possibly degrade the performance in the identification of matching regions; (b) because of the inability to handle rotation, local cross-correlation around a marker position is only suitable for small displacements, as larger displacements may generate multiple maxima in the cross-correlation function. Finally, the approach does not remove the issue that an initial marker placement is still required, which again implies that some landmarks should still be visible.
When topographic landmarks useful for registration are not easily identifiable by visual inspection, global alignment can be attempted instead. In global alignment, an alignment transform is sought which attempts to maximise the degree of overlap between the two entire topography datasets.
The two most common algorithmic approaches to global alignment are based either on computing the cross-correlation function over the entire datasets, and finding its maximum [8], or on applying the iterative closest point (ICP) algorithm [15,16]. As mentioned earlier, the disadvantage of cross-correlation is that it reduces the search space to translations only. The ICP method on the contrary operates in 6 dof [17]. This provides greater freedom while investigating the solution space. However, the biggest limitation of ICP is that it is a fine alignment solution, i.e. it requires a previous, coarse alignment to operate effectively. This is because ICP operates as a minimisation process, with scarce capabilities to verify premature convergence to local minima [15,17].
The fundamental problem with global alignment is that it tries to maximise overlap everywhere, including in the regions that have undergone modification and should not be aligned. Global alignment thus works primarily in those scenarios where invariant regions are prevalent over modified ones, so that the modified regions weigh little in the alignment process.
Clearly, a selective alignment solution, based on attempting to maximise the overlapping of invariant regions only, would be preferable. To cope with visual identification challenges and to ensure better repeatability, the identification of suitable matching regions across datasets should be algorithmically driven. However, algorithmic approaches to implement the automated identification and alignment of landmark regions have been scarcely explored in the literature on surface metrology. Some work can be found on computing local topography descriptors within moving windows [18], and on using local topography descriptors to obtain coarse alignment results [16,19]. In this work, a selective alignment approach based on the automated identification of reference landmarks and their alignment is presented, where a transform originally devised for image processing, the scale invariant feature transform (SIFT) [20], is adopted for the identification of landmarks.
Test cases
Two simulated test cases and one real test case were selected. The simulated test cases featured known misalignments, thus could be used to quantitatively assess the accuracy of the registration. The real test case was used to investigate the behaviour of the method when confronted with real-life measured datasets.
Simulated test cases
The simulated test cases consisted of surface topographies algorithmically generated in their original state, and then subjected to simulated modification processes. The two resulting topographies (before and after the modification process) were then subjected to simulated measurement ten times each, with simulated measurement error on local height determination and sample localisation error (position of the field of view on the sample surface). Relocation error between each pair of 'before and after' measurements was obtained by combining the sample localisation errors known from simulation.
The first simulated test case (referred to as test case n. 1) consisted of a step-like feature subjected to visible, localised erosion damage (see figure 1). The second simulated case (referred to as test case n. 2) featured a base topography comprised of a mixture of deterministic features (parallel high-spatial frequency pattern) and random features (hills, dales and scratches) subjected to crater-like, material removal phenomena (see figure 2).
To generate the simulated datasets:
-Deterministic topography elements (e.g. the step-like feature of test case 1, the pattern in test case 2) were procedurally generated by analytical equations. Random components were added as multi-scale overlays of spatially correlated random noise at different wavelengths and amplitudes.
-The modified topographies (i.e. after the modification events) were obtained by simulated material subtraction of further sets of deterministic and multi-scale, spatially interpolated random components.
-Final results (pre and post the modification event) were saved as triangle meshes.
-Simulated measurements (either on the original or the modified topography) were implemented by raster scanning, i.e. by intersecting the triangulated mesh with z rays located along the rows and columns of an x, y grid of 600×600 points and (1.5×1.5) μm point spacing, consistent with typical lateral resolutions and fields of view achievable with current areal topography measuring instruments (e.g. focus-variation, confocal, and coherence-scanning interferometry instruments [21]). To simulate error in local height measurement, each intersection point (between the z ray and the triangulated mesh, i.e. the deterministic result) was disturbed by a random component along the z axis, sampled from a normal distribution N(0, σz) with σz = 0.2 μm, which represents a worst-case scenario in terms of the vertical precision of current instruments. To introduce localisation error, the position and orientation of the raster scanning grid were randomly displaced on the x, y plane and rotated about the z axis. The maximum allowed displacement was 10% of the diagonal of the raster scanning grid; the maximum rotation was 5°. The maximum lateral and angular displacements were found to be consistent with manual placement of a sample under the measuring instrument (a minimal sketch of this measurement simulation is given after this list).
-Ten measurements were simulated on each test case (pre and post surfaces), each time with new simulated measurement error and new lateral and angular displacement. With two test cases, this led to a total of twenty registration problems of known solution.
-The results of the measurement simulation were recorded as height maps, i.e. gridded sets of height values, the typical format of raw measurement data obtained by using areal topography measuring instruments. Measurements from topographies pre and post modification were referred to as 'pre' and 'post' datasets respectively.
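A minimal MATLAB sketch of one such simulated measurement is given below. It samples an assumed analytic surface f(x, y) instead of intersecting a triangle mesh, but reproduces the grid size, point spacing, height noise and localisation errors described above; the surface definition is an illustrative stand-in.

```matlab
% Minimal sketch of one simulated measurement: sample a reference topography
% z = f(x, y) on a 600 x 600 grid (1.5 um spacing) that has been randomly
% displaced in x, y (<= 10% of the grid diagonal) and rotated about z (<= 5 deg),
% then perturb the heights with N(0, sigma_z), sigma_z = 0.2 um.
% Here f is an illustrative analytic surface standing in for the triangle mesh.
f  = @(x, y) 0.5 * sin(2*pi*x/50) + 0.2 * cos(2*pi*y/80);   % assumed test surface [um]
n  = 600; dx = 1.5;                                          % grid size and spacing [um]
[u, v] = meshgrid((0:n-1) * dx);                             % nominal scan grid
diagLen = hypot((n-1)*dx, (n-1)*dx);
shift = (rand(1, 2) - 0.5) * 2 * 0.10 * diagLen;             % lateral localisation error
theta = (rand - 0.5) * 2 * 5 * pi/180;                       % angular localisation error
xs = cos(theta)*u - sin(theta)*v + shift(1);                 % displaced/rotated grid
ys = sin(theta)*u + cos(theta)*v + shift(2);                 % (rotation about grid origin for brevity)
sigma_z = 0.2;                                               % height noise std [um]
zmap = f(xs, ys) + sigma_z * randn(n);                       % simulated height map
```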
Real test case
The real test case consisted of a block of AISI 316 L steel, with top surface obtained by face milling ('pre' status). The surface modification ('post' status) was imparted by mechanical interaction with a manual tool, in the form of a series of localised compressive deformations. The surface was measured in its 'pre' and 'post' status by means of an Alicona Infinite Focus G5, an areal topography measurement instrument based on focus-variation technology [21], using a 20×objective at 1×Zoom, NA 0.40, FOV (0.42×0.42) mm, lateral resolution 0.43 μm, acquiring regions of variable size, in stitching mode (i.e. by collating individual fields of view). The sample was physically moved away from the focus-variation microscope to be mechanically modified, and then placed back again under the microscope for the second measurement, without particular care for relocation, by only visually estimating consistency with the previous placement. In figure 3, height maps obtained by measurement are shown for regions cropped to an equal area of (2.54×2.80) mm. The real test case is particularly interesting because many portions of the machined surface are self-similar, making it difficult to visually identify corresponding landmarks in the pre and post topographies.
Proposed registration method
The proposed registration method consists of a series of steps, illustrated in the following paragraphs with the help of the simulated test case n. 1.
Topography data pre-processing
Firstly, the height maps are levelled by subtraction of the least-squares mean plane (the most common F-operator, as prescribed by ISO 25178-2 [9]).
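A minimal MATLAB sketch of this levelling step is given below; the height map Z and the pixel spacings dx, dy used here are illustrative inputs.

```matlab
% Minimal sketch: levelling by subtraction of the least-squares mean plane
% (F-operator). Z is the height map; dx, dy are the pixel spacings.
Z = peaks(200) + 0.05 * (1:200)' + 0.02 * (1:200);   % illustrative tilted surface
dx = 1.5; dy = 1.5;
[ny, nx] = size(Z);
[X, Y] = meshgrid((0:nx-1) * dx, (0:ny-1) * dy);
A = [X(:), Y(:), ones(numel(Z), 1)];                 % design matrix of the plane
p = A \ Z(:);                                        % p = [slope_x; slope_y; offset]
Zlev = Z - reshape(A * p, ny, nx);                   % levelled height map
```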
Computation of local topography descriptors
A modified version of the SIFT algorithm [20], adapted to operate on height maps, is used to identify characteristic points (referred to as keypoints) in the pre and post datasets. This is accomplished through the following steps. Height maps are converted into pseudo-intensity images by encoding height values as grayscale information. Then, a scale-space decomposition [20] is obtained as follows (figure 4): the intensity image I(x, y) is convolved with a Gaussian kernel G(x, y, σ) multiple times, each time doubling the size of the kernel. This is roughly equivalent to halving the resolution of the image at each step (i.e. following a dyadic series), that is, increasing the scale by one octave. Each level of the decomposition L(x, y, σ) is thus defined as L(x, y, σ) = G(x, y, σ) ∗ I(x, y), and the difference between consecutive levels, D(x, y, σ) = L(x, y, 2σ) − L(x, y, σ), is used to obtain the levels of an approximated Laplacian pyramid (figure 4, right). Consistent with the original SIFT method [20], initial candidates for the role of keypoint are identified as those local maxima in the Laplacian pyramid levels (i.e. high-curvature points) which would also be local maxima when considering the corresponding points in the levels above and below (figure 5).
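A minimal MATLAB sketch of this decomposition is given below. The base smoothing scale and the number of levels are illustrative values, and the actual computation in this work relies on the VLFeat library rather than on this simplified code.

```matlab
% Minimal sketch: Gaussian scale-space levels L(x, y, sigma) obtained by
% repeated smoothing with a kernel of doubling size (one octave per step),
% and approximated Laplacian levels as differences of consecutive Gaussians.
I = peaks(256);                                 % illustrative pseudo-intensity image
I = (I - min(I(:))) / (max(I(:)) - min(I(:)));  % rescale to [0, 1] grayscale
nLevels = 5; sigma0 = 1.6;                      % illustrative values
L = cell(nLevels, 1); D = cell(nLevels - 1, 1);
for k = 1:nLevels
    s = sigma0 * 2^(k - 1);                     % sigma doubles at each level
    r = ceil(3 * s);
    [gx, gy] = meshgrid(-r:r);
    G = exp(-(gx.^2 + gy.^2) / (2 * s^2));
    G = G / sum(G(:));                          % normalised Gaussian kernel
    L{k} = conv2(I, G, 'same');                 % L(x, y, sigma)
end
for k = 1:nLevels - 1
    D{k} = L{k + 1} - L{k};                     % ~ Laplacian (difference of Gaussians)
end
```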
The first set of tentative keypoints is then subjected to a series of post-processing operations, again following the SIFT method [20]:
-The keypoint positions are refined to sub-pixel accuracy by locally fitting the region surrounding each keypoint with a Taylor series expansion, and then finding the new local maximum. The expansion is obtained by considering the Laplacian pyramid D as a function of the three variables x = (x, y, σ), so that D(x) ≈ D + (∂D/∂x)ᵀ x + ½ xᵀ (∂²D/∂x²) x, and the refined positions x̂ corresponding to local maxima (i.e. x, y, σ triplets) can be found by setting the first-order derivatives to zero, i.e. x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x) (a minimal numerical sketch of this refinement is given after this list).
-Low contrast keypoints are eliminated by thresholding the derivatives of the curvature computed across scales.
-Keypoints corresponding to edges are eliminated by checking curvature across the two principal directions: an edge point will have larger curvature across the edge, and almost zero along the edge.
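A minimal numerical sketch of the sub-pixel refinement step is given below, using central finite differences to build the gradient and Hessian of the pyramid D around a detected maximum. It is an illustrative sketch of the Taylor-fit idea rather than the VLFeat implementation; D, r, c and s are assumed inputs.

```matlab
% Minimal sketch: sub-pixel/sub-level refinement of a keypoint by a local
% quadratic (Taylor) fit of the pyramid D, treated as a function of (x, y, sigma).
% D is a 3-D array (rows, cols, levels); (r, c, s) is the detected maximum,
% assumed not to lie on the border of the array.
function offset = refine_keypoint(D, r, c, s)
    % first derivatives (central differences)
    g = [ (D(r, c+1, s) - D(r, c-1, s)) / 2;      % dD/dx
          (D(r+1, c, s) - D(r-1, c, s)) / 2;      % dD/dy
          (D(r, c, s+1) - D(r, c, s-1)) / 2 ];    % dD/dsigma
    % second derivatives (Hessian)
    Dxx = D(r, c+1, s) - 2*D(r, c, s) + D(r, c-1, s);
    Dyy = D(r+1, c, s) - 2*D(r, c, s) + D(r-1, c, s);
    Dss = D(r, c, s+1) - 2*D(r, c, s) + D(r, c, s-1);
    Dxy = (D(r+1, c+1, s) - D(r+1, c-1, s) - D(r-1, c+1, s) + D(r-1, c-1, s)) / 4;
    Dxs = (D(r, c+1, s+1) - D(r, c-1, s+1) - D(r, c+1, s-1) + D(r, c-1, s-1)) / 4;
    Dys = (D(r+1, c, s+1) - D(r-1, c, s+1) - D(r+1, c, s-1) + D(r-1, c, s-1)) / 4;
    H = [Dxx, Dxy, Dxs; Dxy, Dyy, Dys; Dxs, Dys, Dss];
    offset = -H \ g;   % [dx; dy; dsigma], from setting the gradient of the fit to zero
end
```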
A topography descriptor is computed for each remaining keypoint, as illustrated in figure 6, again consistent with the SIFT method [20]. Firstly, the local gradient vector is computed on the Gaussian scale-space decomposition, at the octave, level and position where the keypoint is located. An example gradient vector is shown by the black arrow in figure 6(a). Magnitude m(x, y) and orientation θ(x, y) of the gradient vector are obtained by means of finite differences as follows: m(x, y) = √{[L(x+1, y) − L(x−1, y)]² + [L(x, y+1) − L(x, y−1)]²} and θ(x, y) = tan⁻¹{[L(x, y+1) − L(x, y−1)] / [L(x+1, y) − L(x−1, y)]}. The surroundings of the keypoint are then resampled over a grid of specified width and resolution, oriented as the gradient vector (figure 6(a)). Within each cell of the resampling grid, local gradient vectors are computed (magnitude and angle), as shown in figure 6(b). The magnitudes are then weighted by a Gaussian function covering the entire resampling grid, so that gradient vectors located far from the keypoint are attenuated (see again figure 6(b)). Finally, a three-dimensional binning of the gradient vectors is performed as shown in figure 6(c). Binning is on grid cells (a 4×4 configuration as shown in figure 6(c)) and on orientation values (eight bins is the default, as shown in figure 6(c)). A three-dimensional histogram is finally built with the average intensities (magnitudes) of the binned gradient vectors (in figure 6(c) this leads to a total of 4×4×8 histogram values). The histogram values are stored as the final SIFT descriptor, together with the local orientation of the resampling grid computed at the keypoint.
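The gradient and orientation-binning operations can be sketched as follows in MATLAB. The patch half-size, the Gaussian weighting width and the keypoint coordinates (r0, c0) are illustrative placeholders; the full 4×4-cell descriptor would be obtained by repeating the histogram step for each cell of the resampling grid.

```matlab
% Minimal sketch: gradient magnitude and orientation of a smoothed level L by
% finite differences, and a Gaussian-weighted 8-bin orientation histogram over
% a patch around a keypoint (one cell of the descriptor grid).
L  = conv2(peaks(128), ones(5)/25, 'same');            % illustrative smoothed level
r0 = 64; c0 = 64;                                      % illustrative keypoint (row, col)
[m, n] = size(L);
Lx = zeros(m, n); Ly = zeros(m, n);
Lx(:, 2:n-1) = (L(:, 3:n) - L(:, 1:n-2)) / 2;          % dL/dx
Ly(2:m-1, :) = (L(3:m, :) - L(1:m-2, :)) / 2;          % dL/dy
mag   = sqrt(Lx.^2 + Ly.^2);                           % m(x, y)
theta = atan2(Ly, Lx);                                 % theta(x, y), in (-pi, pi]
% 8-bin weighted histogram over a (2h+1)x(2h+1) patch centred on the keypoint
h = 7; [wx, wy] = meshgrid(-h:h);
w  = exp(-(wx.^2 + wy.^2) / (2 * (h/1.5)^2));          % Gaussian weighting
tp = theta(r0-h:r0+h, c0-h:c0+h);
mp = mag(r0-h:r0+h, c0-h:c0+h) .* w;
bin   = mod(floor((tp + pi) / (2*pi/8)), 8) + 1;       % orientation bin 1..8
hist8 = accumarray(bin(:), mp(:), [8, 1]);             % weighted orientation histogram
```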
Identification of initial correspondences
Keypoints computed from a pair of pre and post datasets, located at the same octave and pyramid level, and with similar SIFT descriptors, are selected as candidates for forming matched pairs. Their similarity is an indication of their suitability to act as correspondences, as they likely belong to portions of topography that are invariant except for measurement error. Therefore, in ideal conditions, the identification of a geometric transformation that maximises the degree of overlapping of the keypoints of each matched pair should correspond to the ideal registration solution. In reality, however, not all correspondences may be equally valid, and simple ranking by similarity of descriptors may not suffice. An example of such a condition is shown in figure 7, where an initial set of correspondences has been identified on a pair of pre and post datasets belonging to the simulated test case n. 1. Note that the keypoints are shown overlaid on the original datasets, whilst in reality they may belong to different octaves and levels of the associated pyramids.
Pruning of the correspondences and identification of the registration transform
In the conventional image processing scenarios where the SIFT is typically applied, i.e. the creation of panorama images via stitching of individual photographs [20], iterative methods based on random sample consensus (RANSAC) [22] are used to identify a homographic transformation which is consistent with the highest number of correspondences. The homographic (non-rigid) transformation is adopted because in stitching one wants to compensate for slight differences between images due to changes of scale and perspective, caused by the different points of view from which the images were taken. Differently from stitching, in this work the alignment of correspondences is used to achieve co-localisation for metrological purposes, thus only rigid transformations are allowed. For this purpose, a new variant of the original SIFT method is proposed, consisting of using RANSAC in combination with a rigid transformation, as follows (a minimal sketch of the resulting pruning loop is given after this list).
1. A minimal subset of the candidate correspondences is selected at random. Controlling parameter for this step: NCM.
2. The 'model' is a rigid transformation leading to the maximum alignment of the keypoints belonging to the selected correspondences. The transformation is comprised of a rotation about the z axis and a translation in the x, y plane, because the keypoints are defined in the x, y plane. The model is estimated starting from the selected correspondences, by solving a least-squares minimisation problem through the Procrustes superimposition method [14].
3. Once the model (rigid transformation) has been identified, all the available correspondences are tested against it. A correspondence fits well with the model if its two keypoints are aligned after the application of the transform (in which case the correspondence is said to agree with the model, and it is classified as 'inlier'). If the two keypoints are not aligned, the correspondence is considered as 'outlier', i.e. not in agreement with the model. To incorporate a level of acceptable error in the alignment, a threshold on Euclidean distance is set to determine whether a correspondence is an inlier or an outlier. Controlling parameter for this step: OTV (outlier threshold value).
4. The quality of the model can be assessed by computing the total percentage of inliers; the higher the percentage (larger consensus), the better the model. In figure 8, the results of pruning by RANSAC are shown on the same dataset that had originated 127 initial correspondences, as shown in figure 7. The final result amounts to 65 remaining correspondences (51.18% inliers). RANSAC parameters were: NCM: 3; OTV: 6 (pixels); MNI: 100.
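The pruning loop described above can be sketched as follows in MATLAB. P and Q are the matched keypoint coordinates of the pre and post datasets; the stopping criterion (a fixed number of iterations MNI) and the final refit on all inliers are illustrative choices, not necessarily those of the original implementation.

```matlab
% Minimal sketch of the pruning loop: RANSAC with a rigid (rotation + translation)
% model estimated by 2-D Procrustes superimposition. P and Q are N-by-2 arrays of
% matched keypoint coordinates (pre and post). NCM, OTV, MNI as described above.
function [Rb, tb, inliers] = ransac_rigid(P, Q, NCM, OTV, MNI)
    N = size(P, 1); best = -1; inliers = false(N, 1); Rb = eye(2); tb = [0; 0];
    for it = 1:MNI
        idx = randperm(N, NCM);                       % 1. random minimal sample
        [R, t] = rigid2d(P(idx, :), Q(idx, :));       % 2. model by Procrustes
        d = sqrt(sum((Q' - (R * P' + t)).^2, 1));     % 3. residual distances
        in = d < OTV;                                 %    inliers vs outliers
        if sum(in) > best                             % 4. keep the largest consensus
            best = sum(in); inliers = in(:); Rb = R; tb = t;
        end
    end
    [Rb, tb] = rigid2d(P(inliers, :), Q(inliers, :)); % refit on all inliers
end

function [R, t] = rigid2d(P, Q)
    % Rigid 2-D Procrustes/Kabsch fit mapping P onto Q (least squares).
    cP = mean(P, 1); cQ = mean(Q, 1);
    [U, ~, V] = svd((P - cP)' * (Q - cQ));
    R = V * diag([1, sign(det(V * U'))]) * U';
    t = cQ' - R * cP';
end
```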
Generation of registered pre and post datasets
The final registration transform identified by RANSAC pruning is comprised of a 2×2 rotation matrix R (or equivalently, a rotation angle about the z axis) and a 2×1 translation vector T (translation in the x, y plane). During pruning, the transformation is used to rotate and translate the keypoints of one of the datasets to align them with the keypoints of the other. This is a transformation in 3 dof, on the x, y plane, applied to two-dimensional point clouds. However, after pruning is complete, the final transformation must be applied to the entirety of a dataset in order to align it with the other. This implies that the 'moving' dataset (a height map) must be subjected to resampling by interpolation (if the translation is not exactly a multiple of the pixel width, and if there is any rotation). Thus, the moving dataset is resampled by linear interpolation at the same locations as the associated 'fixed' dataset, so that a one-to-one comparison of height points between the aligned height maps is possible. An example of a registered pair of height maps is shown in figure 9. The height maps are shown in false colours to better highlight the quality of alignment of the invariant regions.
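A minimal MATLAB sketch of this resampling step is given below, assuming Zmoving and Zfixed are the two height maps, Xm and Ym are the grid coordinates of the moving map, and (R, t) is the rigid transform that maps moving keypoints onto fixed ones; the sample surface and transform used here are illustrative.

```matlab
% Minimal sketch: apply the final rigid transform (R, t) to the 'moving' height
% map by resampling it with linear interpolation at the pixel locations of the
% 'fixed' height map (inverse mapping), so the two maps share a common grid.
dx = 1.5; dy = 1.5;
Zfixed  = peaks(200);                                 % illustrative fixed height map
Zmoving = Zfixed;                                     % illustrative moving height map
theta = 2 * pi/180;
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];  % illustrative rigid transform
t = [3; -2];
[ny, nx] = size(Zfixed);
[Xf, Yf] = meshgrid((0:nx-1) * dx, (0:ny-1) * dy);    % fixed-grid coordinates
[Xm, Ym] = meshgrid((0:nx-1) * dx, (0:ny-1) * dy);    % moving-grid coordinates
Rinv = R';                                            % inverse of a rotation matrix
src = Rinv * ([Xf(:)'; Yf(:)'] - t);                  % map fixed pixels into moving frame
Xs = reshape(src(1, :), ny, nx);
Ys = reshape(src(2, :), ny, nx);
Zmoved = interp2(Xm, Ym, Zmoving, Xs, Ys, 'linear');  % resampled moving map (NaN outside)
```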
Results
In this section, the results of the application of the proposed registration method are illustrated for the simulated test cases n. 1 and n. 2, and for the real test case. A comprehensive, quantitative assessment of performance is only possible for the simulated test cases, as the extents of the surface modifications and the imposed initial misalignment are defined by design. However, all the test cases, including the real one, can still be visually inspected to appreciate many aspects of the registration result.
All the results have been obtained through an original Matlab [23] implementation of the registration method. All the code is native except for the computation of the SIFT descriptors, which relies on the VLFeat open source library [24].
The registration parameters for the entire procedure were the same for the simulated and real test cases, except for RANSAC OTV (see below). The full set of parameters is the following: The difference in OTV is likely imputable to a larger amount of self-similar regions in the simulated surfaces, better treated with a less strict pruning of correspondences.
Analysis of the localisation of correspondences
The first interesting element of investigation is to determine whether the correspondences identified by SIFT, and surviving the RANSAC pruning process, are located in regions of the topographies that have not undergone significant modification in the transition from the pre to the post condition (i.e. invariant regions, except for measurement error). This can be qualitatively assessed on any test case where modifications are clearly visible, but can be reliably assessed in a quantitative way only for simulated datasets where the regions affected by modification are clearly marked. In figure 10, a binary map indicating where the modifications are located for one of the datasets of the simulated test case n. 1 is shown along with the positions of the SIFT keypoints, which survived pruning. The dataset in figure 10 is the same as that previously illustrated in figures 7 and 8.
The analysis of the entire set of ten registration problems for the test case n. 1 and ten problems for the test case n. 2 indicated that, on average, 112 correspondences (pairs of keypoints) survived RANSAC pruning for case n. 1 and 685 for test case n. 2. Of the surviving correspondences, 99.92% (mean for test case 1) and 99.97% (mean for test case 2) fell within 'invariant' regions (i.e. both keypoints of the correspondence must lie on invariant regions for the correspondence to be counted as valid). Thus, the likelihood of encountering misplaced correspondences between modified and unmodified regions is expected to be low.
Registration performance and dependency on amount of initial misalignment
The results of the twenty simulated registration problems for the simulated test cases n. 1 and n. 2 were plotted against the simulated displacements to assess the overall performance in terms of residual alignment error, and to investigate whether registration performance would get worse or improve with the amount of initial misalignment. Separate plots were generated for translation (x and y displacement combined into a vector, vector length considered) and for rotation (angular displacement about the z axis). Lateral displacement values were expressed in pixel units (because all the simulated datasets have the same pixel width/spacing), whilst angular displacement values were expressed in degrees. Each plot was fitted to a straight line by regression, to identify possible trends. In figure 11, 'initial' displacement values are those set by the simulation, whilst 'final' values are the residual errors remaining after application of the registration transform.
For the simulated test case n. 1, the analysis of figure 11(a) shows that the residual, translation displacement error after registration is on average one or two orders of magnitude smaller than the applied initial displacement. For a maximum simulated displacement of 77.2 pixels, the maximum residual lateral displacement after registration was 0.63 pixels (i.e. sub-pixel), with an average residual displacement of 0.43 pixels. For a maximum angular error generated by simulation (4.87°) ( figure 11(b)) the maximum final angular displacement error after registration was 0.07°, with an average residual displacement of 0.03°, i.e. two orders of magnitude smaller.
The results for the simulated test case n. 2 are shown in figure 12. The analysis of figure 12(a) shows that the residual, translation displacement error after registration was on average two or three orders of magnitude smaller than the applied initial displacement. For a maximum simulated displacement of 61.7 pixels, the maximum residual lateral displacement after registration was 0.03 pixels, with an average residual displacement of 0.02 pixels. For a maximum angular error generated by simulation (4.49°) (figure 12(b)) the maximum final angular displacement error after registration was 0.005°, with an average residual angular displacement of 0.003°, i.e. three orders of magnitude smaller.
The regression lines shown in figures 11 and 12, despite the less than ideal fitting results (as indicated by the R² coefficients reported in the figure captions), indicate little dependence of registration performance on the amount of imposed, initial displacement (almost horizontal regression lines and large residuals with no apparent trend). Interestingly, in all cases, registration performance improved with larger amounts of initial displacement.
Registration performance and dependency on the test case
As already hinted at in figures 11 and 12, the observed registration performance for the simulated test cases n. 1 and n. 2 was different, with registration for test case n. 2 performing consistently better. This observation was confirmed by the box plots shown in figure 13, illustrating the distributions of the final lateral and angular errors (i.e. the residual displacement after alignment). Not only was the mean error significantly smaller for test case n. 2, but the scatter of the performance values was also smaller.
Registration performance for the real test case
In figure 14, a cross-section of the aligned topography datasets is shown. The surfaces are rendered in different colours to highlight the regions where one surface is above or below the other. Despite it being impossible to obtain an accurate, quantitative assessment of the quality of the registration result (as the real initial misalignment is unknown), a simple visual inspection of the overlap between the machining marks of the two datasets (see figure 14(b)) reveals a very good result. Moreover, the inspection of the local surface heights not only reveals the depressed regions corresponding to the mechanical modification processes, which were immediately visible even before the pre-post comparison, but also highlights the presence of raised regions surrounding each depression, consistent with the surface modification process.
A successful relocation allows for individual surface topography modifications to be quantitatively assessed. For example, in figure 15 it is shown how the void volume of an individual depression can be quantified.
Discussion
The results presented in this work show that the proposed registration method based on automated identification of correspondences is successful at relocating topography data measured pre and post a modification event. The analysis of the results on the simulated datasets shows that, in most cases, the residual displacement errors (i.e. the residual displacement after registration) are at the sub-pixel level. Assuming measurements by optical instrumentation, in most cases the observed errors would be below the optical resolution limits [21]. The results also appear to be qualitatively confirmed for the real test case, despite the lack of a proper reference for a comprehensive, quantitative assessment. However, a number of questions remain to be addressed.
The biggest outstanding issue to address pertains to whether the topographies generated by simulation, and the real test case investigated, are sufficiently representative of the wide array of diverse scenarios which may be observed in real industrial practice. How would the performance change with different topographies? What about smaller or larger modifications? The simulated test cases n. 1 and n. 2 feature topographies with different amounts of determinism and randomness, modifications that have different spatial distributions and volumetric relevance, and additional topography features which may weigh more or less negatively in the determination of reliable registration landmarks. The results for the two simulated test cases show important differences, in particular concerning the performance related to the compensation of linear displacement. Although one real test case was presented, showing that the method can be successfully applied (at least, as far as is visually appreciable), it would be prudent to test the method on a larger number of measured surfaces, and this work will form the basis of future publications.
Testing on real surfaces is far from being straightforward though. Major challenges remain unsolved on how to actually evaluate the 'goodness' of a registration result. Real measurements would lack associated knowledge of the ideal registration transform, which on the contrary was fundamental in this work to assess performance on the simulated test cases. If one can only rely on the information contained within the measured datasets, and no external common reference for co-localisation can be adopted, then the same metrics used to solve the minimisation problems underlying registration could be adopted as approximations. In other words, one may operate under the assumption that the real problem of minimising the distance between a dataset and its ideal position (relocation problem) can be approximated by either minimising the distance between visible landmarks that have been recognised as occupying the same position, or by minimising the global distance between the two datasets. The latter approach had been discussed in section 1 under the name of 'global registration', and is demonstrated to fail in the presence of significant modifications between the pre and post datasets. Regardless, even operating by aligning landmarks is marred by approximation, thus further work is needed to set up experimental conditions where an actual, external co-localisation reference can be set up to validate the proposed registration methods on real measurements.
Another issue is whether the current method could be further improved, either by operating on its many control parameters (e.g. the SIFT parameters, or the RANSAC pruning parameters), or by pairing it with a further fine registration method, such as one derived from ICP, as already investigated on surface topography [15,16]. Experimental investigations are in progress, but again the problem is that the ICP method is intrinsically global, and must be turned into a selective technique if one wants to avoid overfitting of significantly modified regions. As the current results already lead to sub-pixel accurate registration, it may very well be that further refinement is not needed, as it would be difficult to evaluate actual improvement when operating below the measurement resolution limits.
Finally, the problem of how to define and compute the uncertainty associated with relocation is of fundamental importance, and has not been addressed yet. This is both because it is still unclear how uncertainty should be associated with areal topography datasets such as those serving as inputs to the method described here (see [24-26] for a discussion of the challenges of building an uncertainty budget for a measured areal topography dataset), and because it is still unclear how such information should propagate through the relocation method, finally affecting the registration result. Clearly, knowledge of the relocation-associated uncertainty is fundamental to later estimating the uncertainty associated with any topographic comparison, and consequently, the traceability of the whole data processing and analysis chain.
Conclusions
In this work, a method was proposed for the geometric registration (i.e. co-localisation within the same coordinate system) of surface datasets obtained by areal topography measuring instruments. The registration method is useful in many surface measurement scenarios where the interest is the quantitative assessment of surface topography evolution as a consequence of modification events, e.g. due to functional life or mechanical testing.
As opposed to global alignment solutions, the method proposed in this work is based on the algorithmic identification of invariant regions which can act as fiducial landmarks to be superimposed in a selective alignment process. Correspondences between landmarks are algorithmically recognised by computing the similarity of candidate regions encoded through a dedicated shape descriptor adapted from the scale invariant feature transform (SIFT), a popular descriptor originally devised for image processing. The rigid transformation (rotation and translation) leading to alignment is then computed by solving an absolute orientation/Procrustes superimposition problem.
The performance of the proposed procedure was quantitatively assessed through application to simulated test cases of known solution. The performance of the method was also qualitatively assessed on a real test case involving a machined surface subjected to mechanical modification and measured by focus-variation microscopy. For the simulated test cases, the registration results indicate residual errors at the sub-pixel scale, usually below the resolution limits of optical areal topography measuring systems. Visual inspection of the real test case also indicates good relocation results, with a visually accurate alignment of visible surface features (machining marks).
Future work will involve additional refinements to the registration method based on understanding its performance when applied to a wider array of surface types, the development of experimental solutions to validate the method on real measurement results, and the development of methods to estimate the associated measurement uncertainty.
| 8,034.6 | 2019-05-03T00:00:00.000 | ["Engineering", "Materials Science"] |
The Emergence of Wittgenstein’s Views on Aesthetics in the 1933 Lectures
In this paper I offer a genetic account of how Wittgenstein developed his ideas on aesthetics in his 1933 lectures. He argued that the word ‘beautiful’ is neither the name of a particular perceptible quality, nor the name of whatever produces a certain psychological effect, and unlike ‘good’, it does not stand for a family-resemblance concept either. Rather, the word ‘beautiful’ has different meanings in different contexts as we apply it according to different criteria. However, in more advanced regions of aesthetics the word ‘beautiful’ ceases to play an important role. Instead, we judge things to be more or less correct according to genre-specific standards or criteria, which in an aesthetic discussion are presupposed, rather than argued for. Finally, Wittgenstein came to realise that providing support for an aesthetic appraisal according to some given criteria is not the only and perhaps not even the main focus of aesthetic discussion. More interesting to him became the idea of a puzzle or perplexity in aesthetics, which he discussed in greater detail in his 1938 lectures.
Aesthetic questions, however, are often regarded as comparatively unimportant or secondary. To elevate aesthetics to the same level of dignity as ethics might be at least part of the point of Wittgenstein's bold identification of the two in the Tractatus. In that, he would display his cultural background: the high esteem in which aesthetic questions were held both in fin-de-siècle Vienna in general and in the Wittgenstein family in particular. 6
II
The Tractatus idea of juxtaposing ethics and aesthetics continues in the lectures of the May Term of 1933 (that is, in Wittgenstein's so-called middle period, characterised by some radical criticisms of his earlier philosophy and increasing emphasis on the manifold uses of language). Although ethics and aesthetics are no longer boldly identified, Wittgenstein repeatedly declares them to be conceptually so similar as to allow them to be discussed jointly and switches freely from one to the other to illustrate parallel conceptual points: 'Practically everything I say of "beautiful" applies in a slightly different way to "good"' (M, p. 339). 7 The philosophical starting point of the May Term lectures is verificationism and the observation that the meaning of a word is 'the way in which it is used' (M, p. 308). Wittgenstein rejects the idea that nouns must stand for 'something at which you can point' (M, p. 313) and then introduces ideas, later to be developed in the Philosophical Investigations, of how the meaning of a word can be far more flexible, less sharply defined than one may be inclined to believe. Sometimes we change the meaning of a word in the course of a conversation, 'as we go along' (see PI, §83). Meanings are often vague, so that no precise boundaries can be drawn (M, p. 313; see PI, §§76-77, 88). And he introduces the idea of a family resemblance concept, using the same example as in the Investigations (PI, §§65-71): the concept of a game is not held together by a set of qualities all games have in common. 8 Rather, we explain the concept by giving an open-ended list of examples, each being similar to some others in some respect, although there may be no one similarity shared by all of them (M, pp. 323-24). He then applies these insights to the word 'good', 9 and later also to 'beautiful'. Furthermore, he suggests that the word 'good' may be used attributively: what counts as a (morally) good x may partly depend on what kind of thing x is (M, p. 325). He also suggests that the meaning of the word 'good' is determined by what in discussion we present and accept as reasons for calling something good (ibid.).
III
After that sketch of his new undogmatic and flexible ideas about meaning (and a digression on Frazer's anthropological explanations), Wittgenstein focuses on two prevalent accounts of meaning, first again with respect to the word 'good', before changing to a more detailed discussion of the word 'beautiful'. The ideas in question are, in the case of beauty, that (i) a certain perceptible quality (or set of qualities) is what all beautiful things have in common, or that (ii) beauty is the causal power to give us certain pleasant sensations. As a case of comparison, Wittgenstein considers the concept of elasticity: If I want to know whether a rod is elastic I can find out by looking through a microscope to see the arrangement of its particles, the nature of their arrangement being a symptom of its elasticity, or inelasticity. Or I can test the rod empirically, e.g., see how far it can be pulled out. The question in ethics, about the goodness of an action, and in aesthetics, about the beauty of a face, is whether the characteristics of the action, the lines and colours of the face, are like the arrangement of particles: a symptom of goodness, or of beauty. Or do they constitute them? a cannot be a symptom of b unless there is a possible independent investigation of b. (
IV
On the first view (i), beauty is a quality all beautiful things have in common: ' an ingredient in beautiful things' (M, p. 332).
Wittgenstein hints that the idea of a common ingredient is mistaken by pointing to the difference between colours and pigments (ibid.). The point is perhaps even clearer in the culinary case, where the word 'ingredient' is naturally used. Two dishes may have a taste note in common -for example, they both taste of vanilla -without having any actual ingredients in common. 'Vanilla flavour' ice cream is typically not made from real vanilla.
But even a certain perceptible quality, produced by whatever ingredients, is clearly not what all beautiful things have in common. Just as not all delicious food tastes of vanilla, or whatever particular taste you care to mention, not all beautiful paintings contain certain colours or shapes. Less crude is the idea that what beautiful things have in common are neither particular ingredients nor a particular perceptible quality, such as a colour, but a second-order quality, for example, a relational quality between first-order qualities. Thus it was suggested (by Aristotle) that beautiful things show order and symmetry or (by Francis Hutcheson) that beauty is 'uniformity amidst variety'. But it is easy to show that such definitions won't do. They are either too vague and inclusive (virtually everything can be said to display some uniformity amidst variety) or fail to cover all instances of beauty. As Kant had already concluded in 1790, no such descriptive definition of 'beautiful' seems possible. Beauty cannot be identified with any particular set of perceptible qualities. So, can I be aware of all the details of a painting and yet not know whether it's beautiful or not? Yes, because 'no arrangement is beautiful in itself. The word "beauty" is used for a thousand different things' (AWL,. 10 The word 'beautiful' is applied in extremely heterogeneous domains (M, p. 333): the human face, a landscape, a painting, a piece of music, even a mathematical proof, can all be beautiful, yet it is hard to see how they might all display the same perceptible quality or feature of qualities. Certain arrangements of shapes and colours are called beautiful, but not in themselves: only in a suitable context. In another context the same arrangement might be rejected as inappropriate.
V
That might suggest the other option: a dispositional analysis of the concept of beauty. As we aren't able to identify beauty with any specifiable (kind of) arrangement of shapes and colours -just as ' elastic' doesn't mean having a specific molecular structure -it would appear that we have to accept the other possibility, namely that such perceptible features are only a symptom of beauty. That is, it may be a well-supported empirical generalisation that whenever something has a certain arrangement of perceptible features it will be found beautiful. But then, there must be an independent criterion of what it actually means for something to be beautiful, corresponding to the possibility of bending a rod as what constitutes its elasticity.
In the following lectures, Wittgenstein considers the idea that beauty is what gives us a certain feeling, say, pleasure (M, p. 339). Then aesthetics would turn out to be a branch of psychology: 'in comparing musical arrangements, for example, one [would be] making a psychological experiment to determine which produces the more pleasing effect' (AWL, p. 38).
Wittgenstein offers six reasons why such a view is mistaken: (i) Aesthetic discussions are about features of the objects perceived, not about our feelings (M, p. 334). (ii) If we were interested in the psychological question as to whether particular features cause certain independently specifiable sensations, it wouldn't be enough to perceive the features and note our sensations (as you can't feel the causation); we would require series of experiments (M, p. 335). (iii) We don't treat works of art as mere causes: means to hedonic ends, but as ends in themselves (M, pp. 339-41). (iv) To the extent to which they can be said to give us pleasure, it is a specific pleasure that cannot be identified independently of our experience of the work in question and is incommensurable with other pleasurable aesthetic experiences. There is no meaningful question of comparing the amount of pleasure derived from entirely different art forms (M, p. 339). One can of course use a locution such as 'it gives me pleasure' in the sense of 'I like (to do) it'. In that sense, any work of art I care to look at or appreciate can be said to 'give me pleasure'. But then to say so is a mere 'tautology' (AWL,p. 38;M,pp. 335,339) and no longer a substantive psychological claim. By the word 'pleasure' in such a locution we don't mean an independently identifiable sensation. As noted above, earlier in the term Wittgenstein had presented his idea of a family-resemblance concept, which he also applied to the word 'good', speaking of ' a transition between similar things called "good"' (AWL, p. 33). However, the discussion of the word 'beautiful' leads him to a different account. A family-resemblance concept applies to different things that do not all share the same set of defining features, and yet that is not a case of polysemy. The word 'game' is applied to football in exactly the same sense in which it can also be applied to chess or hide-and-seek. Indeed, it is exactly the point of a family-resemblance concept that its one meaning has the flexibility to make it applicable to fairly heterogeneous instances. The word 'beautiful', by contrast, is not only applied to different kinds of objects, it is used in different language games, with different meanings (M, p. 338). 12 First, there is semantic changeability due to the word's being used attributively (a point made earlier about the word 'good'): 'The words "beautiful" and "ugly" are bound up with the words they modify, and when applied to a face are not the same as when applied to flowers and trees' (AWL, p. 35). Furthermore, it is not only that the word 'beautiful' functions differently when applied to very different kinds of objects, say, a smell and a mathematical proof; even when applied to the same kind of object it may, depending on the context, have a different role and meaning: 'The phrase "beautiful color", for example, can have a hundred meanings, depending on the occasion on which we use it' (AWL, p. 35). A colour that is beautiful as a feature of a flower arrangement or as the colour of a tie may not be found beautiful on a monochrome wallpaper.
VII
Why, on Wittgenstein's account, do different kinds of objects and different contexts make for different meanings of the word 'beautiful'? Because we apply it for different reasons, according to different criteria, as shown by the way we justify our applications in discussion.
In different cases e.g. beauty of a face, of a flower, you are playing quite different games; & this is shewn by the way in which you can discuss whether the face is beautiful or not.
If you want to know how "beautiful" is used: ask what sort of discussion you could have as to whether a thing is so. (M, p. 333) Here Wittgenstein returns to the key idea with which he began that term's lecture series: verificationism. Possible methods of verification are significant for determining the meaning of a statement, and thus also the meaning of a predicate. To understand it is to understand under what conditions it is correctly applied: the criteria by which its application is to be assessed; so in the case of 'beautiful'. He gives the following examples: To shew ambiguity (& more than this), suppose (1) you are calling a smell beautiful; & can say no more than 'I like the smell of lilac', 'I don't care particularly about it' (2) you are talking about arrangement of flowers in a bed: here you can say much more. This shews that 'beautiful' means something quite different in 2 cases. (M, p. 335) First, there is the most elementary case of an aesthetic judgement: a single sense impression without any discernible structure, a smell, or a colour looked at in isolation (M, p. 337). Where there is no further question of how to combine it with other things, there is little one can say to justify one's judgement. Here, 'This is beautiful' doesn't amount to much more than: 'I like it.' As Wittgenstein later remarked to G. F. Stout, the question of the possibility of verification is illuminating even where it calls forth a negative answer. Far from ruling an unverifiable statement out as nonsensical (as some dogmatic and polemical versions of verificationism suggest), it simply indicates another possible language game, different from that of empirical descriptions. 13 The second example, a flower arrangement, by contrast, has a describable structure and various details, such as colour contrasts or symmetries, to which one can point to say what exactly it is about it that one finds appealing.
In the same lecture Wittgenstein mentions two objects of more advanced aesthetic discussion: musical harmony (M, p. 336) and the design of a door (M, p. 337). Obviously, with works of music or architecture aesthetic discussion can become a lot more sophisticated. Harmony in Western music is a complicated system of rules and conventions that can be invoked to justify an aesthetic judgement, which therefore can be far more thoughtful and richer in content than the praise of a flower arrangement, let alone one's predilection for a certain smell.
VIII
However, at this point, a little inconsistency appears to creep into Wittgenstein's presentation. He promised us different meanings of the word 'beautiful', due to different kinds of discussion of the word's applications, but then, when it comes to more interesting aesthetic discussions, in the realm of music or architecture (and the same could be said for art or literature), he observes that the word 'beautiful' tends not to be used: 'Discussions about the design of a door don't mention "beautiful". They say such things as "It's top-heavy"' (M, p. 337).
Again, a book on harmony is unlikely to say much about 'beauty', rather, it provides rules for what, in a certain system, is deemed correct and what incorrect (M, p. 336). In fact, the word 'beautiful' is scarcely ever used 'in an aesthetic controversy' (M, p. 340; AWL, p. 36). It is typically used to express the kind of aesthetic preference which does not lead to any discussion. Your finding a certain smell beautiful tells me more about you than about the smell. And if you explain to me that you find a certain flower bed particularly beautiful because of the symmetrical arrangement and the contrast of white tulips and purple columbines in it, I may or may not agree; but if I don't -if I can't see much in it myself -I will not on that account begin an argument. In such cases we can give reasons why we like something, by way of explaining what exactly it is about the thing we like, but we couldn't give any reasons to show that different preferences would be wrong. Later Wittgenstein asks: 'Is this flower beautiful or not? What test is there for [the] right answer?' (M, p. 339). The reply to this latter question is clearly: none.
Often, Wittgenstein remarks, the word 'beautiful' serves merely to draw attention to something that appeals to us (M, p. 340; AWL, p. 36), rather than to make any seriously debatable claim.
Serious aesthetics begins where there is room for aesthetic discussion and controversy, and that can only be where we have standards of correctness, such as those introduced by musical theory, and where, therefore, the question of beauty is replaced by the question of correctness (AWL, p. 36). So it was a bit hasty and infelicitous for Wittgenstein to introduce the investigation of the concept of beauty as if it were the key concept of aesthetics. 14 Instead, it would have been better to say that there are different language games of aesthetic appraisal, only some of which, the more primitive ones, involve the word 'beautiful'.
IX
Is beauty a quality? At one point Wittgenstein suggests that it is a confusion to say so, explaining that the attribution of a quality must be a contingent matter. A table has the quality of being brown, as it might have been red instead (M, p. 333). That is to say the bearer of a quality must be identifiable as the same even without it. So being a physical object Wittgenstein would presumably not call a ' quality', for it seems that you cannot make sense of a table still being the same while no longer being a physical object. Or can you? For Macbeth it was a real question whether the dagger he saw was a physical object or a hallucination. Our criteria of identity are very flexible. Even things like daggers or tables can be identified without taking their physicality for granted. And it is certainly possible to identify a beautiful object in such a way that its beauty is not taken for granted. My back garden, for example, might have possessed or lacked beauty depending on my gardening skills and efforts. And it is not plausible to argue that the identity of an object must be determined by the totality of its physical properties, for then even the colour of a table could not be regarded as a contingent quality.
Wittgenstein's concern was whether 'shapes & colours' determined beauty, and he continued to argue that in that case calling them 'beautiful' would not be a contingent matter, hence beauty shouldn't be regarded as a quality (M, p. 333). But the determinate link would only be between certain shapes and colours, and beauty, not between a physical object and beauty. The painting or building or garden might well have lacked beauty, because they might have lacked those specific shapes and colours. So even on Wittgenstein's account, beauty could still be regarded as a (contingent) quality of a physical object, even if it was logically implied by that object's shapes and colours. (Likewise, Jones has the contingent property of being a bachelor, even though it follows logically from his being an unmarried man.) However, as we saw, Wittgenstein denies that an object's shapes and colours determine its beauty (or aesthetic qualities), because aesthetic judgements are context dependent: dependent on what kind of object displays those shapes and colours, but also on what norms and ideals constitute our language game. So, if beauty is not determined by shapes and colours alone, the question arises: 'Is there, when I know what the shapes & colours are like, another investigation as to whether it is beautiful?' (M, p. 333) That sounds a little paradoxical, for what could one possibly investigate in aesthetics if not the perceptible qualities of an object, in the visual case: shapes and colours. But consider the question whether a certain action is legal. Is the answer determined solely by the physical features of the action? Obviously not. So where else do we look? Apart from having to take into account various non-physical circumstances (for example, whose property something was or whether some relevant permission had been granted), we have to consider where the action was performed and investigate what laws were in force there. There is nothing paradoxical in saying that legality is a quality of an action, although it is a relational and conventional quality.
Similarly, there should be no objection to speaking of aesthetic qualities, which are not qualities ' of a distribution of colours in visual space' (M, p. 338). The difference is, however, that the relevant criteria for aesthetic judgements are rarely as straightforwardly codified as laws. There is no book of regulations where I could look up if a given flower arrangement is beautiful or if a certain musical transition is effective. That is why talk of ' another investigation' sounds somewhat odd. But then again, one only needs to remind oneself of our ways of explaining and supporting such judgements to see that there can indeed be ' another investigation', but it would be discursive and clarificatory, rather than empirical (see M, p. 342).
X
What does Wittgenstein say about the standards and criteria that regulate aesthetic language games? On the 15th May 1933 he talks about the ' concept of an ideal' (M, pp. 339-41). An aesthetic ideal is comparable to a law and 'Aesthetic discussion is like discussion in a court of law' (M, p. 351). That is to say, it is not trying to settle the general evaluative question what in a given artistic genre is to count as right or wrong, just as in a court of law there is no discussion as to whether theft is a criminal offence; the question is merely whether in a given case a theft has been committed. Likewise, an aesthetic discussion is not trying to establish norms but to apply them in a given case. Aesthetic norms or criteria are presupposed, like the law in a judicial hearing.
And yet what corresponds to the law in aesthetics; an aesthetic 'ideal' is far more elusive. Typically, it is not laid down anywhere. It does not exist independently of our discussions as a kind of canonical sample that could be produced to settle a dispute. Rather, it is only an abstraction from our practice: 'To find what ideal we're directed to, you must look at what we do: the ideal is the tendency of people who create such a thing' (M, p. 341). An aesthetic ideal has no independent existence and authority, it lies only in a certain consistency of our reactions and preferences, and it changes over time (M, p. 340), especially as there is usually no social enforcement. An example of such an aesthetic ideal is the view Wittgenstein attributes to Bach: that ' a piece mustn't slink away like a thief', meaning that a piece must continue and end in at least the same number of voice-parts (M, p. 352). By contrast, in Bach's aesthetics there appears to be no objection to augmenting the number of voice-parts, as occasionally a piece having three parts at the beginning ends in four.
Naturally, there is no sharp distinction between widely accepted genre conventions and the preferences of only a small group of practitioners and connoisseurs. At the latter end of the spectrum there are individual tastes, for instance: 'you always prefer slightly stronger contrasts, I always prefer slightly weaker [ones]' (ibid.). But then, the taste of an influential individual, an artist or art critic, may appeal to others and become widely accepted.
The concept of an aesthetic ideal or taste is developed further in Wittgenstein's 1938 lectures, where he speaks of a 'cultured taste' (LC). 15 But already in 1933 he emphasises that such a taste or 'ideal' is presupposed in aesthetic discussion and not itself argued for. Therefore, taking an evaluative stance for granted, aesthetics is descriptive (M, p. 342), rather than a matter of taste (M, p. 346). 'Whenever we get to the point where the question is one of taste, it is no longer aesthetics' (AWL, p. 38; see M, p. 347). Thus, the supreme principle of Wittgenstein's aesthetics turns out to be De gustibus non est disputandum. For a fruitful aesthetic discussion to be possible, a considerable agreement in taste must be presupposed. 16
XI
Another interesting development in the 1933 lectures is this. As we saw, Wittgenstein began the discussion of aesthetics in a fairly conventional way with considering the word 'beautiful' and the traditional question of how to justify calling something beautiful. He soon realised that in more advanced regions of aesthetics the word 'beautiful' ceases to play an important role. Then he observed that interesting aesthetic discussions are largely descriptive, because relevant standards and criteria are presupposed, rather than argued for. Finally, he seems to have realised that providing support for an aesthetic appraisal according to some given criteria was not the only, and perhaps not even the main, focus of aesthetic discussion. More interesting to him became the idea of a puzzle or perplexity in aesthetics.
Typically, our concern is not to demonstrate why something is beautiful or aesthetically successful or correct according to certain standards, but rather to clarify and characterise our distinctive impression of it. It may be a matter of finding the right word to characterise the expressiveness of a melody (M, p. 348) or of hitting on a good simile to capture what is striking about a novel (M, p. 356). This idea of an aesthetic puzzle and its explanation -which he thought similar both to philosophical and to mathematical problems (M, p. 358) -was also merely introduced and briefly sketched in 1933 and explored in more detail in the 1938 lectures (LC). 17
| 6,066.2 | 2020-04-15T00:00:00.000 | ["Art", "Philosophy"] |
Nature and Control of Shakeup Processes in Colloidal Nanoplatelets
Recent experiments suggest that the photoluminescence line width of CdSe and CdSe/CdS nanoplatelets (NPLs) may be broadened by the presence of shakeup (SU) lines from negatively charged trions. We carry out a theoretical analysis, based on effective mass and configuration interaction (CI) simulations, to identify the physical conditions that enable such processes. We confirm that trions in colloidal NPLs can present SU lines up to one order of magnitude stronger than in epitaxial quantum wells, stimulated by dielectric confinement. For these processes to take place, trions must be weakly bound to off-centered impurities, which relax symmetry selection rules. Charges on the lateral sidewalls are particularly efficient to this end. We propose that the broad line width reported for core/shell CdSe/CdS NPLs may relate not only to SU processes but also to a metastable spin triplet trion state. Understanding the origin of SU processes opens paths to the rational design of NPLs with narrower line width.
Results
We analyze the emission spectra of trions in core-only and core/shell NPLs. Negative trions are studied unless otherwise noted, as they are the most frequently reported species in these structures, but the conclusions do not depend on the sign of the charged exciton (see Fig. S2 in the supporting information, SI). Once the general behavior of SU processes in these systems is understood, we discuss how our conclusions fit the interpretation of different experimental observations and the practical implications of our findings.
Core-only NPLs
We start by studying core-only CdSe NPLs. The NPLs are chosen to have 4.5 monolayer (ML) thickness and a lateral size of 20 × 20 nm², for similarity with the core dimensions of Ref. 20. They have a pronounced dielectric mismatch with the organic environment, which we model with ε_in = 6 and ε_out = 2 as dielectric constants inside and outside the NPL, unless otherwise stated. 28,29 The presence of few-meV spectral jumps in photoluminescence experiments 20 suggests that the trion is subject to the influence of carriers temporarily trapped on the surface. 19,30 To model this phenomenon, a fractional point charge is placed on the surface, with charge Q = e Q_X (|Q_X| ≤ 1 and e the full electron charge). The fractional value of Q_X accounts for the screening of the trapped charge (e.g. a hole) by the trap defect itself (e.g. a surface dangling bond). 31 Two scenarios are considered: a charge centered on the top facet (Q_top) and an off-centered charge located along the edge of a lateral facet (Q_edge). The latter setup is suggested by studies showing that edge and vertex atoms in CdSe structures have weaker binding to oleate ligands. 32 The two systems are represented in Figure 1a and 1b. SU satellites do appear for the centered top charge (Fig. 1c), but their strength is two orders of magnitude smaller than that of the fundamental transition (main line). This is similar to the case of epitaxial quantum wells. [22][23][24][25] Stronger SU replicas are however obtained for charges located on the lateral sidewall, provided the charge is attractive (acceptor impurity) and binding to the trion is moderately weak, see Fig. 1d. For Q_edge = 0.4 (marked with a star in the figure), the SU peak reaches ∼25% of the main peak height. This ratio is about 20 times higher than in epitaxial quantum wells, and it holds despite the Giant Oscillator Strength enhancing the band edge recombination, [5][6][7]29 which suggests that SU satellites also benefit from this phenomenon. For Q_edge > 0.4, however, the SU peak intensity is lowered again and the energy splitting (redshift) with respect to the main line increases.

Figure 1 caption (partial): (c,d) Corresponding X⁻ emission spectrum for charge strength Q = Q_X e. The arrows point at the SU satellites (dotted lines are guides to the eye). The highest SU peak is observed for off-centered acceptor charges weakly bound to the trion (Q_edge = 0.4, marked with a star in (d)). The spectra are normalized to the intensity of the fundamental transition at Q_X = 0, and offset vertically for clarity. The insets for Q_edge = 0.7 in (d) show amplified SU peaks.
To understand the origin of strong SU peaks when trions bind to lateral surface acceptors, beyond the full numerical calculation of Fig. 1, in Fig. 2a and 2b we compare sketches of the SU processes in the absence and presence of an attractive edge charge. Within effective mass theory, the conduction band and valence band energy levels of (non-interacting) electrons and holes can be described as particle-in-the-box states, with quantum numbers (n_x, n_y, n_z). It is useful, however, to label the states by their symmetry (irreducible representation). When Q_edge = 0, because the NPL has a square shape, the point group is D_4h. When Q_edge ≠ 0, the electrostatic potential yields a symmetry descent to C_s.
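To make the underlying level structure concrete, the following sketch (not the authors' code) lists the lowest particle-in-the-box levels of a square 20 × 20 nm² box; the effective mass, thickness and hard-wall box model are placeholder assumptions used only for illustration.

```python
import numpy as np
from itertools import product

# Hypothetical illustration (not the authors' code): lowest particle-in-the-box
# levels for a square NPL core, labelled by (nx, ny, nz).  The effective mass,
# thickness and hard-wall box model are placeholder assumptions.
HBAR = 1.054571817e-34       # J*s
M0 = 9.1093837015e-31        # kg
EV = 1.602176634e-19         # J
m_eff = 0.2 * M0             # assumed electron effective mass
Lx = Ly = 20e-9              # lateral size, 20 x 20 nm^2 (as in the text)
Lz = 1.4e-9                  # ~4.5 ML thickness (rough estimate)

def level(n):
    nx, ny, nz = n
    return (HBAR**2 * np.pi**2 / (2 * m_eff)) * ((nx / Lx)**2 + (ny / Ly)**2 + (nz / Lz)**2)

levels = sorted((level(n), n) for n in product(range(1, 4), repeat=3))
e0 = levels[0][0]
for energy, n in levels[:6]:
    print(n, f"{(energy - e0) / EV * 1e3:7.2f} meV above the ground level")
# In a square box (Lx == Ly) the (2,1,1) and (1,2,1) states are degenerate,
# forming the lateral doublet of D4h; an off-centred charge lowers the
# symmetry to Cs and splits them.
```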
As a consequence, degeneracies are lifted and additional states with the same symmetry as the ground orbital (A′) are obtained. This is important because after electron-hole recombination, the excess electron can only be excited to an orbital with the same symmetry as the initial one (vertical arrows in Fig. 2a and 2b). Therefore, lowering the system symmetry opens new channels for SU processes. Furthermore, these can involve low-energy orbitals, which have fewer nodes and will then have larger overlap with the trion ground state, as we shall see below. Both the number and the intensity of the SU processes are in principle enhanced. By contrast, a centered charge on the top surface barely affects the system symmetry, which remains high (C_4v), and SU processes are only slightly stronger than in the Q_edge = 0 case. The qualitative reasoning above can be substantiated with a CI formalism on the basis of independent-particle (non-interacting) electron and hole states, which has the additional advantage of giving intuitive insight into how Coulomb interactions affect the likelihood of SU processes. We consider that the transition rate from the trion ground state |GS_{X⁻}⟩ to an electron spin-orbital |f_e⟩ is proportional to: 33

P_{GS→f_e} ∝ |⟨f_e| P̂ |GS_{X⁻}⟩|²,   (1)

where P̂ is the dipolar transition operator, P̂ = Σ_{i_e, i_h} ⟨i_e|i_h⟩ ê_{i_e} ĥ_{i_h}, with ê_{i_e} and ĥ_{i_h} annihilation operators for independent electron and hole spin-orbitals |i_e⟩ and |i_h⟩, respectively.

Figure 2 caption (partial): (a,b) Sketch of SU processes in NPLs with (a) and without (b) an edge charge. Labels on the left are (n_x, n_y, n_z) quantum numbers for the (independent-particle) energy levels. Labels on the right are the corresponding irreducible representations. The surface charge lowers the point-group symmetry from D_4h to C_s, lifting degeneracies and enabling new channels for SU transitions (vertical arrows). (c,d) Two main configurations |m_{X⁻}⟩ in the CI expansion of |GS_{X⁻}⟩, with and without edge charge. Thin (thick) arrowheads denote electron (hole) spin. Only when Q_edge ≠ 0 is a SU process expected. (e) Energy splitting between |1_{X⁻}⟩ and |2_{X⁻}⟩ at an independent-particle level. (f) Average value of electron-electron repulsion and (g) electron-hole attraction in configurations |1_{X⁻}⟩ and |2_{X⁻}⟩.

We describe the trion ground state with a CI expansion, |GS_{X⁻}⟩ = Σ_m c_m |m_{X⁻}⟩, where |m_{X⁻}⟩ is a trion configuration, |m_{X⁻}⟩ = ê†_{r_e} ê†_{s_e} |0⟩_e ĥ†_{t_h} |0⟩_h, with ê†_{r_e} and ĥ†_{t_h} creation operators, |0⟩_e and |0⟩_h the vacuum occupation vectors of electron and hole, and c_m the coefficient in the expansion. Inserting P̂ and |GS_{X⁻}⟩ into Equation (1), one obtains an expression (Equation (3)) for the transition rate in terms of the CI coefficients c_m and the electron-hole overlaps. In SU processes, |f_e⟩ is an excited spin-orbital. It then follows from Equation (3) that such a transition will only take place if |GS_{X⁻}⟩ contains at least one configuration |m_{X⁻}⟩ in the CI expansion where one electron is in the excited spin-orbital and the other electron has finite overlap with the hole ground state (|s_e⟩ = |f_e⟩ and ⟨r_e|t_h⟩ ≠ 0, or |r_e⟩ = |f_e⟩ and ⟨s_e|t_h⟩ ≠ 0).
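The selection rule expressed by Equation (3) can be illustrated with a toy calculation. The sketch below is a hypothetical, simplified implementation (a two-orbital basis with invented overlaps and CI coefficients; normalization factors are ignored), not the authors' CI code.

```python
import numpy as np

# Minimal sketch (not the paper's CI code) of the recombination amplitude out
# of a CI-expanded trion ground state.  A configuration is (r_e, s_e, t_h):
# two electron orbitals and one hole orbital.  overlap[i, j] stands for the
# electron-hole envelope overlap <i_e|j_h>; all numbers are invented.
overlap = np.array([[0.9, 0.1],              # <s_e|s_h>, <s_e|p_h>
                    [0.1, 0.6]])             # <p_e|s_h>, <p_e|p_h>

# CI expansion of |GS_X->: list of (coefficient, (r_e, s_e, t_h)); 0 = s, 1 = p
ground_state = [(0.95, (0, 0, 0)),           # |1>: both electrons and hole in s
                (0.30, (1, 0, 0))]           # |2>: one electron promoted to p

def transition_amplitude(final_electron):
    """Amplitude for recombination that leaves the surviving electron in
    |final_electron>.  final_electron = 0 -> main line, 1 -> SU satellite."""
    amp = 0.0
    for c, (r, s, t) in ground_state:
        if s == final_electron:              # electron r recombines with hole t
            amp += c * overlap[r, t]
        if r == final_electron:              # electron s recombines with hole t
            amp += c * overlap[s, t]
    return amp

main = transition_amplitude(0) ** 2
shakeup = transition_amplitude(1) ** 2
print(f"SU / main intensity ratio ~ {shakeup / main:.3f}")
# With the coefficient of |2> set to zero the SU amplitude vanishes,
# reproducing the selection rule discussed around Equation (3).
```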
The larger the weight of this configuration, |c_m|², the more likely the SU process. It is worth noting that in the strong confinement limit, the trion ground state is well described by a single configuration where all carriers are in the lowest-energy spin-orbitals (configuration |1_{X⁻}⟩), so that SU transitions are essentially switched off. NPLs constitute an ideal system in this regard, because they combine weak confinement in the lateral direction with strong Coulomb interactions. 34,35 Hereafter, we refer to this condition (c_m ≠ 0 for m > 1) as Coulomb admixture.
The role of Coulomb correlation and symmetry breaking in activating SU processes can be illustrated, in the simplest approximation, by considering the two lowest-energy configurations of the trion ground state, |1_{X⁻}⟩ and |2_{X⁻}⟩. In Fig. 2c and 2d we depict such configurations in the absence and presence of an edge charge, respectively. These can be expected to be the two most important configurations in the full CI expansion. Notice that the two configurations must have the same symmetry for Coulomb interaction to couple them. Because the lowest-energy configuration, |1_{X⁻}⟩, is always totally symmetric, so must be |2_{X⁻}⟩. Thus, when Q_edge = 0 (D_4h group), the electronic configuration |2_{X⁻}⟩ involves promoting electrons into the degenerate E_u orbitals (Fig. 2c). The recombination of the E_u electrons with the hole, which stays in an A_1g orbital, is then symmetry forbidden (⟨r_e|t_h⟩ = ⟨s_e|t_h⟩ = 0 in Eq. (3)). By contrast, when Q_edge ≠ 0 (C_s group), |2_{X⁻}⟩ is formed by a monoexcitation where one electron is placed in the (n_x, n_y, n_z) = (2, 1, 1) orbital, which also has A′ symmetry (Fig. 2d). The hole can then recombine with the ground-orbital electron, as both have A′ symmetry (⟨r_e|t_h⟩ ≠ 0 or ⟨s_e|t_h⟩ ≠ 0 in Eq. (3)), and leave the excited electron as the final state. This constitutes a SU process. Because both the SU and the fundamental transition rely on the recombination of the same electron-hole pair (same overlap integral, e.g. ⟨r_e|t_h⟩), the ratio between SU and fundamental radiative rates can be approximated as |c_2|²/|c_1|², i.e. it is set exclusively by the degree of Coulomb admixture.
One can guess the requirements that maximize |c_2|² by examining which conditions energetically favor |2_{X⁻}⟩ over |1_{X⁻}⟩. These include: (i) a small energy splitting between the two configurations at an independent-particle level, ∆_sp in Fig. 2d, (ii) weaker electron-electron repulsion (V_ee) and (iii) stronger electron-hole attraction (V_eh) in |2_{X⁻}⟩ as compared to |1_{X⁻}⟩. When the off-centered charge is switched on, ∆_sp rapidly decreases (see Fig. 2e) because the symmetry descent turns one of the E_u (p-like) electron orbitals into an A′ (s-like) one. However, the surface charge brings about electrostatic confinement, and hence ∆_sp increases again soon after. As for inter-electron repulsion, ⟨1_{X⁻}|V_ee|1_{X⁻}⟩ increases more rapidly than ⟨2_{X⁻}|V_ee|2_{X⁻}⟩ (see Fig. 2f) because the former involves placing the two electrons in identical orbitals, while the latter does not. Last, ⟨1_{X⁻}|V_eh|1_{X⁻}⟩ is rapidly quenched (see Fig. 2g) because it involves the ground orbitals of electron and hole -(1, 1, 1)_e and (1, 1, 1)_h-, which dissociate rapidly under an external charge. ⟨2_{X⁻}|V_eh|2_{X⁻}⟩ stays strong up to Q_edge ∼ 0.3 because it involves the (2, 1, 1)_e orbital, which is spatially more extended and thus keeps significant overlap with the (1, 1, 1)_h hole. Figs. 2e-f further show that Q_edge > 0.3−0.4 is unfavorable for SU processes, because the electrostatic potential increases lateral quantum confinement (∆_sp increases) and because electrons and holes become increasingly separated. At Q_edge ≈ 0, the two orbitals are quasi-orthogonal. As a result, Coulomb interaction cannot couple configurations |1_{X⁻}⟩ and |2_{X⁻}⟩, and c_2 ≈ 0. This is why the two-electron charge density closely resembles the (1, 1, 1)_e orbital. SU processes are not expected in this case.
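A minimal way to see how these conditions control the admixture is a two-configuration CI model. The sketch below diagonalizes a 2×2 Hamiltonian with an effective splitting and an invented Coulomb coupling; all numbers are placeholders chosen only to mimic the qualitative trend with Q_edge, not outputs of the paper's calculations.

```python
import numpy as np

# Toy two-configuration CI model (a sketch, not the paper's method).
# |1> has both electrons in the ground orbital, |2> has one electron promoted.
# delta is the effective splitting E2 - E1 (single-particle plus Coulomb
# corrections); W is the Coulomb matrix element coupling |1> and |2>.
# All numbers (meV) are invented for illustration only.
def admixture(delta, W):
    vals, vecs = np.linalg.eigh(np.array([[0.0, W], [W, delta]]))
    c1, c2 = vecs[:, 0]                  # ground-state CI coefficients
    return (c2 / c1) ** 2                # ~ SU / fundamental intensity ratio

cases = {0.0: (25.0, 0.0),   # high symmetry: coupling forbidden
         0.4: ( 4.0, 2.0),   # weakly bound edge charge: small splitting
         0.8: (20.0, 2.0)}   # strong charge: splitting grows again
for Q, (delta, W) in cases.items():
    print(f"Q_edge = {Q:.1f}:  |c2/c1|^2 ~ {admixture(delta, W):.3f}")
# The ratio is largest when the effective splitting is small and the coupling
# finite, mirroring conditions (i)-(iii) in the text.
```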
At Q_edge ≈ 0.4, symmetry lowering and energetic considerations enable efficient Coulomb coupling. The oval shape of the two-electron charge density reflects a significant contribution from (2, 1, 1)_e to |GS_{X⁻}⟩ (i.e. |c_2| > 0). At the same time, the electron (1, 1, 1)_e orbital and the hole ground state have sizable overlap. This is an optimal situation for the transition P_{GS→(2,1,1)_e} to show up as a SU process, according to Equation (3). Further increasing Q_edge separates the (2, 1, 1)_e electron orbital from the hole. Coulomb attraction is then weaker, making c_2, and consequently P_{GS→(2,1,1)_e}, small again.

Figure 4 caption: Normalized X⁻ emission as a function of the environment dielectric constant. With increasing dielectric contrast, the SU peak increases and becomes more redshifted. For every value of ε_out, the value of Q_edge that maximizes SU transitions is shown. In all cases, ε_in = 6.
We have argued above that strong Coulomb admixture of configurations facilitates the appearance of SU processes. A distinct feature of colloidal NPLs when compared to epitaxial quantum wells is the presence of a pronounced dielectric contrast with the organic ligands surrounding the NPL, which enhances Coulomb interactions by effectively reducing the dielectric screening in the system. 29,34,36 To study the influence of this phenomenon on SU transitions, in Figure 4 we compare the trion emission spectrum for different values of the environment dielectric constant ε_out, while fixing that of the NPL to the high-frequency CdSe value, ε_in = 6. For the sake of comparison, the emission spectrum is normalized so that the band edge peak has the same intensity in all cases. Also, we have selected the value of Q_edge that maximizes the relative size of the SU peak in each case. Because ε_out screens the surface-charge electrostatic field, larger Q_edge values are needed when ε_out increases. The figure evidences that lowering ε_out increases the SU peak height and energetic redshift. For typical ligands of CdSe NPLs (e.g. oleic acid), ε_out ∼ 2. 29,37 We then conclude that dielectric confinement makes SU processes in colloidal NPLs more conspicuous.
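As a rough illustration of why a lower ε_out strengthens the interaction, one can evaluate a thin-slab screened Coulomb potential. The sketch below uses the Rytova-Keldysh form, which is not the potential used in the paper's CI calculations but a common approximation for dielectrically confined slabs; the thickness, electron-hole distance and prefactor convention are assumptions.

```python
import numpy as np
from scipy.special import struve, y0

# Illustrative sketch (not the interaction used in the paper): Rytova-Keldysh
# in-plane potential of a thin dielectric slab.  One common convention is
#   V(rho) = e^2/(4*pi*eps0) * pi/(2*eps_out*r0) * [H0(rho/r0) - Y0(rho/r0)],
# with screening length r0 = d*eps_in/(2*eps_out) for identical media above
# and below the slab.  Thickness d and distance rho are assumed values.
E_CHARGE = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12         # F/m
d = 1.4e-9                      # slab thickness (~4.5 ML, rough estimate)
eps_in = 6.0

def keldysh_meV(rho, eps_out):
    r0 = d * eps_in / (2.0 * eps_out)
    x = rho / r0
    v = (E_CHARGE**2 / (4 * np.pi * EPS0)) * np.pi / (2 * eps_out * r0) \
        * (struve(0, x) - y0(x))
    return v / E_CHARGE * 1e3   # J -> meV

rho = 3e-9                       # 3 nm electron-hole separation
for eps_out in (1.0, 2.0, 4.0, 6.0):
    print(f"eps_out = {eps_out:.0f}:  V(3 nm) ~ {keldysh_meV(rho, eps_out):6.0f} meV")
# The interaction strengthens as eps_out decreases, i.e. dielectric confinement
# increases the Coulomb admixture that feeds the SU satellite.
```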
Core/shell NPLs
We next consider heterostructured core/shell NPLs. The first case under study is CdSe/CdS NPLs. [12][13][14]38 The NPLs have the same CdSe core as in the previous section and 6 ML thick CdS shells on top and bottom (see inset in Figure 5a). In general, the behavior of SU replicas is found to be analogous to that of core-only NPLs. An off-centered acceptor impurity is needed to yield sizable SU replicas, with an optimal value of Q_edge maximizing the relative size of the SU peak. Figure 5a shows the emission spectrum of X⁻ for the optimal Q_edge value in CdSe/CdS NPLs (green line) against CdSe core-only NPLs (black, dashed line). One can see that the SU replica of the CdSe/CdS structure is again significant (11% of the main transition), but less pronounced than in the core-only structure (26%).

Understanding the conditions which promote SU processes allows us to devise structures where their impact would be maximal. In Fig. 6 we consider a core/shell NPL with the same dimensions as before, but CdSe/CdTe composition. The NPL is chosen to be charged with a positive trion (X⁺). Because of the type-II band alignment, the electron stays in the CdSe core and the holes in the CdTe region, as observed in related core/crown structures. 39,40 In the absence of external charges, the first two hole orbitals are (1, 1, 1)_h and (1, 1, 2)_h, i.e. the symmetric (A_1g) and antisymmetric (A_1u) solutions of the double-well potential, respectively, which are almost degenerate because tunneling across the core is negligible (i.e. ∆_sp → 0).
Switching on a negative surface charge, Q_edge < 0, lifts the inversion symmetry so that both orbitals acquire A′ symmetry and can be Coulomb coupled. The admixture between configurations |1_{X⁺}⟩ and |2_{X⁺}⟩, depicted in Fig. 6b, is then very strong. In the presence of the charge, the two hole orbitals tend to localize on opposite shell sides to remain orthogonal, as shown in Fig. 6c. This implies that configuration |1_{X⁺}⟩, which has two holes in the same orbital, has much stronger repulsion than configuration |2_{X⁺}⟩, which distributes the two holes on opposite sides of the core. This makes ⟨1_{X⁺}|V_hh|1_{X⁺}⟩ ≫ ⟨2_{X⁺}|V_hh|2_{X⁺}⟩.
Altogether, the small ∆_sp value and the large difference in hole-hole repulsion explain the strong admixture between configurations |1_{X⁺}⟩ and |2_{X⁺}⟩. As shown in Fig. 6a, this gives rise to SU peaks whose magnitude is almost as large as that of the fundamental transition (72% for Q_edge = −0.5).
Discussion
Our simulations show that SU processes can be expected for trions in core-only and core/shell NPLs, if off-centered impurities are present. We discuss here the potential relationship of this finding with experiments and practical implications.
Relationship with experiments
In core-only CdSe NPLs, the low temperature photoluminescence is thought to arise from subpopulations of excitons and negative trions. The charge that binds to the trion may well be trapped on a lateral facet (Fig. 1b). This possibility is suggested by studies showing that Z-type ligand desorption (and hence surface trap formation) in CdSe NPLs is more frequent on these facets, 44 and by the fact that CdSe/CdS core/crown NPLs generally improve the photoluminescence quantum yield as compared to core-only structures, despite having larger surfaces on top and bottom. 45 Because off-centered charges are needed to originate SU peaks, lateral charges are candidates to trigger such processes.
In core/shell CdSe/CdS NPLs, SU processes have also been proposed as the origin of multi-peaked fluorescence emission, and hence of the broadened line width. 20 Our simulations in Fig. 5a confirm that one can indeed expect a sizable SU peak in such structures. We note that earlier experimental studies had so far interpreted the line width broadening as a result of either SU processes 20 or of surface defects. 12 By showing that the second effect is a prerequisite for the first one, our study helps to reconcile both interpretations. Nonetheless, two remarkable disagreements are observed between our simulations and the measurements of Ref. 20.
First, the experiments show from 2 to 4 emission peaks, which are interpreted as the X⁻ fundamental transition plus up to three redshifted SU peaks. In our calculations, however, we fail to see more than one significant SU replica. Second, the highest-energy peak in the experiment is never the brightest one. This is inconsistent with our results and with earlier studies on epitaxial quantum wells and dots, where the highest-energy peak corresponds to the fundamental transition, which is the most likely recombination channel. [22][23][24][25][26] Tentatively, one may suspect that the large number of SU peaks in core/shell CdSe/CdS NPLs could be connected with the thick CdS shell (12 ML in Ref. 20), which makes surface defects more likely than in core-only structures. A significant presence of defects in these structures has been hinted at by studies showing that the long radiative lifetime is not due to electron delocalization but to the influence of impurities. 13 However, Coulomb interactions are weaker than in core-only structures (Fig. 5c,d), where only one SU peak has been measured. 21 It is then not surprising that, despite investigating different charge locations (Figs. S3, S6 and S7 in SI), conduction band-offset values (Fig. S4) and shell thicknesses (Fig. S5), we see at most one significant SU satellite.
Regarding the relative intensity of the peaks, as mentioned in the previous section, the highest-energy one (fundamental transition) is proportional to the weight of configuration |1_{X⁻}⟩ in the CI expansion, |c_1|², while subsequent (SU) peaks would be proportional to |c_2|², |c_3|², . . . Configuration |1_{X⁻}⟩ (all carriers in the ground orbital, Fig. 2c) is nodeless and hence naturally expected to be the dominant one, so the highest-energy peak is also the brightest one. We have not observed SU peaks exceeding the fundamental transition height despite considering different charge locations and shell thicknesses (see SI). Even in CdSe/CdTe NPLs, which constitute a limit case, SU peaks never exceed the height of the main transition, see Fig. 6a. An alternative explanation of the multi-peaked spectra is the simultaneous emission from trion singlet (S_e = 0) and metastable triplet (S_e = 1) states. To illustrate this point, in Figure 7a we show the calculated emission of X⁻ assuming equipopulation of S_e = 0 and S_e = 1 trion states. One can see that the number of sizable peaks in the spectrum ranges from two to four, depending on the strength of the surface charge, Q_edge. The origin of these peaks is summarized in the sketches of Fig. 7b and 7c. The singlet (Fig. 7b) can give rise to a fully radiative transition (s-R1) and a SU transition (s-SU), as described in the previous sections. In turn, the triplet (Fig. 7c) contributes additional transitions, among them the one labeled t-R1. Such an interpretation is consistent with several experimental observations. For example, because all peaks in Fig. 7a arise from the same NPL, they will experience simultaneous spectral shifts when surface impurities migrate. 20 Also, the hot trion emission is expected to vanish when the impurities are removed, as one of the triplet transitions becomes deactivated and t-R1 almost merges with the singlet emission, s-R1; see Fig. 7a for Q_edge = 0. This fits the transition from asymmetric to symmetric band shape as temperature increases. 12 The fact that triplet emission is observed in CdSe/CdS NPLs, but not in CdSe ones, may be explained by the strong spin-spin interaction of resident carriers and surface dangling bonds in the latter case, 48 which should speed up spin relaxation through flip-flop processes.
This mechanism is expected to be inhibited in core/shell structures, because X⁻ carriers stay far from the surface, as shown in Fig. 5d. On the other hand, the triplet trion is expected to have fine structure through electron-hole exchange interaction, 49 which may not fit the mono-exponential photoluminescence decay reported in Ref. 20. Further experiments are needed, e.g. on the polarisation of the different peaks under external fields, 41,50 to confirm the different spin of the emissive states in CdSe/CdS NPLs.
The observation of metastable triplet trion photoluminescence has been previously reported in epitaxial quantum wells 24,51 and dots, 50 and more recently in transition metal chalcogenide monolayers. 52 To our knowledge, however, its presence in colloidal nanostructures has not been confirmed.
Control of SU processes
Inasmuch as SU processes can be responsible for the line width broadening of NPLs, their suppression is desirable to improve color purity in optical applications. It has been suggested that this could be achieved by increasing quantum confinement, reducing either lateral dimensions or shell thickness (the latter would favor electrostatic confinement). 20 Both strategies have the drawback of introducing size dispersion in ensemble luminescence. From our theoretical analysis, we confirm that reducing Coulomb admixture would minimize SU processes, but this can be achieved by weakening Coulomb interactions instead of increasing quantum confinement. For example, reducing dielectric confinement or using thinner cores to enhance the quasi-type-II character should contribute to this goal. Obviously, this approach would have the drawback of reducing the band edge recombination rate as well.
Alternatively, since our study shows that impurities are ultimately responsible for SU processes, experimental routes to suppress them could be directed at controlling surface traps. Appropriate choice of surface ligands, 44 electrochemical potentials 53 and interface alloying 16,17 could contribute to this end.
Because we find surface charges on lateral sidewalls particularly suited to induce SU processes, the growth of core/crown heterostructures is expected to reduce their influence by keeping the outer rim away from the photogenerated carriers. This suggestion seems to agree with experimental observations by Kelestemur and co-workers, indicating that core/crown/shell CdSe/CdS NPLs have more symmetric emission behavior than core/shell ones at cryogenic temperatures. 54 This can be understood as a consequence of the suppression of SU processes in the low-energy tail of the emission band. It is also consistent with …

Altogether, our simulations show that SU satellites in NPLs can reach a sizable fraction (∼25%) of the band edge recombination peak. This is at least one order of magnitude larger than in epitaxial quantum wells. The SU peak is redshifted from the band edge peak by up to a few tens of meV, thus providing a source of line width broadening.
These results are in excellent agreement with recent experimental findings in CdSe NPLs 21 in terms of number of emission peaks, energy splitting and relative intensity, but only partially so with those of core/shell CdSe/CdS NPLs. 20 Experiments in the latter structure are however in line with an alternative interpretation involving simultaneous participation from trion singlet and metastable triplet states.
Strategies to narrow the line width of NPLs through suppression of SU processes should aim at controlling electrostatic impurities or Coulomb admixture.
Additional calculations
We present here additional calculations for further understanding of SU processes.
Convergence of CI calculations
Configuration Interaction (CI) calculations on the basis of independent-particle (or Hartree-Fock) orbitals provide an excellent description of repulsions in few- and many-fermion systems. 1,2 However, large basis sets are needed to describe strong attractions, 3,4 which are certainly present in colloidal NPLs 5 and are involved in a correct description of SU processes.

Figure S1 caption: X⁻ emission spectrum for Q_edge = 0.4 (see main text). Zero energy is set at the fundamental transition with ne = nh = 22. ne and nh are the number of single-electron and single-hole spin-orbitals, respectively, used to build the CI basis sets.
In Fig. S1 we compare the X⁻ emission spectra calculated for CdSe NPLs (same dimensions as in the main text) using different basis sizes. The basis is formed by all possible combinations of the first ne (nh) independent-particle spin-orbital states of electrons (holes).
With increasing basis dimensions, the band edge transition peak redshifts and gains intensity, which reveals an improved description of electron-hole correlation. The intensity of the SU peak and its redshift with respect to the band edge transition are, however, less sensitive to the basis dimensions. It follows from the figure that a quantitative assessment of the ratio of fundamental vs SU peak heights requires large basis sets. In the main text we use ne = nh = 22. By comparing with smaller values of ne/nh in the figure, it is clear that for this value (which involves very time-consuming computations) the ratio is reaching saturation. This validates the order of ratios provided in the main text. For the calculations in this Supporting Information, however, we may resort to ne = nh = 12, which overestimates the relative height of SU peaks, but suffices to provide a qualitative assessment.
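A convergence check of the kind described above can be automated. The following sketch uses an invented, saturating stand-in for the expensive CI calculation, only to illustrate the stopping criterion; it is not the actual workflow or data.

```python
import numpy as np

# Sketch of a basis-size convergence check (toy numbers, not the CI output):
# the SU/fundamental peak-height ratio is monitored as the single-particle
# basis grows, until successive values agree within a chosen tolerance.
def su_ratio(n):
    """Stand-in for the expensive CI calculation; mimics saturation with
    basis size using an invented functional form."""
    return 0.25 * (1.0 - np.exp(-(n - 6) / 5.0))

def first_converged(sizes, rel_tol=0.05):
    prev = None
    for n in sizes:
        r = su_ratio(n)
        if prev is not None and abs(r - prev) <= rel_tol * prev:
            return n, r
        prev = r
    return None

print(first_converged([8, 12, 16, 20, 22]))   # e.g. converged near ne = nh = 22
```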
Positive trion behaviour

Figure S2 caption: Normalized X⁺ emission spectrum for different charge intensities. The arrows point to SU satellites (dotted lines are guides to the eye). The highest SU peak (Q_edge = −0.3) is marked with a star. The origin of energies is set at the band edge recombination peak. The insets show amplified SU peaks for Q_edge = 0.5.
In the main text, we have mostly considered the case of negative trions. We show here that the same behavior holds for positive ones. To illustrate this point, we choose the case of the core-only NPL with an edge charge, equivalent to Fig. 1d of the main text. Figure S2 shows that the presence of SU peaks in the emission spectrum is again strongly dependent on the value of the surface charge. For Q_edge = 0, no SU peak is observed. For repulsive charges (Q_edge > 0), SU peaks are formed but are very small in magnitude. The highest SU peaks are formed for weakly bound donor charges (Q_edge < 0), which attract the holes of X⁺ (marked with a star in the figure). As in the X⁻ case, if the attractive charge further increases, it starts dissociating the trion. Consequently, SU peaks are quenched again. Notice, however, that the energy splittings for X⁺ (Fig. S2) are smaller than for X⁻ (Fig. 1d in the main text). This is expected from the heavier masses of holes. In the main text we present the representative cases of a surface charge centered on top of the NPL (Q_top) and a charge on the edge of a lateral sidewall (Q_edge).
Effect of charge impurity location
In Figure S3 we compare the emission spectra for additional charge locations. One can see that a charge located at a corner (red line in Fig. S3a) provides SU peaks similar to those of the edge charge (blue line in the figure), both in energy and intensity. We recall that these traps seem to be particularly likely according to recent studies on ligand desorption. 6,7 Off-centered charges on top and bottom surfaces are studied in Fig. S3b. They give rise to SU peaks of similar height to those of Q_edge, although they reach the optimal charge value sooner than Q_edge (Q_top−edge ∼ Q_top−corner ≈ 0.2 versus Q_edge = 0.4), because they lie closer to the center of the NPL, where photogenerated carriers tend to localize.
Effect of conduction band offset in CdSe/CdS NPLs
The value of the CdSe/CdS conduction band offset (CBO) has been a subject of debate in nanocrystal heterostructures. [8][9][10] Throughout the main text we used an upper-bound unstrained value of 0.48 eV, 8 which is partly reduced by compressive strain in the core. 10 Here we also explore the scenario of a lower-bound value, 9 to see the possible effect of enhancing electron delocalization over the CdS shell. Figure S4 compares the two cases.
Lowering the CBO gives rise to slightly weaker electron-electron repulsion (V_ee) and electron-hole attraction (V_eh); however, the differences are very small. One can then expect a similar role of SU processes as in the main text.
Effect of shell thickness in CdSe/CdS NPLs

In this section we compare qualitatively the response for the two shell thicknesses using a moderate basis set (ne = nh = 12), which permits addressing the experimental dimensions without the computational burden of the large basis set (for 12 ML shell thickness, the extended CI computation is beyond our current resources). If we focus on the charge location in both systems (Fig. S5a), one may expect similar behaviour. The main difference, as can be seen in Fig. S5b (left panel), occurs for the repulsive electron-electron interactions, which are slightly weaker for thick shells. This is a consequence of the larger electron delocalization, which translates into smaller |c_2| coefficients in the CI expansion (see main text) and hence a slightly smaller SU satellite, as observed in Fig. S5c.
Effect of inserting multiple impurities in CdSe/CdS
We consider here the possibility that two surface traps, instead of one, act as electrostatic impurities in CdSe/CdS NPLs. Since there is a general preference for forming defects at the heterostructure interfaces (because of lattice mismatch 10,12) and on lateral facets (where ligand desorption is more likely to happen 6), we choose the charges to be located as shown in Fig. S6a. The presence of two charges, combined with the weak in-plane confinement, easily dissociates the trion by driving one electron to each surface impurity. This can be seen in the charge densities of Fig. S6b. The number of visible SU peaks, however, remains one (see Fig. S6c). In the case of strong surface charges (Q = 1.0), the trion triplet
| 7,499.8 | 2020-07-30T00:00:00.000 | [
"Physics"
] |
Phonon driven charge dynamics in polycrystalline acetylsalicylic acid mapped by ultrafast x-ray diffraction
The coupled lattice and charge dynamics induced by phonon excitation in polycrystalline acetylsalicylic acid (aspirin) are mapped by femtosecond x-ray powder diffraction. The hybrid-mode character of the 0.9 ± 0.1 THz methyl rotation in the aspirin molecules is evident from collective charge relocations over distances of some 100 pm, much larger than the sub-picometer nuclear displacements. Oscillatory charge relocations around the methyl group generate a torque on the latter, thus coupling electronic and nuclear motions.
I. INTRODUCTION
The interplay of electronic and nuclear motions in molecular systems is at the heart of numerous processes in physics and chemistry. In the ultrafast time domain, coherent nuclear motions have been induced by broadband vibrational and/or vibronic excitations. A vibrational wavepacket represents a nonstationary coherent superposition of quantum states in a potential determined by the electronic structure of the molecule. In the most elementary case described by the Born-Oppenheimer picture, the wavepacket undergoes a periodic oscillation in a time-independent electronic potential, i.e., nuclear and electronic motions are decoupled with a negligible impact of the nuclear motions on the shape of the vibrational potential. The coherent motion is eventually damped by vibrational decoherence induced, e.g., by the coupling of the molecule to an external bath.
A different regime of electronic and nuclear motions exists in polar and/or ionic molecular crystals with strong internal electric fields between the molecular (sub)units. The internal fields represent a major component of the total field acting locally on the individual molecular groups. As a result, the subtle nuclear rearrangements connected with vibrational excitations induce a pronounced relocation of electronic charge, in order to minimize the electrostatic energy of the crystal. Femtosecond x-ray diffraction has been applied to make such behavior directly visible. [1][2][3] In the prototype material potassium dihydrogen phosphate (KH2PO4, KDP), coherent vibrational motions along a transverse-optical (TO) phonon coordinate induce a relocation of electronic charge within the PO4 groups and, to a lesser extent, between the K⁺ ion and the PO4 groups. 2,3 The length scale of charge relocation is on the order of 100 pm, i.e., a chemical bond length, while the nuclear elongations along the TO phonon coordinate are in the sub-picometer range. This hybrid character of the TO phonon is very similar to the behavior of low-frequency soft modes in crystalline ferroelectrics, which display a strong coupling to the electronic system and undergo a pronounced frequency down-shift upon the phase transition from a para- to a ferroelectric phase of the material. [4][5][6][7][8][9] A strong coupling between electronic charge and phonon degrees of freedom makes the vibrational frequencies and absorption strength susceptible to both the local electric field strength and electronic correlation effects. Recently, this basic nonequilibrium behavior has been elucidated in nonlinear terahertz (THz) experiments with polycrystalline acetylsalicylic acid (C9H8O4, aspirin). 10 Upon THz excitation of a methyl (CH3) rotational mode, which couples strongly to the π-electron system of the aspirin molecules, one observes a strong blue-shift of the rotational frequency from its equilibrium value of 1.1 THz to 1.7 THz. This behavior represents a manifestation of the dynamic breakup of the strong electron-phonon correlations and has been reproduced by theoretical calculations including dynamic local-field correlations. 10,11 While nonlinear THz spectroscopy maps the nonlinear response of the coupled system, it provides only indirect information on the electronic charge relocations connected with the vibrational excitation.
In this article, we present a study of phonon driven charge relocations in polycrystalline aspirin by femtosecond x-ray powder diffraction. Optically induced coherent motions along the methyl rotational coordinate induce strong changes in the π-electron system of the aspirin molecules. Electronic charge is shifted over interatomic distances on the order of 100 pm, a length scale orders of magnitude larger than the sub-picometer nuclear displacements upon CH3 rotation. The charge redistribution results in pronounced changes of the electronic dipole moment of the molecular units. The behavior observed here is a direct manifestation of a dynamic hybrid-mode response.
II. EXPERIMENTAL METHODS AND RESULTS
At room temperature, aspirin crystallizes in a monoclinic crystal structure (space group P2₁/c) with four formula units per unit cell [Fig. 1(a)]. [12][13][14] In the prevailing form I of the crystallites, individual molecules form centrosymmetric hydrogen-bonded cyclic dimers as shown in Fig. 1(b), which are stacked along the crystallographic b-axis. The linear absorption spectra of aspirin molecules diluted in liquid solvents and of aspirin crystals display a similar pattern of electronic absorption bands, pointing to a minor electronic coupling of aspirin molecules in the crystal and a localized character of the underlying electronic excitations. 15 The aspirin samples were prepared fresh every day from a finely ground (grain size ∼1 µm) commercially available starting material (Sigma Aldrich, purity of 99.0%). Tightly pressed pellets of a thickness of ∼40 µm were placed between two polycrystalline diamond windows (∼20 µm thickness) and fixed on a sample holder continuously rotating around an axis parallel to the x-ray beam, with an offset of ∼300 µm to mitigate potential sample damage due to the pump beam.
The ultrafast diffraction experiments are based on an optical pump/x-ray probe scheme where the sample is optically excited by 70 fs pulses with a center wavelength of 400 nm and a hard x-ray probe pulse is diffracted from the excited sample. 16 Both pump and x-ray probe pulses are derived from an amplified Ti:sapphire laser system delivering sub-50 fs pulses centered at 800 nm with an energy of 5 mJ and a repetition rate of 1 kHz. The optical pump pulses have an energy of 25 µJ and are focused to a spot size of ∼500 µm, providing a peak intensity I_p ≈ 2 × 10^11 W/cm² at the sample surface. The sample is electronically excited predominantly via 2-photon absorption of the pump pulses over the bandgap (E_g ≈ 4.3 eV). 15 An experimental analysis of the pump geometry of the sample shows that the fractions of pump light backscattered from and transmitted through the powder are both less than 10%. In other words, the powder layer practically absorbs all the incident pump photons. From the absorbed energy per volume, the incident pump photon flux, the molecular weight of aspirin, and the mass density of the powder sample, one estimates a fraction of 1% ± 0.5% of aspirin molecules in the irradiated sample volume which are promoted to the excited state. At this pump level, electronic excitation via nonlinear absorption processes of higher order plays a minor role. The major part (80%) of the 800 nm laser output is focused on a 20 µm thin Cu tape target to generate hard x-ray pulses with a photon energy of 8.04 keV (Cu Kα) and a duration of roughly 100 fs. 17 The emitted x-ray pulses are collected, monochromatized, and focused to an ∼100 µm spot size at the sample position by a Montel multilayer mirror (Incoatec) providing a flux of ∼5 × 10^6 photons/s. Further details of this table-top femtosecond hard x-ray source and the entire experimental setup have been described earlier. 9,16,18 The hard x-ray pulses probe the pump-induced structural dynamics in the photoexcited sample. The Cu Kα photons diffracted from the sample were recorded in transmission geometry by a large area detector (Pilatus Dectris 1M; pixel size 172 µm × 172 µm), which allows us to determine the intensity of multiple Debye-Scherrer rings at each delay time simultaneously. The powder diffraction pattern from an unexcited aspirin sample at room temperature is shown in Fig. 2(a). Integrating over all pixels with identical scattering angle 2θ yields 1D powder diffraction patterns as shown in Fig. 2(b), which allow for an assignment of 15 Bragg peaks to sets of lattice planes. 14 For each individual pump-probe delay, a total integration time of the x-ray detector of 140 s was chosen. Measurements were performed in random order over 14 days for ∼1500 different randomly generated delay times with an average 5 fs spacing in between, covering a total delay range of 8 ps. All-optical cross correlations of the pump pulses with 800 nm pulses traveling along the optical path of the x-ray pulses were measured repeatedly to ensure a proper stacking of data from different days. These procedures in combination were chosen to mitigate the influence of potential long term fluctuations of the laser system. Finally, we sorted all individual data points according to their delay time and then averaged neighboring data points within a 250 to 400 fs interval of delay times.
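The sorting and binning of the randomly ordered delay scans can be summarized by a short routine. The sketch below uses synthetic delays and signal values (the raw data are not reproduced here), and the 0.3 ps bin width is one value within the 250-400 fs range quoted above.

```python
import numpy as np

# Sketch of the sorting/binning step described above.  Delays and signal
# values are synthetic placeholders, not the measured aspirin data.
rng = np.random.default_rng(0)
delays_ps = np.sort(rng.uniform(-1.0, 7.0, size=1500))   # ~1500 random delays
signal = rng.normal(0.0, 1e-3, size=delays_ps.size)      # placeholder dI/I values

def bin_average(t, y, bin_width_ps=0.3):
    """Average neighbouring points falling into fixed-width delay bins and
    return bin centres, mean values and standard errors of the mean."""
    edges = np.arange(t.min(), t.max() + bin_width_ps, bin_width_ps)
    idx = np.digitize(t, edges)
    centres, means, sems = [], [], []
    for b in np.unique(idx):
        sel = idx == b
        if sel.sum() < 2:
            continue
        centres.append(t[sel].mean())
        means.append(y[sel].mean())
        sems.append(y[sel].std(ddof=1) / np.sqrt(sel.sum()))
    return np.array(centres), np.array(means), np.array(sems)

t_bin, y_bin, y_err = bin_average(delays_ps, signal, bin_width_ps=0.3)
print(len(t_bin), "binned points from", delays_ps.size, "raw delays")
```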
Upon optical excitation, the angular positions of all observed reflections remain unchanged within the experimental
accuracy, and no Bragg reflections forbidden by the symmetry of the equilibrium space group P2₁/c occur within our experimental sensitivity. The diffracted intensities display pronounced oscillatory changes ΔI_hkl(t)/I⁰_hkl = (I_hkl(t) − I⁰_hkl)/I⁰_hkl, which are plotted in Fig. 2(c) as a function of pump-probe delay [I_hkl(t), I⁰_hkl: intensity diffracted with and without optical excitation], with standard errors of the mean values from averaging the raw data in the respective bins. The typical period of these oscillations is ∼1 ps, corresponding to a frequency of ∼1 THz. The binning of raw data points ensures a sufficiently high signal-to-noise ratio for a faithful reconstruction of the transient electron density via the Maximum Entropy Method detailed below. Averaging the raw data with a reduced bin size, resulting in twice the number of averaged data points, nicely retains the oscillatory signal as indicated by the light gray points in Fig. 2(c). There are no significant additional high frequency components in the transients with a reduced bin size.
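For one reflection, the plotted observable reduces to a simple normalization of the binned counts. A minimal sketch with invented count numbers (not the measured data):

```python
import numpy as np

# Sketch of the observable plotted in Fig. 2(c) (toy numbers, one reflection):
# raw counts falling into one delay bin are averaged, referenced to the
# unpumped intensity I0_hkl, and reported with the standard error of the mean.
I0 = 1.00e5                                                  # unpumped counts
bin_counts = np.array([100180.0, 99950.0, 100240.0, 100090.0])  # shots in one bin

dI_over_I = (bin_counts.mean() - I0) / I0
sem = bin_counts.std(ddof=1) / np.sqrt(bin_counts.size) / I0
print(f"dI/I = {dI_over_I:.5f} +/- {sem:.5f}")
```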
III. RECONSTRUCTION OF TRANSIENT ELECTRON DENSITY MAPS AND CHARGE DYNAMICS
The time dependent intensity changes ΔI_hkl(t)/I⁰_hkl observed in the experiment are related to the transient x-ray structure factors F_hkl(t) according to ΔI_hkl(t)/I⁰_hkl = (|F_hkl(t)|² − |F⁰_hkl|²)/|F⁰_hkl|², where F⁰_hkl are the known structure factors of the unperturbed material. 14 The time dependent electron density ρ(r, t) averaged over all crystallites and its change relative to the unperturbed electron density ρ₀(r) of aspirin are extracted from the structure factors F_hkl(t) by employing the maximum entropy method as implemented in the BayMEM suite of programs. [19][20][21] The maximum entropy method maximizes the information entropy S, which is defined as S = −Σ_{v=1}^{N} ρ_v(t) log(ρ_v(t)/ρ_{0,v}). The summation runs over a discretized grid of N voxels while fulfilling a set of constraints for the supplied structure factors F_hkl(t). 19 The treatment assumes a preservation of the initial crystal symmetry, as supported by the absence of forbidden reflections in the transient diffraction patterns. As a result, the total electronic charge on the individual aspirin entities is constant. In Fig. 3, equilibrium and transient charge density maps are summarized for the plane containing the C6 rings and the COOH carboxy groups of the aspirin molecules, highlighted by green and blue circles in Fig. 3(a). Figure 4 displays analogous maps in the plane of the CH3CO2 acetoxy group, highlighted by a red circle in Fig. 4(a). Figures 3(b) and 4(b) show the equilibrium charge density ρ₀(r), while Figs. 3(c)-3(f) and 4(c)-4(f) display differential charge densities Δρ(r, t) = ρ(r, t) − ρ₀(r) for different pump-probe delays. The absolute values of Δρ(r, t) have an uncertainty of up to 1.7 e⁻/nm³. The differential charge densities reveal a pronounced modulation of charge density with time, close to the original positions of the lattice atoms, which are indicated by black circles. It is important to note that all major changes Δρ(r, t) are centered on the ground state atomic positions, without a charge transfer to previously unoccupied positions in space. This behavior demonstrates that the molecular arrangement of the ground state crystal structure is preserved and chemical processes are absent, in contrast to other polar materials such as paraelectric ammonium sulfate (NH4)2SO4. 22 In order to gain insight into the coupling between the electronic charge density oscillations and the rotation of the methyl group, we investigated the charge dynamics in a spherical shell around the methyl group in more detail. To this end, we integrated both the stationary and the differential charge density in a spherical shell which is centered at the carbon atom, with a radius of 150 pm and a Gaussian radial profile of 50 pm thickness. Figure 5 shows the results plotted in a spherical coordinate system characterized by the angles θ and φ. The spheres display filet indentations at the solid angles corresponding to the proton positions. The color code represents the stationary charge density [panel (a)] and differential charge density maps [panels (b) t = +0.24 ps and (c) t = +0.78 ps] as a function of θ and φ. Both the stationary electron density and its changes as a function of the pump-probe delay are distinctly asymmetric with respect to the proton positions. In particular, the transient excess charge opposing the protons on the shell is concentrated close to one of the protons, with a maximum at a distinctly different solid angle.
As a result, the electronic charge density oscillation exerts a net torque on the methyl group which is the basis for the strong coupling between the electronic charge density oscillations and the rotation of the methyl group in the phonon mode of aspirin.
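The shell analysis can be made concrete with a short numerical sketch: a Gaussian-weighted shell of 150 pm radius is projected onto (θ, φ) bins, as in Fig. 5. The density, grid and shell width (σ = 25 pm for a roughly 50 pm thick shell) below are toy values, not the reconstructed aspirin data.

```python
import numpy as np

# Sketch of the spherical-shell analysis described above (synthetic density
# on a toy grid; only the 150 pm radius follows the text).
grid = np.linspace(-300e-12, 300e-12, 61)                   # 10 pm voxels
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
rho = np.exp(-((X - 20e-12)**2 + Y**2 + Z**2) / (2 * (80e-12)**2))  # toy density

r = np.sqrt(X**2 + Y**2 + Z**2)
shell = np.exp(-(r - 150e-12)**2 / (2 * (25e-12)**2))       # Gaussian shell weight
theta = np.arccos(np.divide(Z, r, out=np.zeros_like(r), where=r > 0))
phi = np.arctan2(Y, X)

# Histogram the shell-weighted density over (theta, phi) bins, as in Fig. 5.
H, _, _ = np.histogram2d(theta.ravel(), phi.ravel(), bins=(18, 36),
                         weights=(rho * shell).ravel())
print("angular map shape:", H.shape, " max bin:", H.max().round(3))
# An asymmetric angular distribution of this kind is what produces the net
# torque on the methyl group discussed above.
```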
The Δρ(r, t) maps in Figs. 3 and 4 reveal an oscillatory relocation of charge density involving the electron system of the entire molecule, which is particularly pronounced within the plane containing the carboxy group and the benzene ring. To quantify this behavior, we integrated the transient charge density over the individual molecular groups. The results presented in Fig. 6 reveal pronounced oscillatory changes of local charge on the benzene ring, the acetoxy group, and the carboxy group as a function of delay time. The oscillation frequency is derived from a piece-wise fit of a sinusoidal function to the data points in the time-resolved transients (solid lines).
This procedure results in a total of nine momentary frequencies, each assigned to a data point at positive delay time that is the midpoint of an interval of seven neighboring delay times. The average oscillation frequency has a value of 0.9 ± 0.1 THz. The stated uncertainty describes the range of frequency values occurring during the decay of the oscillation. The charge changes on the benzene ring and the acetoxy group occur in phase, while the charge on the carboxy group changes with the opposite phase. This behavior is equivalent to an oscillatory intramolecular charge relocation between the carboxy group and the rest of the molecule, connected with changes of the electronic dipole moment along the axis connecting the carboxy group and the benzene ring. The amplitude of the charge oscillations averaged over all aspirin molecules is on the order of 1% of an elementary charge. Assuming charge transfer only on the excited molecules and taking into account the 1% fraction of excited molecules, the oscillation amplitude per excited molecule is on the order of one elementary charge e, corresponding to a transient change of the electric dipole moment on the order of 1 Debye along the axis connecting the ring structure and the carboxy group. The oscillations are severely damped on a time scale of several picoseconds. This phenomenon is mainly caused by inhomogeneous broadening of phonon frequencies in the polycrystalline sample, as has been discussed in the context of the nonlinear THz experiments on aspirin. 10,23
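The piece-wise extraction of momentary frequencies can be reproduced schematically as follows; the transient is synthetic and the seven-point window follows the description above, while all fit parameters are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the piece-wise sinusoidal fit described above (synthetic data).
def damped_charge(t):                       # synthetic stand-in for dQ(t)
    return 0.01 * np.exp(-t / 3.0) * np.sin(2 * np.pi * 0.9 * t)

t = np.arange(0.0, 6.0, 0.3)                # binned delay axis in ps
q = damped_charge(t) + np.random.default_rng(1).normal(0, 5e-4, t.size)

def sine(t, a, f, phi, c):
    return a * np.sin(2 * np.pi * f * t + phi) + c

freqs = []
for i in range(len(t) - 6):                 # sliding window of 7 points
    win = slice(i, i + 7)
    try:
        popt, _ = curve_fit(sine, t[win], q[win], p0=[0.01, 0.9, 0.0, 0.0])
        freqs.append(abs(popt[1]))
    except RuntimeError:
        continue                            # skip windows where the fit fails
print(f"momentary frequency: {np.mean(freqs):.2f} +/- {np.std(freqs):.2f} THz")
```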
IV. DISCUSSION
The time resolved x-ray diffraction data and the transient charge density maps derived from them reveal periodic motions of electronic charge with an oscillation frequency of 0.9 ± 0.1 THz. The oscillations are due to coherent nuclear motions which are generated upon electronic excitation of the aspirin crystallites by the pump pulse. The absence of a time delay in the onset of the oscillations points to an impulsive displacive driving mechanism. 22,24 In this process, the change of the molecular electron distribution in the excited state compared to the ground state modifies the potential of vibrational and rotational modes and induces coherent motions via the electronic deformation potential.
The observed oscillation frequency of 0.9 ± 0.1 THz is very close to the methyl rotation frequency in the electronic ground state. In addition to the methyl rotation, there are a number of other low-frequency modes, among them the intermolecular hydrogen bond modes of the aspirin dimer structure (Fig. 1), 10,25,26 which, however, occur at higher frequencies. The frequency of the methyl rotation is strongly red-shifted compared to that of a free methyl rotator, a consequence of its strong coupling to the electronic system. 10 We conclude that the charge oscillations are due to coherent methyl rotations in the crystallites. The transient electron density maps of Figs. 3 and 4 demonstrate charge relocations on a length scale of 100 pm, comparable to the interatomic distances in the molecular structure. Such distances are much larger than the sub-picometer displacements of vibrational and/or rotational elongations. The mismatch of length scales represents a hallmark of the impact of a hybrid mode on the electronic system. 2,9 The strong correlation of nuclear and electronic degrees of freedom and the presence of strong local field effects make the polarizable charge distribution highly sensitive to minute changes in atomic positions. The observed behavior bears strong similarity to the early qualitative picture of soft modes developed by Cochran, 4,5 here with quantitative insight into charge distributions and their dynamics.

FIG. 5 caption: Integrated charge densities in e⁻/nm³ in a spherical shell with a radius of 150 pm and a thickness of 50 pm around the central C atom of the methyl group, as a function of the polar angles θ and φ: (a) the ground state density distribution and (b), (c) two differential charge density distributions at selected delay times. The value of the (differential) charge density is given by the color coding. In all three cases, the viewing direction is chosen along the C-C bond and filet indentations are shown at the solid angles corresponding to the proton positions. The arrows schematically indicate the direction of the net torque on the methyl group.

FIG. 6 caption: Spatially integrated group charge changes ΔQ(t) of (a) the benzene ring, (b) the entire CH3CO2 acetoxy group, (c) solely the CH3 methyl group, and (d) the carboxy group, plotted as a function of pump-probe delay (black symbols). Colored solid lines: at the individual data points, the solid line has exactly the value of the corresponding piecewise fit. Between two data points, the solid lines are linear interpolations of the two overlapping piecewise fits of neighboring sets of data points.
To further validate this picture, we retrieved the transient positions of the carbon and oxygen atoms in the aspirin entities from the time dependent electron density ρ(r, t), in particular from the core electron density which follows the motion of the respective nucleus. This is done by fitting a three-dimensional Gaussian distribution to the high core electron density in the transient electron density maps. The sub-picometer displacements connected with the 0.9 ± 0.1 THz methyl rotations are complemented by motions of the carbon and oxygen atoms in the aspirin molecule with similar amplitudes (Fig. 7). As is evident from the data in Fig. 4, the hydrogen atoms of the CH3 groups are not discernible with the resolution provided by the experimental density maps and, thus, one cannot follow their motion upon excitation. Instead, the transient in Fig. 7(b) reflects the displacement Δr_CH3 of the entire CH3 group from its ground state position. An analysis of its momentary position reveals that it mainly oscillates along a line roughly perpendicular to the C(1)-C(2) bond and roughly parallel to the C(1)-H(1a) bond in the plane of Fig. 4 (schematically indicated by an arrow).
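The position retrieval amounts to a least-squares fit of a 3D Gaussian to the density around an atom. The sketch below does this on a synthetic density (units of pm; amplitudes, widths and the displacement are invented), not on the experimental maps.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the atomic-position retrieval: a 3D Gaussian is fitted to a
# synthetic core-electron density and its centre is read off as the
# transient atomic position.  All numbers are placeholders in pm.
ax = np.linspace(-100.0, 100.0, 21)                  # 10 pm voxels
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
true_centre = (3.0, -2.0, 1.0)                       # displaced atom, pm

def gauss3d(coords, x0, y0, z0, sigma, amp):
    x, y, z = coords
    return amp * np.exp(-((x - x0)**2 + (y - y0)**2 + (z - z0)**2)
                        / (2.0 * sigma**2))

rho = gauss3d((X, Y, Z), *true_centre, 30.0, 1.0)
rho += np.random.default_rng(2).normal(0.0, 0.01, rho.shape)   # noise

coords = (X.ravel(), Y.ravel(), Z.ravel())
popt, _ = curve_fit(gauss3d, coords, rho.ravel(), p0=[0, 0, 0, 25.0, 1.0])
print("fitted displacement (pm):", np.round(popt[:3], 2))
```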
The onset of the carbon displacements in Fig. 7(a) and of the methyl motion shown in Fig. 7(b) occurs in phase within the first 0.25 ps, i.e., there is no mutual delay in the nuclear response of different parts of the aspirin molecules. This observation is in line with a picture in which the relocation of electronic charge follows the methyl displacement adiabatically, leading to a synchronous change of the vibrational potential along different atomic coordinates. As the sub-50 fs time scale of carbon motions is much shorter than the methyl rotation period, the different nuclei adjust to their momentary positions well within the time resolution of the present experiment and the motions appear to be in phase. It should be noted that the maximum change of the total charge on the methyl group occurs only in the second half cycle of the methyl oscillation [Fig. 6(c)], suggesting that the contribution of the methyl group to the total periodically shifted charge is limited.
A transient position change of the carboxy group affects the hydrogen bond length in the aspirin dimer. To assess this potential geometry change, we plot the transient change of the OH···O distance Δd_O−O in Fig. 7(c). The absolute displacements are similar to those of the other atoms and of the methyl group, i.e., the elongation by several hundreds of femtometers is small compared to the length of the hydrogen bonds of 262 pm. Thus, there are minor changes of hydrogen bond strength upon excitation of the methyl rotation.
In conclusion, the hybrid-mode character of the 1-THz methyl rotation in polycrystalline aspirin is manifested in pronounced relocations of electronic charge on a length scale of interatomic distances or covalent chemical bonds. The present study adds in-depth structural information to recent work in which the nonlinear vibrational response in the electronic ground state of aspirin has been studied by ultrafast two-dimensional THz spectroscopy in conjunction with high level theoretical calculations. 10 A combination of THz excitation with x-ray diffraction probing holds strong potential to unravel the hybrid-mode-induced charge dynamics in the ground state of aspirin and soft-mode dynamics in ferroelectric materials.
SUPPLEMENTARY MATERIAL
See supplementary material for an animation that illustrates the phonon driven oscillatory charge relocation in an acetylsalicylic acid molecule.
| 5,226.2 | 2019-01-01T00:00:00.000 | [
"Physics",
"Chemistry"
] |
T Cell Epitope Prediction and Its Application to Immunotherapy
T cells play a crucial role in controlling and driving the immune response with their ability to discriminate between peptides derived from healthy proteins and those derived from pathogenic ones. In this review, we focus on the currently available computational tools for epitope prediction, with a particular focus on tools aimed at identifying neoepitopes, i.e. cancer-specific peptides, and on their potential use in immunotherapy for cancer treatment. This review will cover how these tools work, what kind of data they use, as well as the pros and cons of their respective applications.
INTRODUCTION
T cells recognize and survey peptides (epitopes) presented by major histocompatibility complex (MHC) molecules on the surface of nucleated cells. To be able to perform this task, T cells must be able to differentiate between native "self" peptides versus peptides deriving from pathogens, infections or genomic mutations. In order to effectively mount and initiate an immune response, T cells must undergo activation. The main requirement of T cell activation is the molecular recognition between the T cell receptor (TCR) expressed on the T cell surface and peptide-MHC complexes (pMHC) presented on the surface of other cells. This precise recognition process is of paramount importance for a well-functioning immune system, and is shaped by a mechanism named central tolerance. In order to ensure that T cells do not react against ubiquitous peptides found in an individual, T cells undergo the process of negative selection. Early in their development, T cells are presented with a plethora of self-peptides, where any T cell that recognizes self-peptides is eliminated, leaving only T cells with little or no specificity for self. Cases in which this mechanism fails and T cells recognize self-epitopes are typically associated with harmful effects on the organism and might result in autoimmune disorders.
As mentioned earlier, T cells recognize epitopes only when they are presented by MHC molecules. Early in the thymic development of T cells, they undergo the process of positive selection, ensuring that they bind to host MHC molecules. There exist two classes of MHC molecules: class I, expressed on the surfaces of all nucleated cells, and class II, found on the surfaces of specialized antigen-presenting cells (APCs). As there are two classes of MHC molecules, two types of T cells are specially equipped for binding to MHC I and MHC II: the CD8+ and CD4+ T cells, respectively. The general focus of this review will be on cytotoxic CD8+ T cell binding to MHC I presented epitopes.
The immune system in general is very good at identifying "foreign" peptides stemming from bacterial or viral infections. On the other hand, as initially proposed by Burnet and Thomas through the idea of immunosurveillance (1,2), the same process can also protect our organism from cancer by recognizing cancer-specific peptides (neoepitopes) generated by somatic mutations or genomic aberrations (Figure 1). The ability of the immune system to target cancer cells has been exploited by a novel class of therapies, such as adoptive T cell therapy and cancer vaccines, named immunotherapies. These approaches, by exploiting the high selectivity of the immune system, have the advantage of being more specific and less invasive than traditional cancer therapies, and are potentially effective even at later stages by providing immunological memory.
Broadly, immunotherapy can be divided into two categories: "active" and "passive". The "active" category works to stimulate T cells of the individual's immune system into attacking tumor cells, i.e. effectively training the immune system in vivo. The "passive" category focuses on in vitro training and subsequent injection of immune agents that will help battle the disease in vivo (3). Passive immunotherapy includes therapies such as adoptive cell therapy, cytokine injection, monoclonal antibodies and lymphocytes (4,5). Active immunotherapies encompass therapies such as non-specific immunomodulation and vaccination (6,7).
Computational tools for epitope prediction have been recognized as being crucial for successful development of various cancer immunotherapies (8). This review will therefore give an overview of both general and cancer specific epitope prediction tools and discuss the pros and cons of the different tools and future perspectives in the field.
EPITOPE PREDICTION METHODS
As mentioned before, a peptide needs to be presented by an MHC I molecule for it to be able to elicit effector T cell responses. Contrary to MHC II molecules, which can bind peptides that are longer and more variable, MHC I binding is restricted to peptides typically 8-14 amino acids long, and some of the residues in the peptide, denoted anchor residues, are important for peptide-MHC binding (9) (Figure 2). In most human alleles the anchors are the second and the last residues in the peptide (10), but this depends on the allele and species. The binding of peptides to MHC molecules is therefore a very selective step, which has been a major focus in many epitope prediction models. However, most peptides presented by MHC molecules will not elicit an immune response, as they do not evoke specific TCR recognition by the T cell. In order to shed light on this interaction, computational models are being constructed with the goal of predicting T cell recognition of the presented peptide and its connection to an overall immune response. Epitope prediction can thus currently be divided into two main focus areas. The first addresses the presentation of peptides by MHC molecules. Extensive reviews on this subject have been published recently, and we single out the in-depth work by Peters et al. (11). In this review, we mainly focus on the second part of the interaction: predicting T cell recognition of pMHC complexes.
FIGURE 1 | Graphic representation showing genomic aberrations which can lead to the occurrence of cancer-specific peptides (neoepitopes). The left panel shows gene fusions, i.e. the rearrangement of two genes leading to the encoding and translation of a potentially novel immunogenic peptide. The right upper panel shows single nucleotide variations (SNV) and the right lower panel shows insertions and deletions (indels), which may cause the creation of immunogenic cancer-specific peptides. For further detail see the main text.
One of the first attempts at defining the immunogenic potential of peptides was based on their local and global physico-chemical characteristics, regardless of the specific T cell interaction. One such tool is POPI (12), a support vector machine (SVM) based method. SVMs are machine learning tools that can identify complex non-linear relationships between the input data and the predicted variable. In this case, a feature set of physico-chemical properties derived from MHC I binding peptides is used to predict the peptide's immunogenicity. POPI uses averaged values of the physico-chemical properties independent of the amino acid positions in the peptides, and is therefore unable to take local information into consideration in its predictions.
Another model named POPISK (13), by the same group, tries to improve on this by utilizing an SVM in conjunction with a weighted degree string kernel. The model is seemingly only capable of predicting immunogenicity for HLA-A2-binding peptides, for which it reached an overall accuracy (ACC) of 0.68 and an area under the curve (AUC) of 0.74. The ACC and AUC are calculations based on a confusion matrix, which in different ways essentially estimate how often an algorithm predicts correctly. In both cases, a perfect prediction would have both ACC and AUC equal to 1, with lower values for worse predictions. A more exhaustive introduction to accuracy metrics for prediction tools can be found in Peters et al. (11). It should be mentioned that the dataset was not pre-processed to remove or reduce redundancy, i.e. very similar peptides might be present. This has been observed to have a negative impact on a method's ability to generalize, that is, the ability of an algorithm to achieve good results on data that is different from the data used for training. A typical strategy to deal with this issue is to perform some form of homology reduction to reduce redundancy. In the Discussion we will return to the importance of such procedures when assessing the actual accuracy of prediction tools. Furthermore, it should be noted that neither POPI nor POPISK is available for general use anymore.
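As a concrete illustration of these two metrics, the short sketch below computes ACC and AUC with scikit-learn; the labels and scores are invented for illustration and are not taken from any of the cited studies.

```python
# Illustrative computation of ACC and AUC for a binary immunogenicity task.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# 1 = immunogenic, 0 = non-immunogenic (hypothetical ground truth)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# Continuous prediction scores from a hypothetical model
y_score = np.array([0.9, 0.3, 0.6, 0.8, 0.4, 0.2, 0.45, 0.55])

# ACC needs a hard decision, here obtained by thresholding at 0.5
acc = accuracy_score(y_true, (y_score >= 0.5).astype(int))
# AUC is threshold-free: the probability that a positive is ranked above a negative
auc = roc_auc_score(y_true, y_score)
print(f"ACC = {acc:.2f}, AUC = {auc:.2f}")
```

Note that ACC depends on the chosen decision threshold, while AUC summarizes the ranking over all thresholds, which is one reason the two metrics can disagree for the same model.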
Calis et al. created their immunogenicity model (14) based on experimental indications. The authors discovered that T cells show a preference for binding peptides containing aromatic and large amino acids. They also showed that positions 4-6 were important with regard to immunogenicity. Based on this information, a scoring model was created which scores peptides based on the ratio of each amino acid between a non-immunogenic and an immunogenic dataset. Furthermore, it weights the amino acids based on their position in the ligand. The authors estimated the accuracy of the model on new MHC I binding peptides and obtained an AUC of about 0.65; the model is thus only moderately predictive. It should be noted that, whereas models such as POPISK are only capable of predicting TCR propensity for HLA-A*02:01, the Calis et al. immunogenicity model can make predictions for any MHC I molecule.
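To make the idea of such a position-weighted scoring model more tangible, the hedged sketch below sums per-residue enrichment values multiplied by position weights over a 9-mer; the enrichment values and weights are placeholders invented for illustration, not the published parameters of the Calis et al. model.

```python
# Toy position-weighted immunogenicity score for 9-mer peptides.
# All numerical values below are illustrative placeholders.

# Hypothetical enrichment of each residue in immunogenic vs. non-immunogenic ligands
enrichment = {aa: 0.0 for aa in "ACDEFGHIKLMNPQRSTVWY"}
enrichment.update({"W": 0.30, "F": 0.25, "I": 0.15, "L": 0.10, "A": -0.10})

# Hypothetical weights emphasising positions 4-6 of a 9-mer (0-based indices 3-5)
position_weight = [0.1, 0.1, 0.1, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1]

def immunogenicity_score(peptide: str) -> float:
    """Sum of position-weighted residue enrichments for a 9-mer peptide."""
    assert len(peptide) == len(position_weight)
    return sum(position_weight[i] * enrichment[aa] for i, aa in enumerate(peptide.upper()))

print(immunogenicity_score("SIINFEKLW"))  # arbitrary example 9-mer
```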
PAAQD (15) is a model which focuses on predicting T cell reactivity. It works by encoding nine-mer peptides, which are processed by a random forest algorithm in order to predict the immunogenicity of a peptide binding to MHC I. The peptides are numerically encoded by combining information regarding quantum topological molecular similarity (QTMS) descriptors and amino acid pairwise contact potentials (AAPPs). The article reports an ACC of 0.72 and an AUC of 0.75 for immunogenicity prediction. It obtained a higher AUC and ACC than POPISK and a higher AUC than the immunogenicity model by Calis et al.; however, like POPISK, no homology reduction was done to reduce redundancy. Furthermore, the model focuses on HLA-A2 and will have limited success in predicting immunogenic peptides for other HLA molecules.
Jørgensen and Rasmussen, who developed NetMHCstab (16) and NetMHCstabpan (17) respectively, theorized that instead of focusing entirely on the HLA binding affinity, one should also take pMHC stability into account to predict immunogenic MHC I ligands. They based this hypothesis on the assumption that a more stable presentation of an epitope bound to an MHC will increase the likelihood of a T cell recognizing the epitope. However, as the authors have also indicated in the papers themselves, stability alone did not give as good results as combining a stability predictor with a pMHC I binding predictor.
Experimental investigation of peptide presentation and binding by Schmidt et al. (18) showed poor correlation with predictions for the same peptides by NetMHCstab and NetMHCstabpan in combination with a binding affinity predictor. These models were outperformed by another epitope prediction model: NetTepi (19). This model has been built on top of previous efforts and combines peptide-MHC stability using NetMHCstab, T cell propensity predictions using the immunogenicity model by Calis et al., and peptide-MHC binding affinity using NetMHCcons (20). The model has been stated to be capable of predicting T cell epitopes for multiple HLA molecules with a sensitivity of 90% and a false positive rate of 1.5%.
One of the newer models for predicting which epitopes will be recognized by T cells is NetTCR (21). NetTCR implements a convolutional neural network (CNN) model to predict TCR recognition of a peptide. CNNs are a type of neural network which are very popular for different tasks (e.g. image recognition) and capable of identifying local patterns in the input data. The model takes as input an HLA-A*02:01-binding MHC I peptide and the CDR3 protein sequence of a T cell receptor. The model obtained a somewhat high AUC of 0.727. This AUC is lower than the AUC for POPISK (0.74) and PAAQD (0.75); however, it should be noted that, unlike POPISK and PAAQD, NetTCR performed homology reduction to reduce redundancy in the data.
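The hedged sketch below illustrates, in PyTorch, what a small CNN over a one-hot encoded peptide and CDR3 pair can look like; the layer sizes, padding lengths and example sequences are arbitrary choices for illustration and do not reproduce the published NetTCR architecture.

```python
# Minimal CNN scoring a (peptide, CDR3) pair; untrained, for illustration only.
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_IDX = {a: i for i, a in enumerate(AA)}

def one_hot(seq: str, max_len: int) -> torch.Tensor:
    """One-hot encode and zero-pad a sequence to shape (20, max_len)."""
    x = torch.zeros(len(AA), max_len)
    for pos, aa in enumerate(seq[:max_len]):
        x[AA_IDX[aa], pos] = 1.0
    return x

class PairCNN(nn.Module):
    def __init__(self, n_filters: int = 16, kernel: int = 3):
        super().__init__()
        self.pep_conv = nn.Conv1d(20, n_filters, kernel, padding=1)
        self.cdr_conv = nn.Conv1d(20, n_filters, kernel, padding=1)
        self.out = nn.Linear(2 * n_filters, 1)

    def forward(self, pep, cdr):
        # Convolutions detect local sequence patterns; global max pooling keeps the strongest
        p = torch.relu(self.pep_conv(pep)).max(dim=-1).values
        c = torch.relu(self.cdr_conv(cdr)).max(dim=-1).values
        return torch.sigmoid(self.out(torch.cat([p, c], dim=-1)))

model = PairCNN()
pep = one_hot("GILGFVFTL", 12).unsqueeze(0)      # example 9-mer peptide
cdr = one_hot("CASSIRSSYEQYF", 20).unsqueeze(0)  # example CDR3 sequence
print(model(pep, cdr))  # probability-like score from the untrained network
```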
A major bottleneck in improving the accuracy of models is in the limited amount of available training data. However, several databases collecting experimental immunogenicity data are now available, with one of the first to pioneer this area being SYFPEITHI from Rammensee et al. in 1999 (22). Newer databases have since been created such as IEDB (23), VDJdb (24), McPAS-TCR (25), ATLAS (26) and STCRDab (27). The steadily increasing amount of experimental data will support the generation of models with greater prediction power.
STRUCTURAL EPITOPE PREDICTION
The energetic balance of the TCR-pMHC interaction is one of the main drivers dictating the initiation of an immune response. As evident from structural (28) and mutagenesis studies (29), this balance is very delicate. All circulating T cells have undergone the so-called positive selection process, meaning that they must bind with low affinity to MHC molecules, regardless of the specific epitope. Additionally, TCR interaction is highly cross-reactive, meaning that a single TCR will potentially be able to bind to thousands of peptides. This poses a serious hurdle to developing computational tools that predict immunogenicity based on structural calculations. In recent years, it has been shown that, when using fine-grained molecular dynamics (MD) simulations, one can to some extent predict TCR-pMHC interactions (30). Unfortunately, this approach is neither very precise nor feasible. For such calculations, high quality structures of the interacting molecules are needed, and the currently available number of solved structures for TCRs is very limited, less than three hundred at the time of writing. In contrast, the number of different TCRs that circulate at any time in humans is 10^6 to 10^8 (31), and the theoretical number of different TCRs is at least 4 x 10^11 (32). This stark difference greatly reduces the usefulness of such methods to a tiny minority of the available cases. Even when solved structures are available, MD simulations are very demanding in terms of computing time. The dynamics of the TCR-pMHC interaction, especially regarding their dissociation rate, have time scales that are currently at the very limit of what one can achieve with fine-grained MD simulations.
Some works have focused on solving these two problems: the lack of structural information and the need for more efficient structure-based algorithms. It is now possible to model TCRs, pMHCs, and their complexes to a very good accuracy. Without delving into too much detail, most currently available methods (33)(34)(35) can model pMHC complexes to a very good accuracy, often less than 1 Å Root Mean Square Deviation (RMSD) from the native structure, and almost as good as the experimentally resolved structures. TCRs can also be modeled with good accuracy (in general less than 2 Å RMSD), with some minor exceptions for the CDR3 regions of both TCR chains. The real weak point of all modeling tools is in predicting the correct mutual orientation of the TCR with respect to the pMHC, for which only a decent accuracy can be achieved: approximately 50% of the molecular contacts between TCRs and pMHC are recovered in the model. The current accuracy of the modeling tools for TCR-pMHC complexes, together with the computational cost of running detailed atomistic simulations, underlines the need for more coarse-grained models that can ease both of the aforementioned problems. In recent years, Lanzarotti and co-workers (36, 37) used TCR-pMHC models to refine existing computational force fields [Rosetta (38) and FoldX (39)], and combined such refined energy calculations in a simple statistical framework to improve the prediction of existing TCR-pMHC complexes. The authors show that, even with such a simple approach, it is possible to exploit structural models to identify, among a pool of TCRs and pMHCs, the actual interacting partners.
The same results have recently been confirmed using a similar approach (40). The authors show that, by investigating the energy and the structural variability in TCR-pMHC models, it is possible to improve the prediction of TCR-pMHC pairs. At the current stage, structure-based methods can greatly reduce the number of false positive predictions obtained by sequence-only methods, at the cost of reduced sensitivity.
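As a toy illustration of the kind of simple statistical framework described above, the sketch below combines a few model-derived energy terms into a single score separating true TCR-pMHC pairs from decoys with logistic regression; the feature values are invented, and a real application would use terms computed with force fields such as Rosetta or FoldX on modeled complexes.

```python
# Logistic regression over hypothetical structure-derived features of TCR-pMHC models.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: interface energy, electrostatic term, structural variability (arbitrary units)
X = np.array([[-12.3, -4.1, 0.8], [-3.2, -0.5, 2.9], [-10.8, -3.3, 1.1],
              [-2.1, -1.0, 3.5], [-11.5, -2.8, 0.9], [-4.0, -0.2, 2.4]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = true interacting pair, 0 = decoy

clf = LogisticRegression().fit(X, y)
# Score for a new candidate TCR-pMHC pair (probability of being a true pair)
print(clf.predict_proba([[-9.5, -2.0, 1.3]])[0, 1])
```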
NEOANTIGEN PREDICTION
Genome aberrations are a typical feature of many cancer types (41). On the one hand, such aberrations are linked to cancer occurrence and growth, e.g. by disrupting normal cell cycle and apoptosis control. On the other hand, they can be exploited by the immune system to recognize and eliminate cancer cells. As mentioned previously, neoepitopes have been a major target of immunotherapy approaches such as adoptive T cell therapy or cancer vaccination. Several computational tools have been developed to assist and improve immunotherapy. The main rationale of these tools is to first identify aberrations in the cancer genome, and then, to a different extent and with individual approaches, to predict the ones that are more likely to trigger an effective immune response. Besides genomic aberrations, events such as post-translational modifications (PTMs) (42) and peptides derived from non-coding regions (43) can also cause neoepitopes to arise. However, due to the limited availability of data and limited knowledge of their biological basis, there are currently only very few computational tools for their analysis and prediction (44). Broadly speaking, the available tools can be categorized by the type of input data they accept, by the type of variants they can call, and by the strategy used to filter or prioritize the most immunogenic variants. Regarding the first point, neoepitopes can arise due to events such as single nucleotide variations (SNV), insertions and deletions (indels), intron retention, and chromosomal aberrations (45-48) (Table 1). Another difference between the tools is the types of data that these models rely on. In most cases the tools use whole genome sequencing (WGS), whole exome sequencing (WES), transcriptome sequencing (RNA-seq), peptide sequencing, or a combination of those. Finally, in order to filter and prioritize neoepitopes, many tools incorporate predictions from NetMHC (68) and NetMHCpan (69), alongside other tools for predicting MHC binding. In the following, we briefly present the available tools based on the characteristics that we have just discussed.
Single Data-Based Models
Both RNA-seq and DNA-seq data can be exploited to identify variants in the cancer genome, and several tools make use of these data to predict neoantigens. It is important to notice that these two experimental methods provide complementary information. DNA-seq data is in general more sensitive, i.e. it can identify more variants. RNA-seq experiments can be used to generate expression levels at the gene or at the transcript level, thus helping to prioritize variants that are present in highly abundant genes over those that have low or no expression. It should be noted that the transcript level is often recommended, since this can further give information regarding events important for neoepitope prediction, such as isoform selection and alternative splicing (70)(71)(72). Peptide sequencing can also be used for neoantigen prediction. This holds information regarding whether a gene is actually expressed or not at the protein level. This is very important information; identified variants at the DNA or RNA level are not always expressed at the protein level. The reader should take this into account when deciding which tools they want to use. Epi-Seq (49) is a tool which only uses tumor RNA-seq data. Epi-Seq works as a wrapper tool, i.e. it combines the output of other tools to perform an integrated prediction. It only supports SNV variant calling and neoantigen prediction from those calls. The Epi-Seq pipeline is very useful when only RNA-seq data is available. However, since the pipeline only focuses on SNV variants, other potentially important variant types are not covered.
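A core step shared by SNV-based pipelines of this kind is to apply a missense variant to the protein sequence and enumerate every short peptide window containing the mutated residue, which can then be passed to an MHC binding predictor. The sketch below is a minimal, hedged version of that step; the sequence, position and window lengths are invented for illustration and are not taken from any specific tool.

```python
# Enumerate (wild-type, mutant) peptide windows around a missense variant.
def mutant_peptides(protein: str, pos: int, alt_aa: str, lengths=(8, 9, 10, 11)):
    """Yield (wild_type, mutant) peptide pairs covering a mutated position.

    pos is 0-based; alt_aa is the amino acid introduced by the SNV.
    """
    mutant = protein[:pos] + alt_aa + protein[pos + 1:]
    for k in lengths:
        for start in range(max(0, pos - k + 1), min(pos, len(protein) - k) + 1):
            yield protein[start:start + k], mutant[start:start + k]

# Toy example: mutate position 10 of a made-up protein fragment to tryptophan
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
for wt, mt in mutant_peptides(seq, 10, "W", lengths=(9,)):
    print(wt, "->", mt)
```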
ScanNeo (63) is a tool capable of predicting neoepitopes from small to large-sized indels. ScanNeo is a wrapper tool, which takes as input RNA-seq data. The three major steps in its pipeline are i) indel discovery, ii) annotation and filtering, and iii) neoantigen prediction. ScanNeo uses NetMHC in its pipeline. Besides NetMHC, the tool also employs NetMHCpan to predict peptides that bind to HLA class I with high affinity.
NeoFuse (64) is a computational pipeline predicting neoantigens from gene fusions. It is a wrapper tool which uses raw RNA-seq data from patient tumors as input to perform HLA class I typing, predict fusion peptides and quantify gene expression. MHCflurry (73) is used to predict pMHC binding, and the gene expression levels are utilized to filter out candidate fusion neoantigens. Like Epi-Seq, this is convenient when only tumor RNA-seq data is available.
DeepHLAPan (67) is a recurrent neural network-based approach which takes both peptide-HLA binding and potential peptide-HLA immunogenicity into account. The tool predicts neoepitopes utilizing user-provided HLA class I typing and peptides. The tool further filters the candidate neoantigens based on a score generated by an immunogenicity model trained on immunogenicity data from IEDB.
Data Integration-Based Models
Next generation sequencing (NGS) has made it easier to sequence the DNA and RNA of a patient in parallel. By integrating the use of both DNA and RNA data, the researcher can call somatic mutations from the DNA and quantify gene and transcript expression from the RNA data, which can help in identifying which variants are more likely to be expressed. Also in this case, most of the computational tools are in fact wrappers of multiple different methods which are integrated in multi-step workflows to perform the neoepitope prediction. Besides integrating DNA and RNA data, it is also possible to predict neoepitopes from peptide and RNA sequencing data. The peptide data tells us which genes are actually expressed at the protein level, and the RNA data helps with identifying which of the peptides will be presented by the HLA alleles, since expression of messenger RNA is strongly correlated with HLA peptide presentation (74). In general, integrating data can often help in generating more accurate predictions, as many of the tools mentioned in this section have also shown in their studies. When choosing tools, the reader should keep in mind the somatic variations they want to account for and what kind of data they possess. pVACseq (53) is a neoantigen prediction tool which can work with either WES or WGS data together with RNA data. This tool can predict neoantigens from small indels and SNVs. pVACseq utilizes HLAminer (75) to infer the patient's HLA class I typing and NetMHC to predict HLA class I restricted epitopes. The tool prioritizes neoepitopes based on sequencing depth and the fraction of reads containing the variant allele.
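To illustrate that last prioritization step, the hedged sketch below ranks a few hypothetical candidates by sequencing depth and variant allele fraction; the peptides, read counts and ranking key are invented and do not reproduce pVACseq's actual scoring.

```python
# Toy depth/variant-allele-fraction based prioritisation of candidate neoepitopes.
candidates = [
    {"peptide": "SLYNTVATL", "depth": 120, "alt_reads": 54},
    {"peptide": "GILGFVFTW", "depth": 15,  "alt_reads": 3},
    {"peptide": "KVAELVHFL", "depth": 80,  "alt_reads": 40},
]

for c in candidates:
    c["vaf"] = c["alt_reads"] / c["depth"]  # fraction of reads supporting the variant

# Rank well-covered, highly clonal variants first
ranked = sorted(candidates, key=lambda c: (c["depth"], c["vaf"]), reverse=True)
for c in ranked:
    print(c["peptide"], c["depth"], round(c["vaf"], 2))
```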
INTEGRATE-neo (65) is another tool which also uses NetMHC in its pipeline. This tool is based on INTEGRATE (76), which uses DNA sequencing data to predict peptides generated by gene fusion events; it thereafter uses HLAminer to perform in silico HLA typing, and lastly uses NetMHC to predict neoantigens based on the gene fusions. Whereas other tools can work with DNA data alone, optionally also integrating RNA data into their pipelines, INTEGRATE-neo requires the use of both DNA and RNA. A tool suite named pVACtools includes pVACseq and INTEGRATE-neo, among other tools, and not only accounts for SNVs and small indels but also includes support for structural variants.
MuPeXI (54), like pVACseq, requires the user to provide HLA types, somatic variants and optionally gene expression estimates. The tool predicts neoantigens from SNVs and indels. It can use either WES or WGS data, optionally together with RNA data, and has similar features to pVACseq. However, unlike pVACseq, MuPeXI also offers (i) a priority score to rank peptides, (ii) a comprehensive search for self-similar peptides, and (iii) availability as a webserver in addition to being a downloadable command-line tool. Furthermore, this model incorporates the use of NetMHCpan (69) in its pipeline instead of NetMHC.
Epidisco (55) takes as input wild type DNA, tumor DNA and tumor RNA sequencing data. The tool maps the normal and tumor DNA samples to the human GRCh37 reference genome. Epidisco, like many of the other tools mentioned, works as a wrapper around other existing tools, and it also uses NetMHCpan in its pipeline. The tool supports SNV- and indel-based neoantigen prediction. Epidisco focuses on vaccine peptide selection and generates a ranked list of peptide candidates.
TIminer (50), like many of the other tools, requires as input a pre-existing set of variants derived from DNA. The tool also incorporates NetMHCpan in its pipeline, and, unlike other tools, it is able to process raw RNA-seq data, which may provide additional information relevant for neoantigen prediction. This tool, however, only supports neoantigen prediction from SNVs.
OpenVax (56) is another pipeline which integrates the use of NetMHCpan; however, it is also possible to choose other MHC binding peptide predictors. The OpenVax pipeline, unlike many of the other tools, takes as input raw DNA and RNA sequencing files. It also includes somatic variant calling tools capable of calling SNVs and indel variants. It has a ranking function similar to MuPeXI, but with fewer features, namely MHC class I affinity scores and RNA-seq read count based variant expression.
NeoEpiScope (57) is another tool which can use NetMHCpan in its pipeline. The tool generally uses MHCflurry or MHCnuggets; however, NetMHCpan can also be used if installed separately. Like many of the other tools, NeoEpiScope requires as input a set of somatic variants and supports SNV- and indel-based neoantigen prediction. The main focus of this tool is its handling of phased variants. To use the phasing function, the user must submit patient haplotypes.
CloudNeo (58) is a tool developed for cloud computing, created to eliminate the need for local infrastructure investment in computation, data storage and transfer, while also providing scalable computational capabilities for neoantigen identification. CloudNeo is a wrapper like many of the other tools which also utilizes NetMHCpan in its pipeline. CloudNeo supports SNVs and indels for neoantigen prediction. Although CloudNeo uses RNA data in its pipeline, it seemingly only utilizes the RNA data for HLA typing, however, DNA data can also be used for this purpose.
Neopepsee (51) is a tool which takes as input a list of somatic mutations and raw RNA-seq data. The tool focuses on non-synonymous somatic mutations and works as a wrapper tool, using tools such as NetMHCpan to predict MHC binding affinity. For peptides with the highest binding affinity, immunogenicity features are then calculated and fed into a locally weighted naïve Bayes classifier. The idea behind Neopepsee is to use a classifier to decrease the number of false positives that using binding affinity alone would produce. pTuneos (59) predicts and prioritizes candidate neoantigens from SNVs and indels. The tool is a wrapper tool, which takes as input raw WGS/WES tumor-normal matched sequencing data and optionally also tumor RNA-seq. The tool utilizes HLA class I typing and NetMHCpan to predict the binding affinity of normal and mutant peptides, which is then run through a random forest model to predict a T cell recognition probability. Finally, a scoring scheme is used to evaluate whether a candidate neoepitope that can be recognized by a T cell will be naturally processed and presented. This can be used to prioritize the peptides based on in vivo immunogenicity.
The package antigen.garnish (60) is a wrapper tool in R, utilizing NetMHCpan among others for peptide-MHC binding in its pipeline. It predicts neoantigens from SNVs and indels. Besides MHC binding, it also takes into account hydrophobicity, the comparison of MHC binding affinity between the mutated peptide and its non-mutated counterpart, and dissimilarity. Furthermore, the tool also calculates a TCR recognition probability based on the dissimilarity.
NeoPredPipe (61) is another tool which incorporates NetMHCpan into its pipeline. Like many of the other tools, the user has to submit files with patient haplotypes, SNVs and indels. Unlike the other tools, NeoPredPipe provides the opportunity for neoantigen prediction on multi-region sequencing data and also assesses intra-tumor heterogeneity, based on multi-region samples, where the neoantigen burden is reported for clonal, subclonal and shared variants. NeoPredPipe furthermore predicts the likelihood of TCR recognition, based on the probability of the mutant epitope binding to MHC I molecules and the epitope's similarity to pathogenic peptides.
TSNAD (62) is a tool which earlier had NetMHCpan integrated in its pipeline; however, in version 2.0, which was updated in 2019, NetMHCpan was replaced with the earlier mentioned DeepHLAPan to predict binding of the mutant epitopes to MHC I molecules. Like many of the other tools, TSNAD works by integrating multiple tools into its pipeline. The tool takes as input raw reads of tumor-normal DNA pairs. The sequences can be mapped to either GRCh37 or GRCh38. In the updated version, raw RNA-seq data can optionally be added to help filter neoantigens. The tool supports neoantigen prediction from SNVs and indels.
DeepAntigen (52) is a deep sparse neural network model based on group feature selection (DNN-GFS). Uniquely, this model bases its predictions on the DNA loci of the neoantigens from a 3D genome perspective. The authors discovered that the DNA loci of immunonegative and immunopositive MHC class I neoantigens have distinct spatial distributions. The model uses preprocessed WES and messenger RNA-seq for calling somatic mutations and estimating gene expression. The model also takes as input Hi-C (77) data (capturing chromosome conformation) for 3D genome information. However, this method can only predict neoepitopes from non-synonymous point mutations and 9-mer peptides. EDGE (66) is a commercial platform for neoantigen identification. The EDGE model is a neural network trained on HLA peptide mass spectrometry data and RNA-seq data from various human tumors. The model uses HLA class I type and sequence, RNA and peptide sequencing data, or peptides generated from somatic variant calling data to predict neoantigens. Although the model does not incorporate TCR binding, it is still to a certain extent able to capture T cell recognition with the addition of RNA expression.
DISCUSSION
In recent years, the number of computational tools for epitope and neoepitope prediction has exploded. In many cases, these tools combine the results of other methods, using different heuristic approaches, to perform their predictions. Unfortunately, the amount and quality of available data make it difficult to decide which of these approaches are sound, and which are not. As an example, many of the currently existing epitope and neoepitope prediction methods mainly focus on MHC presentation. This is because, from a quantitative point of view, MHC binding is the most selective step. According to Yewdell et al., around 1 in 200 peptides binds to MHC class I with an affinity strong enough (500 nM or lower) to induce an immune response (78). Other studies, such as Sette et al. (79), also indicated an MHC affinity threshold of 500 nM to be associated with T cell recognition of HLA class I bound peptides. Moreover, MHC binding is considered necessary but not sufficient for a molecule to be immunogenic: in general, only a minority of the predicted epitopes are immunogenic (80)(81)(82). However, this paradigm has been challenged on many occasions. In particular for neoepitopes, there is no general consensus that strong MHC binding is connected to immunogenicity. A recent study by Bjerregaard et al. (83) supports the theory that strong binders are immunogenic. Their study indicated that immunogenic neopeptides bind significantly more strongly than non-immunogenic peptides and that they in general bind with strong affinity. However, Duan et al. (49) deemed binding affinity scores alone, especially from NetMHC, not to be an effective predictor of tumor rejection and immunogenicity. In fact, in their study they noticed that the epitopes that did elicit tumor protection were in general not strong MHC class I binders. They therefore created an algorithm which subtracts the predicted NetMHC scores of the unmutated counterpart peptides from the NetMHC scores of the mutated peptides. This setup is referred to as the differential agretopicity index (DAI). The idea is that this can reflect to which degree the binding of mutated peptides differs from that of their unmutated counterparts (49). Even this score, however, performed poorly for identifying effective neoepitopes (84). Similar indications have also been made by (85) and (86), where it was shown that not only peptides predicted as strong binders but also peptides predicted as weak binders or non-binders are capable of initiating a T cell response. At the current stage, there is no clear consensus on the importance of MHC binding for identifying dominant epitopes and neoepitopes. Further studies will be needed to decide if and how the 500 nM threshold routinely used for peptide selection should be reconsidered.
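The hedged sketch below shows the arithmetic behind the DAI idea just described: the predicted binding score of the unmutated counterpart is subtracted from that of the mutated peptide. The peptides and scores are made-up stand-ins for predictor output, and the sign convention depends on whether the predictor reports affinities (lower = stronger) or rank scores.

```python
# Differential agretopicity index (DAI): mutant minus wild-type predicted binding.
def dai(mutant_score: float, wild_type_score: float) -> float:
    """Predicted binding score of the mutated peptide minus that of its unmutated counterpart."""
    return mutant_score - wild_type_score

# Hypothetical (mutant, wild-type) predicted affinities in nM
pairs = {"KVAELVHFW/KVAELVHFL": (35.0, 420.0), "SLYNTVATW/SLYNTVATL": (980.0, 50.0)}
for name, (mut_nm, wt_nm) in pairs.items():
    print(name, "DAI =", dai(mut_nm, wt_nm))
```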
The lack of experimental data is also among the causes of another potential problem. The datasets that are used to train these models are often very redundant: they contain many epitopes that are either identical or very similar. If not properly managed, redundancy can cause the tools to overfit: this means that their actual prediction accuracy on new data will be worse than the one reported in the publications. As a general suggestion, we encourage the users to check that the tools they are using take redundancy into account, for example by performing homology reduction procedures (87), rather than basing their choice on a purely numerical comparison of the accuracies reported in the papers.
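As a toy illustration of such a homology reduction step, the sketch below greedily keeps a peptide only if it is not too similar to any peptide already kept; real pipelines typically use alignment-based clustering (for example Hobohm-style schemes or CD-HIT), and the 80% identity cutoff used here is an arbitrary choice.

```python
# Greedy redundancy filter over a list of peptides (toy homology reduction).
def identity(a: str, b: str) -> float:
    """Fraction of identical positions between two peptides of equal length."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def homology_reduce(peptides, max_identity=0.8):
    kept = []
    for pep in peptides:
        if all(identity(pep, other) < max_identity
               for other in kept if len(other) == len(pep)):
            kept.append(pep)
    return kept

# The second peptide differs from the first at a single position and is removed
print(homology_reduce(["GILGFVFTL", "GILGFVFTV", "SLYNTVATL"]))
```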
A potentially very important but much less studied area is PTMs. Different PTMs exist, such as phosphorylation, ubiquitinylation, glycosylation, methylation and citrullination, to name a few. Peptides carrying aberrant PTMs have been considered potential neoepitope candidates. This is based on the theory that peptides with aberrant PTMs have not been exposed to the immune system and are thus potentially not subject to central tolerance. It has been shown that PTM self-antigens are capable of escaping central tolerance and being recognized by the immune system (88). Aberrant PTMs have been discovered in multiple cancers; increased levels of glycans have, for example, been observed in cancers such as breast cancer (89,90). However, identifying glycosylation sites as well as other PTM sites is not an easy task. Mass spectrometry is often not capable of identifying less abundant proteins due to its limited sensitivity, and capturing PTM information can thus be difficult given the generally low abundance.
Another less explored avenue is neoantigens derived from regions of the genome generally considered non-coding. Since these regions are less explored and studied, they are less utilized for analysis. Despite this, Laumont et al. (43) showed in their recent study that non-coding regions are possibly a considerable source of neoantigens.
There are still many events which are partially or completely disregarded by the current prediction models but can affect peptide binding and T cell recognition. Some examples include PTMs, the local environment, self-similarity, clonality, and non-coding derived peptides. Moving forward, a tool which covers as many different neoepitope-causing events as possible would be ideal. Another open question is whether some genomic aberrations are more effective than others for attacking the cancer cells. This raises the question of whether this is a generalized property or inherently specific to individual cancers, which would impair the effectiveness of one-size-fits-all models.
Some of the tools presented in this review have been used in developing therapies that are being tested in ongoing clinical and pre-clinical trials. To mention a few, the development of neoantigen-targeted personalized cancer treatments for cancers such as melanoma (91), glioblastoma (92) and non-small cell lung cancer (93) has shown promising results. In particular, the use of tools that rely heavily on MHC binding prediction has propelled the discovery of candidates for testing and use in targeted personalized immunotherapy in these studies. Even though these trials had encouraging results, they have also met some limitations with regard to the efficiency of the targeted immunotherapy, indicating that we are still in the early stages of development for neoepitope prediction tools. We envision that a growing amount of evidence on neoepitopes and on the ability of different tools to predict them will have a major impact on the development of better epitope and neoepitope prediction tools, and in turn help guide future immunotherapies. | 8,338.4 | 2021-09-15T00:00:00.000 | [
"Biology"
] |
Effects of graphene oxide on the geopolymerization mechanism determined by quenching the reaction at intermediate states
The effects of graphene oxide on the geopolymerization reaction products at different times were investigated by quenching the reaction. The phase composition and valence bond structure evolution were investigated systematically. The results show that the ethanol/acetone mixture helps isolate reaction products early in the process (0-24 h). RGO bonded well with the geopolymer matrix during the geopolymerization. The degree of densification increased and the amorphous nature of the material decreased with reaction time. The addition of rGO accelerated the conversion of five- and six-coordinate Al-O sites into four-coordinate ones, with Si atoms forming a Q4(3Al) network structure.
Thus, Xu et al. 27 used five-membered aluminosilicate framework ring models in ab initio calculations. Weng et al. 29,30 compared the hydrolysis and condensation reactions between low and high Si/Al ratio geopolymers. The partial charge model predictions and experimental results suggested possible Al and Si monomeric species that might form during the dissolution and condensation process. Mitchell et al. 34 reported that polar solvents (acetone or isopropyl alcohol) could help to stop hydration reactions, but aldol reactions can occur due to the presence of ν(C-H) during derivative thermogravimetric analysis of the reaction products. Chen et al. 31 demonstrated a solvent extraction method using a mixture of alcohol and acetone to stop the reaction and acquire samples for nuclear magnetic resonance (NMR) studies. Unfortunately, a detailed understanding of the entire process remains largely elusive.
Since the reaction products of geopolymers depend highly on the raw material, the alkali-activated solution and the curing conditions, 32 the structural changes in the geopolymer matrix used here have not been quantitatively characterized, especially in the early stages of the geopolymerization process. The reaction products often contain considerable water, which interferes with characterization tools such as Nuclear Magnetic Resonance (NMR) and X-ray diffraction (XRD). Thus, it is quite necessary to develop a method of arresting the reaction during the geopolymerization process.
In the present work, we describe such a method and thereafter systematically characterize the effects of graphene oxide on the initial products in terms of micrographs, phase composition, functional groups and valence bond structure evolution.
Preparation of reaction products
The alkaline mixture was synthesized by mixing silica sol with KOH for 3 days with the help of magnetic stirring. The rGO/geopolymeric alkaline mixture (GO powders/metakaolin 1 wt%) was prepared by dropping the obtained GO dispersion into the alkaline mixture and stirring for 15 min. Thereafter, the rGO/geopolymer slurry was prepared by adding the metakaolin powders into the alkaline mixture and mixing for 45 min using a high-shear mixer and ultrasonication in an ice bath. The continuous stirring in the ice bath ensured complete distribution and full dissolution of both metakaolin particles and rGO powders, resulting in a low-viscosity, homogeneous slurry. Then, the slurry was cast in plastic containers and cured at 60 °C for different times (0-24 h) to obtain the reaction products.
In order to remove water, some preliminary preparations were carried out. Slurry samples (if they had not set, 0-2 h) were directly added into a 50/50 (vol) ethanol/acetone mixture and stirred vigorously for around 5 min (around 1 g of sample per 100 mL of ethanol/acetone mixture), after which fresh solvent was added. This procedure was repeated three times, until the reaction products became particles. Samples that had set (3-24 h) were first divided into micron-sized particles using a mortar and pestle, and then mixed with the 50/50 (vol) ethanol/acetone mixture for around 5 min to remove water. The particle samples were then taken out and dried at room temperature, as can be seen in Fig. 1. Finally, a mortar and pestle was used to grind the samples for characterization.
Characterization
The micrographs of the geopolymer (KGP) and rGO/geopolymer (rGO/KGP) reaction products were observed using a scanning electron microscope (SEM, FEI, Helios NanoLab 600i). Fourier-transform infrared (FT-IR) spectra of raw kaolin, metakaolin and reaction products during geopolymerization were obtained on a Nicolet Nexus 6700 Fourier-transform infrared spectrometer. The phase compositions of the reaction products were examined by X-ray diffraction (XRD, Rigaku, RINT-2000) with Cu-Kα radiation. The 27Al and 29Si solid state NMR measurements were conducted using a Bruker Avance III 400 spectrometer with a magnetic field strength of 9.4 T and a 4 mm rotor. The relaxation delay times were 2 and 1 s for the 27Al and 29Si NMR spectra, respectively. The spinning rates were 5 kHz for Al and 12 kHz for Si. The chemical shifts of the 27Al and 29Si nuclei were referenced to Al(NO3)3 and tetramethylsilane, respectively.
Results and discussions
Micrographs of the particles of the reaction products of both the KGP and rGO/KGP samples formed at various reaction times are shown in Fig. 2. After stirring for 45 min, the reaction products exhibit a porous and spongy morphology, with many irregular surface voids (Fig. 2(a) and (h)). The rGO bonded well with the reaction products of the KGP matrix. The voids decreased significantly as time increased. At 3 h, the reaction products became dense, without obvious voids (Fig. 2(e) and (l)). The rGO was wrapped by the particles of the KGP matrix. After 24 h, both the KGP and rGO/KGP offer a relatively homogeneous and dense microstructure (Fig. 2(g) and (n)).
Fig. 3 shows the XRD patterns of the KGP and rGO/KGP products formed at various times. There was no obvious difference between the two types of samples. The patterns displayed the typical broad amorphous geopolymer humps around 17-32° 2θ. The amorphous content decreased with the reaction time. The minor α-quartz remained.
Fig. 7 27Al NMR spectra of the reaction products of rGO/KGP formed at various reaction times in the geopolymerization process: (a)-(g) correspond to 0 min, 30 min, 1 h, 2 h, 3 h, 6 h and 24 h, respectively.
Fig. 4 and 5 show the FT-IR spectra of kaolin, metakaolin and the reaction products of KGP and rGO/KGP formed at various reaction times. For the kaolin samples in Fig. 4, the bands at 3694, 3670, 3654 and 3620 cm-1 are associated with ν(O-H) in the kaolinite structure. 37,38 The bands at 1115, 1099, 1032 and 1008 cm-1 correspond to ν(Si-O) from SiO4. Sharp bands located at 937 and 913 cm-1 were attributed to ν(Al-OH) vibrations. 38 The IR peaks at 795 and 755 cm-1 are assigned to ν(Si-O-Al). The bands located at 696, 539 and 470 cm-1 were attributed to Si-O, Si-O-Al and Si-O vibrations from AlO6, respectively. 39 After treating at 800 °C for 2 h, the transformation to metakaolin removes most of these bands, leaving three obvious peaks located at 1090, 803 and 463 cm-1, attributed to ν(Si-O) from SiO4, ν(Al-O) from AlO4 and Si-O vibrations, respectively. It can be concluded that the Si-O and Al-O bonds hydrolyze during geopolymerization. 32 The FT-IR spectra during geopolymerization over 24 h are also shown in Fig. 4 and 5. There was no obvious difference between the FT-IR spectra of the KGP with and without rGO. The bands positioned at 3500 and 1659 cm-1 were attributed to ν(O-H), indicating that adsorbed atmospheric water existed in the molded geopolymer sample. 40 The intensity of the band located at 463 cm-1 increased with reaction time, suggesting the formation of a greater number of Si-O-Si units. At 6 h, the intensity of the bands at 593 and 717 cm-1 increased, related to increases in Si-O-Al and four-coordinated AlO4 structural units in the reaction product. These represent the structural changes and rearrangement of Al units occurring during geopolymerization. Different from the FT-IR spectrum of the metakaolin, a weak shoulder located at 880 cm-1 appeared and increased with time, corresponding to carbonate from atmospheric air.
The intense band related to ν(Si-O) shifts from 1090 to 1022 cm-1 after 24 h, caused by the presence of Al-O bonds owing to silicon substitution by aluminum in the second coordination sphere and the generation of Si(Al)-O units. 41 The Al-O band at 803 cm-1 decreases with the reaction time and disappears after 6 h, confirming the dissolution of metakaolin and the disruption of the Al environment during the geopolymerization process.
It is difficult to distinguish the characteristic rGO peaks in the FT-IR spectra of the rGO/KGP products due to their small content. The further influence of rGO on the reaction products could be detected by 27Al and 29Si solid state NMR. Fig. 6, 7 and Table 1 show the 27Al NMR spectra and the Gaussian fits of raw kaolin, metakaolin and the reaction products of KGP and rGO/KGP formed at various times. As shown in Fig. 6(a), the 27Al chemical shift of raw kaolin is 0.92 ppm, corresponding to six-coordinated aluminum. The three peaks at 53.3, 29.5 and 3.1 ppm in the 27Al NMR spectrum of metakaolin (Fig. 6(b)) can be attributed to 4-, 5- and 6-coordinate Al. The 5-coordination accounts for 48% of total Al atoms, the major species in metakaolin.
The broad peaks arise due to the highly distorted geometry at the aluminum sites. 32 After metakaolin reacts with the alkaline solution for 0-30 min, the Al coordination states of both the KGP (Fig. 6(c) and (d)) and the rGO/KGP (Fig. 7(a) and (b)) show no obvious change. The three peaks in the 27Al NMR spectrum were attributed to 4-, 5- and 6-coordinate Al. There was relatively more 4-coordinate Al in the reaction products of rGO/KGP than in the KGP matrix (Table 1), which may be attributed to the accelerating effect of rGO on the initial products early in the geopolymerization process.
The 5- and 6-coordinated Al atoms in the KGP samples appear to transform gradually to 4-coordination during geopolymerization over 1-3 h. The relative content of 4-coordinated Al atoms increased to 71.7% within 2 h (Fig. 6(f)). For the rGO/KGP samples, the relative contents of 4-, 5- and 6-coordinate Al atoms were 46.8%, 37.5% and 15.7% within 1 h, respectively (Table 1). The species of Al atoms did not change, whereas the 4-coordinated Al atoms increased to 69.8% within 2 h (Fig. 7(d)). When the reaction time was longer than 3 h, the 4-coordinated Al atoms (56.6 ppm) became the main species (Fig. 7(e)). The five-coordinated Al in the KGP and rGO/KGP products disappeared after 6 h (Fig. 6(h) and 7(f)).
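As an illustration of how such relative contents can be obtained, the hedged sketch below fits three Gaussian components near the chemical shifts quoted above and converts the fitted peak areas into fractions; the synthetic spectrum and all numerical values are placeholders, not the measured data.

```python
# Toy Gaussian deconvolution of a 27Al NMR spectrum into 4-, 5- and 6-coordinate components.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(x, a1, c1, w1, a2, c2, w2, a3, c3, w3):
    g = lambda a, c, w: a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return g(a1, c1, w1) + g(a2, c2, w2) + g(a3, c3, w3)

# ppm axis and a synthetic spectrum standing in for measured data
ppm = np.linspace(-20, 90, 500)
spectrum = three_gaussians(ppm, 1.0, 56.3, 6, 0.4, 29.5, 8, 0.2, 3.1, 6)
spectrum += np.random.normal(0, 0.01, ppm.size)

p0 = [1, 55, 5, 0.5, 30, 5, 0.3, 3, 5]  # initial guesses near the expected shifts
popt, _ = curve_fit(three_gaussians, ppm, spectrum, p0=p0)

# Relative content of each coordination from the fitted peak areas (area ~ amplitude x width)
areas = [abs(popt[i] * popt[i + 2]) for i in (0, 3, 6)]
fractions = [a / sum(areas) for a in areas]
print([round(f, 3) for f in fractions])
```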
After 24 h, the peaks of both the KGP and rGO/KGP products at 56.3 ppm became sharper, indicating that the Al-O coordination in the final geopolymer converts to four-coordinate (Fig. 6(i) and 7(g)). The shift to 4-coordinated Al (from 53.3 to 56.3 ppm) likely arises because of the extensive formation of Si-O-Al links. 29,32 Compared with the KGP matrix products, the addition of rGO accelerated the conversion to 4-coordinated Al in the rGO/KGP products at the early stage (0-30 min) of the geopolymerization process, while the effect of rGO was not obvious at the later stage (1-24 h). This was because the KGP particles likely formed preferentially, attached to and grew on the surface of the rGO sheets; once the matrix solidified, the rGO surfaces were entirely covered by the geopolymerization products, so further promotion of geopolymerization was not obvious.
Fig. 8 29Si NMR spectra of kaolin and reaction products of KGP formed at various reaction times in the geopolymerization process: (a)-(i) correspond to kaolin, metakaolin, 0 min, 30 min, 1 h, 2 h, 3 h, 6 h and 24 h, respectively.
The 29Si NMR spectra of kaolin, metakaolin and the reaction products of both the KGP and rGO/KGP formed at various reaction times are given in Fig. 8, 9 and Table 2. The 29Si MAS NMR spectrum of raw kaolin consists of a single resonance centered at -92.4 ppm assigned to Si-O-Si linkages only (Fig. 8(a)). 42 According to previous studies of aluminosilicate materials, 29,30,32 the broad 29Si NMR peak could be divided into five possible silicon Q4(mAl) species. Two peaks at -99.3 and -90.8 ppm correspond to tetra-coordinated Si atoms (Q4(1Al) and Q4(3Al)). Q4(1Al) is the major species in the metakaolin (Fig. 8(b)).
However, after 24 h, only one broad 29Si NMR peak of KGP at -90.1 ppm is seen (Fig. 8(i)). Peaks at -87.0 and -90.1 ppm are observed for rGO/KGP (Fig. 9(g)), indicating that Si is present mainly as Q4(3Al) structural units. Compared with the pure KGP, the addition of rGO accelerated the conversion of the Si structural units from Q4(1Al) to Q4(4Al), Q4(3Al), Q4(2Al) and Q4(1Al) species. Thus, the addition of rGO affects the Si structure during the geopolymerization and has a positive influence on the generation of Q4(3Al) species.
To sum up, the effects of GO on the geopolymerization of the rGO/KGP can be illustrated as in Fig. 10. For the KGP (Fig. 10(a)), the geopolymerization mechanism can be rationally expressed, according to the experimental analysis, as follows. First, metakaolin particles dissolve from the surface after mixing with the alkaline silicate solution; the Si-O and Al-O bonds hydrolyze; Si and Al monomers diffuse, polycondense and rearrange; five- and six-coordinate Al-O sites convert to four-coordination, condensing Si species with Al species mainly in the form of Q4(3Al) with Al in four-coordination.
GO can be reduced in situ to rGO in alkaline silicate solutions and has positive effects on the geopolymerization during the reaction process. 22 As described in Fig. 10(b), the rGO accelerates the conversion of Al-O sites into four-coordination and of Si atoms into the Q4(3Al) form, but does not change the final network structure of the rGO/KGP. The reaction products bond well with the rGO sheets and show a denser microstructure and a lower amorphous degree with the increase in reaction time. Based on the above analyses, the addition of GO is appropriate for preparing rGO/KGP composites. Meanwhile, the presence of rGO contributed to the enhancement of the mechanical performance of the geopolymer. As reported in our previous studies, 23,24 compared with pure geopolymer, improvements in mechanical properties were achieved through rGO reinforcement of 0.05-1 wt% at room temperature. With 1 wt% GO addition, the fracture toughness and flexural strength of the rGO/geopolymer composites increased by approximately 30% and 7%, respectively, attributed to proper interface bonding, crack deflection and propagation, and rGO pull-out.
Conclusions
In the present study, a method to stop the geopolymerization reaction was provided and the effects of graphene oxide on the geopolymerization mechanism of a geopolymer based on natural metakaolin were investigated systematically.
(1) The reaction products of KGP and rGO/KGP in the geopolymerization process (0-24 h) can be isolated by introducing ethanol/acetone mixtures. The voids in the reaction products decrease significantly with the geopolymerization time. The products display typical broad amorphous humps around 17-32° 2θ, and the amorphous degree decreases with the reaction time.
(2) During geopolymerization, the metakaolin dissolved in the alkaline silicate solution, the Si-O and Al-O bonds hydrolyzed, and five- and six-coordinate Al-O sites converted into four-coordinate ones, condensing the network structure in which Si is mainly in the form of Q4(3Al) and Al is in four-coordination.
(3) The addition of GO accelerated the conversion of Al-O sites into four-coordination and of Si atoms into the Q4(3Al) form. The reaction products of the geopolymer matrix bonded well with the rGO sheets and showed a denser microstructure and a lower amorphous degree with the increase in reaction time.
Fig. 1
Fig. 1 Photographs of (a) KGP and (b) rGO/KGP samples obtained at different reaction times.
Fig. 3
Fig. 3 XRD patterns of the reaction products of (a) KGP and (b) rGO/KGP formed at various reaction times in geopolymerization process.
Fig. 4
Fig. 4 FT-IR spectra of kaolin, metakaolin and reaction products of KGP formed at various reaction times in the geopolymerization process.
Fig. 5
Fig. 5 FT-IR spectra of metakaolin and reaction products of rGO/KGP formed at various reaction times in the geopolymerization process.
Table 1
Gaussian fit results of 27Al NMR spectra of reaction products formed at various reaction times
Table 2
29Si NMR spectra results of reaction products formed at various reaction times | 3,666.6 | 2017-02-24T00:00:00.000 | [
"Materials Science"
] |
Evaluation of Tensile Strength of a Eucalyptus grandis and Eucalyptus urophyla Hybrid in Wood Beams Bonded Together by Means of Finger Joints and Polyurethane-Based Glue
Created in the 1940s, the finger-joint type of splice for wood is now increasingly used to compose wood-based structural materials such as Glued Laminated Timber (Glulam) and Cross Laminated Timber (CLT). The main advantage of this splice is to provide a simple and economical way to join timber segments lengthwise. This study evaluated, by means of tensile tests, the capacity of this type of joint (structural dimension of 21 mm) to bond together Lyptus® wood beams (a Eucalyptus grandis and Eucalyptus urophyla hybrid) using Jowat polyurethane glue (Model 680.20), as compared to similar seamless beams. The results indicate that the seamless beams are 47.72% more resistant to traction (in characteristic values) than those with finger joints. However, to form structural elements where there is redundancy from overlapping parts, such as Glulam and CLT, the values obtained can be considered satisfactory. It is also noted that denser samples yield better tensile results due to better bonding of the densest parts. The use of finger joints and polyurethane adhesive to bond hybrid eucalyptus, although more brittle than wood without seams, enables the use of shorter wood sections to compose larger structural elements, making better use of the forest material.
Introduction
The invention of finger joints for wooden structures is commonly attributed to Karl Egner and Jagfeld of the Otto Graf institute at the Technische Hochschule Stuttgart prior to the Second World War. This technique was employed by German forces to repair structural damage resulting from bombing from 1939 to 1945 1. A 1947 study conducted by these authors included reviews of finger joints in wooden bridges built in 1937. However, despite this being the first known reference to the structural use of finger joints, the German and American automotive industries had already employed some kind of toothed joint in the manufacture of wooden steering wheels and wooden parts of car wheels in the 1920s 2.
Finger joints and glue can be used to bond together two pieces of wood lengthwise on the same plane without resorting to hardware or wooden dowels, so as to obtain longer pieces of wood and thereby increase the use of shorter pieces. The technique also allows large knots to be removed from wood pieces, which are then joined back together afterwards. The advantage of finger joints over other types of joints, such as beveled or top joints (Figure 1), is that top joints yield very low mechanical strength, i.e., they do not transmit efforts to the adjacent piece effectively, whereas beveled joints, though yielding good strength, demand a lot of wood to be manufactured, since bevels must have a 1:10 slope 3. Thus finger joints provide an economical approach for joining piece segments longitudinally 4.
The use of wood pieces bonded together lengthwise is particularly suitable for manufacturing glued laminated timber (Glulam) beams and CLT (Cross Laminated Timber) boards, in which wood lamellas are glued together to obtain a wooden beam, arch or board with special dimensions. Section 5.7.4 of the draft revision (2011) of the Brazilian Standard for Wooden Structures, NBR 7190 5, addresses the use of Glulam and indicates the required dimensions for finger joints: in order to be considered structural, finger joints must have the dimensions listed in Table 1, and the geometric parameters cited in Table 1 are presented in Figure 2.
For finger joints to yield the necessary strength to withstand tensile loads, it is necessary to use glues whose structural features and properties are compatible with the environmental conditions to which the wooden structure will be subjected during its service life. There are several structural wood glues, the most common ones being those based on phenol resorcinol, melamine formaldehyde, and polyurethane. Gluing parameters, e.g., quantity, pressure, and pressing time, vary according to manufacturers. In the absence of manufacturer parameters, NBR 7190 recommends the pressure for finger joints to be at least 0.7 MPa for wood with density below 0.5 g/cm³ and 1.2 MPa for wood with density above 0.5 g/cm³, with a pressing time of six hours at about 20 degrees Celsius and relative humidity around 65% 5. According to Appendix B (Determination of properties of wood for design of structures) of NBR 7190 (Brazilian Standard for Wooden Structures) 5, which defines testing methods for structural wooden elements, the tensile strength parallel to the fibers (f_wt,0 or f_t0) is given by the maximum tensile stress that can act on an elongated sample whose central portion has a uniform cross-section area A, length equal to or above 8√A, ends stronger than the central portion, and concordances that ensure rupture at the central portion. Denser timbers tend to have higher tensile strength in finger-jointed splices than less dense woods 6; this behavior is shown in published results, especially when comparing denser dicotyledonous woods with less dense conifers. This study evaluated the finger-joint type of splice for Lyptus wood, a hybrid of two eucalyptus species (Eucalyptus grandis and Eucalyptus urophyla). This timber has an average density of 750 kg/m³ and can be considered a medium-density wood. The samples analyzed were of structural size and were divided into two groups, one with finger-joint splices and the other without splices, cut from the same pieces; i.e., the tested samples were 2 meters long, and the original pieces that gave rise to the samples with and without finger joints were 4 meters long. The samples were manufactured and supplied by Ita Construtora, a Brazilian company that designs, manufactures and assembles Glulam beams. The polyurethane-based glue in question is manufactured by Jowat (Model 680.20) 7.
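A minimal sketch of two of the rules quoted above, assuming force in N and cross-section area in mm² so that the stress comes out directly in MPa, is given below; the numerical inputs in the usage lines are hypothetical.

```python
# Minimum gluing pressure per the NBR 7190 rule quoted above, and f_t0 = F_t0 / A.
def min_gluing_pressure_mpa(density_g_cm3: float) -> float:
    """Minimum finger-joint pressing pressure recommended by NBR 7190."""
    return 0.7 if density_g_cm3 < 0.5 else 1.2

def tensile_strength_mpa(max_force_n: float, area_mm2: float) -> float:
    """Tensile strength parallel to the fibers: f_t0 = F_t0 / A (N / mm^2 = MPa)."""
    return max_force_n / area_mm2

print(min_gluing_pressure_mpa(0.75))         # Lyptus, about 750 kg/m3 -> 1.2 MPa
print(tensile_strength_mpa(120_000, 5_000))  # hypothetical test values -> 24.0 MPa
```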
Materials and methods
In order to prepare the test samples, specific cutters were employed to carve finger joints with a final tooth length (L) of 21 mm. Soon afterwards, Jowat glue (Model 680.20) was applied to them at the spread rate (weight) recommended by the manufacturer. The pieces were then pressed at a load of 0.7 MPa using a specific press.
The tensile tests were performed on a Metriguard Model 422 machine, which measured the tensile strength and rupture mode of the samples with and without finger joints. The tensile strength parallel to the fibers is obtained as f t0 = F t0 / A, where F t0 is the maximum tensile force applied to the sample during the test, expressed in newtons (N); A is the cross-section area, expressed in m²; and f t0 is the tensile strength parallel to the fibers, expressed in MPa.
The European Standard EN 408 8 presents in item 13.1 the test setup to be used, as Figure 3 shows. The characteristic tensile strength values of the samples are estimated according to Item B.3 of Appendix B of NBR 7190 5, which describes testing methods for determining the properties of wood for structural design. In the corresponding expression, X wk is the generic characteristic strength, in MPa; n is the number of samples; X 1 is the strength of Sample 1, in MPa; X 2 is the strength of Sample 2, in MPa; and X n is the strength of the n-th sample, in MPa.
The standard in question recommends the characterization of at least 12 samples, whose strength values should be placed in ascending order (X 1 < X 2 < ... < X n), disregarding the highest value if the number of samples is odd, and not taking X wk lower than X 1 nor lower than 0.7 of the mean value (X m).
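The expression itself is not reproduced in the text above. For reference, the estimator given in Appendix B of NBR 7190 is commonly written as below; this is reconstructed from the standard rather than from this paper, so it should be checked against the cited draft revision:

X wk = 1.1 × [ 2 × (X 1 + X 2 + ... + X n/2−1) / (n/2 − 1) − X n/2 ],  with X 1 ≤ X 2 ≤ ... ≤ X n,

subject to the conditions stated above, i.e., X wk is not taken lower than X 1 nor lower than 0.7·X m.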
Table 2 shows the dimensions of each sample.
Results and discussion
Table 3 shows the test results for the beams with finger joints, including the mode of rupture, i.e., the type of rupture (wood or adhesive) and the amount of each as a percentage of the cross-sectional area of the test region, as recommended by point 'n' of item 7.1.3.4 of DIN EN 385 9.
Table 4 shows dimensions and results for samples with no finger joints.
The characteristic tensile strength value found for the samples with finger joints was 24.21 MPa; this value is greater than the lowest individual sample strength (23.63 MPa) and higher than the mean value (X m = 19.10 MPa), thereby meeting the requirements of Item B of NBR 7190 5.
The characteristic tensile strength value found for the samples without finger joints was 40.27 MPa; this value is greater than the lowest individual sample strength (22.06 MPa) but lower than the mean value (X m = 50.73 MPa), while still satisfying the aforementioned lower limits of NBR 7190 5. Table 6 shows the variability between samples with and without finger joints (%), Figure 11 compares density with tensile strength for the samples with finger joints, and Figure 12 shows the same comparison for the samples without finger joints.
The analysis of the results indicates that, of all the samples under investigation, those without finger joints yielded the highest tensile test results. This is certainly due to the fact that their fibers have not been interrupted, unlike those with finger joints, which do not transmit tensile loads as effectively in spite of their glued surface. Denser wood species, like the denser samples here, show higher tensile strength in finger-jointed splices than less dense species, as shown in Figure 11. This behavior is found in the literature: the Keruing species (Dipterocarpus sp.), which has a density of 780 kg/m³, obtained average tensile strength results of 63.76 MPa with polyurethane (PU) adhesive in finger-joint splices, while the less dense coniferous species Southern Pine and Douglas Fir obtained mean values of 55.99 MPa and 54.64 MPa, respectively 6.
In tests with the wood species Manilkara sp. (Maçaranduba), joining two pieces with finger joints and polyurethane (PU) adhesive yielded average tensile strength values 73.78% lower for the jointed samples than for the samples without joints 10. Likewise, for a resorcinol-phenol-based adhesive joining pieces of Eucalyptus grandis, the tensile strength results reported by the authors were twice as high for unjointed wood as for finger-jointed wood 1.
The mean tensile strength of the seamless (unjointed) samples was 167% higher than that of the samples with finger joints. Sample 8 was the only sample yielding similar values in both situations; it obtained a tensile strength much lower than the mean value of the test on samples without joints. However, this discrepant behavior does not represent any statistically relevant tendency.
The characteristic values (Table 5) indicate a lower variability (47.72%); this statistical estimation is the one recommended by the Brazilian standard for wooden structures, NBR 7190 5, for characterizing consignments of wood and new wood species.
As the samples with and without joints derive from the same wood samples, the comparison between their densities and tensile strengths indicates similar behavior, as shown in Figures 11 and 12.
The glue-wood behavior was shown to be inadequate. The data in Table 3 indicate that 50% of the samples in question broke along the gluing line, i.e., the glue was not capable of transmitting the forces efficiently and ruptured before the wood fibers did in half of the cases under investigation. In the samples whose wood fibers did rupture, the adhesive also failed. Furthermore, the glue was weaker than the wood in all cases, as shown in Figure 5.
Conclusion
It is possible to conclude that samples made of Lyptus® wood (a Eucalyptus grandis and Eucalyptus urophyla hybrid), bonded together by finger joints with 21 mm teeth and Jowat polyurethane glue (Model 680.20), yield a tensile strength 47.72% lower than samples made of the same wood without finger joints.
It is also concluded that, although the tensile strength of finger-jointed pieces is lower, the use of finger joints allows short pieces to be used for structural purposes, taking better advantage of the available natural resource and making it possible to compose Glulam or CLT elements in which stiffer elements provide redundancy. In this case, laboratory tests should verify that the requirements for each product are respected, for example, the minimum distances between joints on the same lamella stipulated by paragraph 5.7.4.7.1 of the draft revision of the Brazilian standard NBR 7190. In addition, it is recommended that different combinations of glue, wood species, and chemical treatment be tested to verify the quality of finger-joint bonding in structural beams.
The results confirm the literature and show that the density of the piece influences tensile strength. This can be attributed to the longitudinal orientation of the joint, along which the fibers are arranged at the tips of the bonded pieces: denser woods have more fibers per unit area. However, other natural or anatomical characteristics of the wood may also interfere, so it is not possible to state that this relationship holds for all wood species.
Table 1: Required finger-joint dimensions for structural use (geometric parameters illustrated in Figure 2).
Figure 2: Geometric parameters for finger-joint splices 5.
Figure 3: Tensile test setup for finger-jointed specimens according to EN 408 8.
Figure 5: Specimen with finger joint right after rupture (50% in the wood and 50% in the glue).
Figure 6: Chart showing tensile strength values for samples with finger joints.
Figure 7: Histogram showing tensile strength values for samples with finger joints.
Figure 8: Chart showing tensile strength values for samples without finger joints.
Figure 9: Histogram showing tensile strength values for samples without finger joints.
Figure 10: Chart comparing tensile strength values of samples with and without finger joints.
Figure 11: Chart comparing density to tensile strength for samples with finger joints.
Figure 12: Chart comparing density to tensile strength for samples without finger joints.
Table 2: Dimensions of specimens with finger joint.
Table 3: Tensile test results for specimens with finger joints.
The statistical values of the tests are shown in Figures 6, 7, 8 and 9, and the comparative tensile strength values of the samples with and without finger joints are shown in Figure 10.
Table 4: Tensile test results for specimens without finger joints.
Table 5: Comparison between characteristic tensile test values. Columns: characteristic value with finger joint (MPa); characteristic value without finger joint (MPa); variability (%).
Table 6: Variability of tensile values between specimens with and without finger joints. | 3,407.6 | 2016-09-22T00:00:00.000 | [
"Materials Science"
] |
Immunohistochemical and Proteomic Evaluation of Nuclear Ubiquitous Casein and Cyclin-Dependent Kinases Substrate in Invasive Ductal Carcinoma of the Breast
Nuclear ubiquitous casein and cyclin-dependent kinases substrate (NUCKS) is a 27 kDa chromosomal protein of unknown function. Its amino acid composition, as well as the structure of its DNA-binding domain, resembles that of the high-mobility group A (HMGA) proteins. HMGA proteins are associated with various malignancies. Since changes in HMGA expression are considered a marker of tumor progression, similar changes in NUCKS expression could be a useful tool in the diagnosis and prognosis of breast cancer. For the identification and analysis of NUCKS we used proteomic and histochemical methods. Analysis of patient-matched samples of normal and breast cancer tissue by mass spectrometry revealed elevated levels of NUCKS in protein extracts from ductal breast cancers. We elicited specific antibodies against NUCKS and used them for immunohistochemistry in invasive ductal carcinoma of the breast. We found high expression of NUCKS in 84.3% of cancer cells. We suggest that such overexpression of NUCKS can play a significant role in breast cancer biology.
Introduction
Nuclear ubiquitous casein and cyclin-dependent kinases substrate (NUCKS) is a nuclear DNA binding protein occurring in almost all types of human cells, adult and fetal tissues [1,2]. The NUCKS gene is located on human chromosome 1q32.1 and consists of seven exons and six introns. It has all the features of being a housekeeping gene [2]. Although its biological function is poorly understood, the structural similarity to the high-mobility group A (HMGA) proteins suggests that it plays a role in regulation of chromatin structure and its activity (for a review see [3,4]). The HMGA proteins modulate DNA structure altering transcription of several genes by either facilitating or impeding binding of transcription factors [5].
The benign human tumors involved are mainly of mesenchymal origin and result from chromosomal rearrangements. These include lipomas [6,7], uterine leiomyomas [8,9], and pulmonary chondroid hamartomas [10], as well as tumors consisting of epithelial and mesenchymal parts, such as breast fibroadenomas [11]. Rearrangements and overexpression of the hmgA genes were also described in nonmesenchymal benign human tumors, such as pituitary adenomas [12]. High expression of HMGA proteins was observed in all neoplastic tissues analyzed, including pancreatic, thyroid, colon, breast, lung, ovarian, uterine cervix and body, prostate, and gastric carcinomas (for review see [4]). High levels of HMGA are also causally related to neoplastic cell transformation. Finally, HMGA proteins are also involved in hematological neoplasia [13]. It was suggested that HMGAs might be promising targets for therapeutic drugs aimed at alleviating pathological conditions [14]. It is worth mentioning that in normal tissues the HMGA protein level is low or even undetectable. Usually, high HMGA expression correlates with bad prognostic factors and metastases. It was assumed that HMGA gene expression might be used as a marker of tumor progression [15]; therefore it can also be considered that NUCKS plays a similar practical role in histopathological analysis.
The abundance of NUCKS in rapidly growing cells, as well as the overexpression of nucks mRNA in ovarian cancer [16], suggests that it may be involved in facilitating and maintaining the transcription of some genes during rapid proliferation and in cancer. Until now, NUCKS has been studied in detail using a variety of biochemical and cell biological methods [1,2,17-19]. These analyses, however, do not relate the occurrence of the protein to histological grade or to a particular cell type. In this work we analyzed the occurrence of NUCKS in breast carcinoma. Using proteomic methods, we demonstrated that NUCKS is highly overexpressed in invasive ductal cancer (IDC). Immunohistochemical analysis confirmed this finding and also revealed abundant expression of NUCKS in 26 cases of this cancer. This type was found to be the most frequent malignant tumor of the breast [20] and presents very serious therapeutic and socioeconomic problems. The estimated numbers of new breast cancer cases in the United States in 2008 were 182,460 (female) and 1,990 (male), with about 41,000 deaths (National Cancer Institute; http://www.cancer.gov/cancertopics/types/breast).
Tissues and Protein Extraction.
Samples of IDC of grades II or III were retrieved during surgery. Analysis of the samples followed an informed consent approved by the local ethics committee. The entire protein extraction procedure was carried out as described previously [21]. Briefly, frozen tissue was homogenized with 3 vol. (m/v) of 5% (v/v) HClO4 using an IKA Ultra-Turrax blender, and the homogenate was centrifuged at 15,000× g for 5 minutes. Proteins were precipitated with 33% (w/v) CCl3COOH for 30 minutes and collected by centrifugation at 15,000× g for 10 minutes.
SDS-PAGE and Protein Digestion.
Aliquots of protein fractions containing NUCKS were separated by SDS-PAGE, using NuPAGE Novex Bis-Tris 4%-12% gels (Invitrogen, Carlsbad, CA, USA) and MES running buffer according to the manufacturer's instructions. The gel was stained with Coomassie Blue using Colloidal Blue Staining Kit (Invitrogen).
The NUCKS bands were subjected to a standard in-gel trypsin digestion protocol [22]. Briefly, the gel pieces were washed twice with 25 mM ammonium bicarbonate and dehydrated with absolute ethanol. Subsequently, 0.5 μg of trypsin (Promega, Madison, WI) solution in 25 mM ammonium bicarbonate was added, and the enzyme was allowed to digest overnight at 37 °C. The peptide mixtures were extracted with 80% CH3CN, 1% CF3COOH (TFA), and the organic solvent was evaporated in a vacuum centrifuge. The resulting peptide mixtures were desalted using in-house made C18 STAGE tips [23], vacuum-dried, and reconstituted in 0.05% CF3COOH prior to the analysis.
LC-MS/MS Analysis.
Peptide mixtures were separated by online reversed-phase nanoscale capillary liquid chromatography and analyzed by electrospray tandem mass spectrometry as described previously [24]. The data were searched against the full International Protein Index (IPI) database with the aid of the MASCOT (Matrix Science, London, UK) search engine [25], followed by manual verification. All peptides identified from the NUCKS gel bands are listed in Supplementary Table 1.
Antibodies.
Antibodies against NUCKS were elicited in rabbits using the synthetic peptide DEDYGRDSGPPTKKC (residues 23-36) conjugated to ovalbumin (Imject Maleimide Activated Ovalbumin, Pierce). Animals were injected with 0.2 mg of cross-linked peptide. The titer and specificity of the antisera were increased by three further injections (boosts).
Monospecific antibodies were purified on affinity columns which were prepared by coupling of the peptide to iodoacetate activated gel (SulfoLink, Pierce) according to the manufacturer's protocol using 1 mg of the peptide per 1 mL of the coupling gel. The unreacted maleimide groups were quenched with 20 mM cysteine. Then 1 mL of the gel was loaded into a column and washed with 50 mL PBS. 5 mL of the antiserum was diluted with 5 mL PBS and incubated with the gel for 4 hour. Following washing of the gel with 50 mL PBS the bound antibodies were eluted with 10 mL of 0.1 M glycine-HCl, pH 2.5, and after that with 10 mL 4 M guanidine hydrochloride in PBS. The eluates were diluted with 10 mL of PBS and concentrated in Centriprep-Ultracel YM-10 (Millipore) concentrators to a volume of 2 mL. The concentrates were dialyzed against PBS overnight. The guanidine hydrochloride fraction of antibodies was used in all experiments.
Western Blotting.
For Western analysis, proteins were transferred from SDS gels onto the nitrocellulose membrane by electroblotting at 10 V/cm for 40 min. The proteins were cross-linked to the membrane by incubation in 0.5% (v/v) glutaraldehyde in PBS for 10 min. The membranes were blocked with 10% (v/v) normal goat serum (NGS) for 30 min prior to incubation with the primary antibodies together with 1% NGS in PBS containing 0.1% Tween 20 (PBS-T). The concentration of the primary antibodies was 1.5 μg/mL. Following a 2-hour incubation, the membrane was extensively washed with PBS-T. Primary antibodies were visualized on the membrane with ECL peroxidase-conjugated IgGs. After the detection of antibodies, blots were stained with Amido Black stain (Sigma, St Louis, MO).
Histopathology.
26 samples of breast cancer, derived from 26 different female patients, were investigated in this experiment. In particular, these were invasive ductal carcinomas (IDC) of grade (G) I (10 cases), II (6 cases), and III (10 cases). Cancer tissue was excised during radical mastectomy or tumorectomy, depending on the result of the previously performed imaging studies, fine-needle aspiration biopsy, oligobiopsy, or intraoperative diagnosis.
The samples from the tumors were fixed in 4% formalin and embedded in paraffin. After that, the paraffin blocks were cut on microtome to obtain 4 μm thick slices which were mounted on glass slides and stained with hematoxylin-eosin (HE) or with antibodies. Evaluation of HE-stained samples was performed by use of light microscope (OLYMPUS BX50) at magnification 100 or 200 times.
Immunohistochemical Staining.
Following deparaffinization, the antigenic determinant was retrieved by normal-pressure cooking in 0.01 M sodium citrate (pH 6) for 9 minutes in a 350 W microwave oven. The slides were blocked for endogenous peroxidase (Peroxidase Blocking Reagent, DAKO Cytomation) and incubated overnight at 4 °C with anti-NUCKS antibodies at a concentration of 15 μg/mL in PBS containing 125-fold diluted swine serum. The binding of the primary antibodies was visualized using the ABC method (LSAB kit, DAKO Cytomation) and stained with DAB (DAKO). PBS was used at each step. Finally, the slides were counterstained with hematoxylin and rinsed in tap water.
The percentage of positively stained cancer cells was evaluated in 5 fields from the centre of the tumor: the number of positively stained cells (nuclei and cytoplasm) was divided by the total number of cancer cells seen in that field at a magnification of 400 times. The mean value from these 5 fields is given in the Results section. The selection criterion for the 5 measured fields was the centre of each sample, as assessed by two pathologists. We also evaluated staining in other cells: lymphocytes and endothelial cells were used as a positive control, because it was suggested that NUCKS is localized in both proliferating and nonproliferating cells [18]. A negative control was obtained by omitting the primary antibody.
The microphotographs were taken using a light microscope (OLYMPUS BX40) at a magnification of 200 times and a DP10 digital camera. Two independent pathologists evaluated each slide. A statistical analysis of the correlation between histological grading and the number of positively stained cells was performed using one-way analysis of variance by ranks.
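For readers who wish to reproduce this kind of analysis, the sketch below illustrates the per-case averaging over 5 fields and the one-way analysis of variance by ranks (the Kruskal-Wallis test, as implemented in SciPy) on hypothetical staining percentages; the values are illustrative placeholders, not the study's data.

```python
# Minimal sketch of the statistics described above, using hypothetical data.
import numpy as np
from scipy.stats import kruskal

# Hypothetical per-field staining percentages for one case (5 fields),
# averaged to give that case's value.
fields_case1 = [72.0, 75.5, 80.0, 78.0, 82.0]
case1_mean = np.mean(fields_case1)            # mean over the 5 fields

# Hypothetical per-case mean percentages for the three grade groups.
grade1 = [77.5, 74.0, 79.2, 76.8, 78.1, 75.0, 80.3, 77.9, 76.2, 78.6]  # 10 cases
grade2 = [84.0, 86.5, 83.2, 85.1, 87.0, 84.8]                          # 6 cases
grade3 = [89.6, 91.0, 88.2, 90.5, 87.9, 92.1, 89.0, 90.8, 88.7, 91.5]  # 10 cases

# One-way ANOVA by ranks (Kruskal-Wallis) across the three groups;
# pairwise comparisons (e.g., GI vs GII) use the same call with two groups.
h_stat, p_value = kruskal(grade1, grade2, grade3)
print(f"case 1 mean staining: {case1_mean:.1f}%")
print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_value:.5f}")
```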
NUCKS Is Highly Overexpressed in Ductal Invasive Breast Cancer
A group of nuclear proteins including linker histones H1 and HMG proteins can be selectively extracted from cells and tissues with diluted perchloric acid [21]. NUCKS can also be quantitatively extracted using 5% perchloric acid. To compare the levels of NUCKS in human cancer and nondiseased tissues, we extracted the protein from five pairs of matched diseased and normal samples, from 5 patients, and analyzed its abundance on SDS gels. The identity of the NUCKS band was verified by mass spectrometric analysis. We found that in each of the five cases NUCKS was the only abundant component of the stained gel band. All identified peptides, with their precursor ion accuracy and Mascot scores, are listed in Supplementary Table 1.
We found that the band of NUCKS, with a relative Mr of 43,000, occurred abundantly in all samples of IDC, whereas in all normal samples it either did not appear or was very weak (Figure 1). The levels of NUCKS in IDC were comparable with those in cultured MCF7 breast cancer cells. Since overexpression of NUCKS in ovarian cancer was recently described [16], we also analyzed an ovarian cancer sample and found a similar extent of NUCKS upregulation in this type of cancer in comparison to IDC. From this experiment we concluded that NUCKS is highly overexpressed in IDC, and we decided to study its occurrence in more detail.
Generation of Polyclonal Monospecific Antibodies against NUCKS.
To investigate the occurrence of NUCKS in cells and tissues, antibodies against a peptide corresponding to residues 23-36 of the protein were elicited in rabbits and affinity purified. The specificity of the antibodies was analyzed on Western blots using purified NUCKS, perchloric acid extracts, whole SDS lysates of MCF7 cells, and IDC and normal breast tissue (Figure 2). The blots revealed that our antibodies selectively decorated the band of NUCKS, indicating their high specificity. Western analyses using other lysates and extracts of human tissues and cells gave identical results (not shown).
Histochemical Analysis of NUCKS in Invasive Ductal Carcinoma
The study comprised 26 cases of invasive ductal carcinoma (IDC). They were first evaluated with regard to histopathology (grading), and then the samples from the particular tumors were studied using the immunohistochemical method. The occurrence of NUCKS was observed in tumor cells as well as in other cells seen in the samples, such as endothelia and lymphocytes. In invasive ductal carcinoma (carcinoma ductale infiltrans), the positive staining of the cytoplasm was rather less intensive than that of the nuclei (Figure 3, with an inset). The cytoplasmic staining in IDC cells is shown in Figure 4 (with an inset). The number of stained cells (nuclei and cytoplasm) ranged from 77.5% of cancer cells in the grade I group (Figure 5) to 89.6% in the grade III group (mean: 84.3% for the whole group of 26 cases; see Table 1). A low number of the lymphocytes forming inflammatory infiltration at the border of tumor foci showed a positive nuclear reaction (Figure 6). A positive nuclear reaction was also observed in endothelial cells. The negative control is shown in Figure 7, while Figure 8 shows fat tissue from an IDC patient at a morphologically normal site away from the lesions, without any immunohistochemical reaction.
Considering a significance level of α = 0.05, we can confirm a statistically significant difference between the GI and GII groups (P = .00235) and between GI and GIII (P = .000004), whereas there was no significant difference between GII and GIII (P = .0931) (Figure 9 and Table 2).
Discussion
Until now, NUCKS has been studied in detail using a variety of biochemical and cell biological methods [1,2,17,18]. However, these analyses did not correlate the occurrence of the protein with a particular type of cancer and/or type of cells. In this study the occurrence of NUCKS in human breast cancer was studied. Using the proteomic approach we demonstrated that NUCKS is overexpressed in tumor cells. Histochemical analyses revealed positive staining for NUCKS with different frequencies. The observed expression of NUCKS in cancer cells was found in all examined cases. There are some unpublished reports confirming the presence of NUCKS in both cellular compartments, that is, nuclei and cytoplasm. (Figure 4 caption: the positive staining in this case was less frequent than in the case shown in Figure 3; some cells also show a faint cytoplasmic reaction parallel to the nuclear one. ABC method. Magnification 200x, inset 40x.) The functional importance of NUCKS remains unknown and disputable. NUCKS has 2 regions termed nuclear localization signals (NLSs), where NLS1 is assumed to be the main nuclear localization signal that binds to importin α3 and α5 in vitro [18]. Importin is in turn a protein which, after binding an NLS, moves other proteins into the nucleus. This movement between the two compartments may play an important role in phenomena that protect a cell against undesirable factors. The varied expression of NUCKS in our study may reflect the well-known heterogeneity of cells in breast cancer, which was confirmed for Ki67 reactivity [26]. The Ki67 activity was higher in ductal cancer and lower in mucinous and lobular carcinoma. In other studies it was found that, using the Van Nuys grading system of worst grade, the grade of tumor was not associated with either ductal carcinoma in situ recurrence or development of invasive disease [27]. There is also evidence that invasive breast cancer is a disease with multiple cytogenetic subclones already present in preinvasive lesions [28]. Grade III tumors were found to manifest high levels of several genes involved in regulation of gene expression, including NUCKS [1]. In our study we observed a correlation between NUCKS expression and histological grading. Statistically significant differences were found between the GI and GII groups and also between GI and GIII, whereas there was no significant difference between groups GII and GIII.
The NUCKS was reported to localize in both proliferating and nonproliferating cells: positive staining for NUCKS was found in our study in endothelial cells and lymphocytes. Since endothelial cells and lymphocytes belong to the relatively fast growing cells or at least they show high metabolism, overexpression of NUCKS is probably related to high levels of transcription.
In our study the positive staining was mainly observed in nuclei of tumor cells and nuclei of normal cells. Mixed nuclear and cytoplasmic reaction, which was seen in cells of IDC, may reflect known distribution of NUCKS throughout the cytoplasm in mitotic cells whereas nuclear localization of NUCKS is connected with telophase of cellular cycle [18].
Previously, expression of the high-mobility group protein gene hmgA2 and its protein was used as a possible marker for detecting malignant growth of thyroid tumors [15]. Whereas HMGA2 is highly expressed in most embryonic tissues, its expression in adult tissues is very low. However, reactivation of expression was described for various malignant tumors and correlated with the aggressiveness of the tumors. Since HMGA2 expression alone made it possible to distinguish between benign and malignant thyroid tumors with a sensitivity of 95.9% and a specificity of 93.9%, we assume that, given the similarities between HMGAs and NUCKS, the latter might likewise be used as a marker in histological evaluation of tissues. NUCKS was previously reported to be expressed in breast cancers, for example, in lobular carcinoma, and in ovarian cancers as part of a module containing at least six members that are believed to be involved in either transcriptional regulation or the ubiquitin proteasome pathway [29].
Recently, elevated expression of the DYRK3, NUCKS, COX-2, translin, and tubulin-α4 genes, which are involved in proliferation, inhibition of apoptosis, and cell movement, was found to be associated with a highly invasive phenotype in mouse lung adenocarcinoma cell strains [30]. A statistically significant increase in the expression of transcripts for tubulin-α4, COX-2, DYRK3, and translin was associated with amplification of mouse chromosome 1. The NUCKS protein was increased in the majority of the cell strains [30].
In conclusion, it has to be emphasized that further studies are needed to determine the role of NUCKS in human cancer and that this work provides only early insights into the abundant occurrence of the protein in breast cancer. Future work should also pay additional attention to elucidating the potential functions of specific posttranslational modifications of NUCKS [19] in the etiology and progression of the disease. | 4,302.2 | 2009-12-24T00:00:00.000 | [
"Biology"
] |
Seismic imaging and petrology explain highly explosive eruptions of Merapi Volcano, Indonesia
Our seismic tomographic images characterize, for the first time, spatial and volumetric details of the subvertical magma plumbing system of Merapi Volcano. We present P- and S-wave arrival time data, which were collected in a dense seismic network, known as DOMERAPI, installed around the volcano for 18 months. The P- and S-wave arrival time data with similar path coverage reveal a high Vp/Vs structure extending from a depth of ≥20 km below mean sea level (MSL) up to the summit of the volcano. Combined with results of petrological studies, our seismic tomography data allow us to propose: (1) the existence of a shallow zone of intense fluid percolation, directly below the summit of the volcano; (2) a main, pre-eruptive magma reservoir at ≥ 10 to 20 km below MSL that is orders of magnitude larger than erupted magma volumes; (3) a deep magma reservoir at MOHO depth which supplies the main reservoir; and (4) an extensive, subvertical fluid-magma-transfer zone from the mantle to the surface. Such high-resolution spatial constraints on the volcano plumbing system as shown are an important advance in our ability to forecast and to mitigate the hazard potential of Merapi’s future eruptions.
Information and our later discussion), it is important to highlight that petrological estimates cannot unequivocally identify main storage zones or indeed constrain the full spatial or volumetric extent of magma storage zones in the crust. They can only identify depth ranges from which magma or magmatic cumulates have erupted.
Previous geophysical studies have either focused on the shallow system below Merapi at depths of <10 km [13][14][15][16][17] , or relied on low-resolution (~15 km) seismic arrival time and ambient noise tomographic imaging to identify potential magma reservoirs [18][19][20] . This imaging revealed a strong and extensive low-velocity anomaly about 25 km NE of Merapi that extends from the surface to the mid-crust, and merges into a deeper anomaly inclined southwards towards the subducting slab. It is possible to interpret the mid-crustal part of this anomaly as a magma reservoir consisting of a solid matrix with pockets of partial melt 19 , but such a complicated reservoir involving considerable lateral transport begs the question of how large volumes of volatile-rich magma can be rapidly delivered to the surface to sustain the type of explosive eruption that occurred in 2010. Clearly, accurately imaging Merapi's magma plumbing system throughout the crust is critical for forecasting and mitigating the hazard potential of future eruptions.
New, high-resolution seismic tomograms
DOMERAPI, a French-Indonesian collaborative project, deployed a seismograph network of 46 broad-band seismometers in the period from October 2013 to mid-April 2015, with an inter-station distance of ~4 km, providing by far the densest coverage of seismographic stations ever used on Merapi (Fig. 1a). The DOMERAPI data were combined with data of the permanent seismographic network of the Indonesian Agency for Meteorology, Climatology and Geophysics (BMKG) to provide better constraints on hypocenter estimates by extending the spatial coverage of the data. This was crucial in achieving high-precision hypocenter determinations 21 , since the DOMERAPI stations were placed around Mt. Merapi, while most seismic events occurred along the Java trench to the south of the study region (Fig. 1b). All seismic events were relocated using a double-difference earthquake location algorithm 22 . The jointly processed DOMERAPI and BMKG data produced a new, high-quality catalog 21 comprising 358 events used to undertake the high-resolution tomographic imaging of Merapi presented here.
We have performed joint inversion of the arrival time data to image the Vp and Vp/Vs structure below Merapi in exceptional detail, from below the volcano's summit to a depth of ~20 km below MSL. We have used the program SIMULPS12 23 , which applies an iterative, damped least squares algorithm to simultaneously calculate the 3-D Vp and Vp/Vs structures and hypocentral adjustments. The Vp/Vs structure was inverted for using S-P times instead of separate estimates of Vs and Vp, which is considered a more robust approach 24 given that the timing errors for S waves are usually larger than those for P waves. For our joint inversion of P-wave velocity and Vp/Vs ratios, we have used comparable ray coverage for P and S waves with 5042 phases each, to minimize the possibility that dissimilarities in resulting images are caused by effects of regularization related to differences in data sampling 23 . Figure 1b shows the grid nodes employed in the inversions, i.e. 10 km by 10 km around Merapi, while the vertical grid spacing is 5 km down to 30 km depth and coarser for deeper parts. For an initial reference velocity model we have used a 1D Vp model for central Java 18 with Vp values ranging from 4.3 km/sec at a depth of 3 km to 8.3 km/sec at a depth of 210 km (see Supplementary Fig. 1). The associated 1D Vs model was derived using a Vp/Vs ratio of 1.73 obtained from the Wadati diagram constructed using the combined DOMERAPI and BMKG data sets 21 .
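As a side note on the Wadati-diagram estimate of Vp/Vs mentioned above, the sketch below illustrates the underlying relation with synthetic arrival times (not DOMERAPI/BMKG data): for a common origin time, the S-minus-P times grow linearly with the P travel time, and the slope of that line equals Vp/Vs − 1.

```python
# Minimal Wadati-diagram sketch with synthetic data.
# For origin time t0: ts - tp = (Vp/Vs - 1) * (tp - t0), so a least-squares
# line fitted to (tp, ts - tp) gives Vp/Vs - 1 as its slope.
import numpy as np

rng = np.random.default_rng(0)
true_vp_vs = 1.73
t0 = 2.0                                      # hypothetical origin time (s)
tp = t0 + rng.uniform(3.0, 40.0, size=30)     # P arrival times at 30 stations (s)
ts_minus_tp = (true_vp_vs - 1.0) * (tp - t0) + rng.normal(0.0, 0.05, size=30)

slope, intercept = np.polyfit(tp, ts_minus_tp, 1)   # slope, intercept of the fit
print(f"estimated Vp/Vs = {1.0 + slope:.3f}")       # close to 1.73
```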
Merapi's magma plumbing system
Our tomographic inversions reveal two pronounced anomalies directly beneath Merapi. One anomaly is located at <4 km depth where we observe low Vp, high Vp/Vs and very low Vs (Fig. 3a-c), which we term the Shallow Anomaly. A second anomaly is located at ~10-20 km depth, where we observe high Vp, very high Vp/Vs and very low Vs (Figs 2a-c and 3a-c), which we term the Intermediate Anomaly. Interestingly, the Vp/Vs tomogram suggests that another anomaly may exist near the MOHO at ≥25 km depth with low Vp, high Vp/Vs and low Vs ( Fig. 3a-c), which we term the Deep Anomaly. However, we note that the resolution of this anomaly is not well constrained by our current tomographic imaging ( Fig. 3d-e) due to a lack of ray sampling (see Supplementary Fig. 5).
While relocated earthquake hypocenters at 15-25 km depth to the south of Merapi are interpreted to be related to the Opak Fault, the hypocenters at 0-10 km depth are likely to be related to volcanic activity. We note that these shallow earthquakes cluster either between the Shallow and Intermediate Anomalies, or in the low Vp/Vs anomaly to the north of our proposed Merapi magma plumbing system (Fig. 4). (From the caption of Fig. 4: magma deeper in the system may be significantly more volatile-rich and hazardous in case of ascent and eruption; an average crustal density of 2242 kg/m³ (cf. 15 ) is assumed for the upper 10 km of the section, and an average crustal density of 2900 kg/m³ is estimated for the crust below (cf. 33 ).) We speculate that these earthquakes, as well as the low Vp/Vs itself, may be related to the presence of aqueous fluids exsolved from the magmatic system that have migrated into the country rock. Combining this new high-resolution Vp/Vs tomography with results from petrological studies leads us to propose a magma plumbing system with two main magma reservoirs that are connected by subvertical, crustal-scale fluid-rich zones (Fig. 4b). The shallow (<4 km), high Vp/Vs, low Vp anomaly within and below Merapi's edifice could outline the presence of magma and/or fluids in intensely fractured/porous media 14,16,17 . Our seismic data cannot determine the type of liquid present, but we concur with published geophysical and petrological studies that have provided overwhelming evidence for the presence of fluids and the absence of stored magma 5,6,13,14 . Short-term ponding of magmas (i.e., for hours, days or weeks) at shallow (<3 km) depth prior to eruptions has, however, been proposed by 9,25,26 . The intermediate, high Vp/Vs anomaly concurs with several petrological studies that locate Merapi's pre-eruptive magma reservoir in the upper to middle crust, while the exact location of the reservoir remained highly debated 5-7,12,27,28 (details are reported in the Supplementary Information) and the size of the reservoir unconstrained. Amphibole and clinopyroxene mineral barometry has been used to estimate the depth of Merapi's main pre-eruptive magma reservoir 5,8,27,29 , but the reliability of these estimates has recently been called into question 6,12,30,31 (cf. Supplementary Fig. 6). Phase-equilibrium experiments 6 provide more robust constraints, and suggest that most magma erupted in 2010 and in other eruptions of the last ~100 years was sourced from a depth of 4-15 km (Fig. 4b). Melt inclusion hygrobarometric estimates similarly indicate intermittent magma storage depths of 6-14 km. GPS ground deformation data were used to suggest that magma erupted in 1996-1997 was sourced from a similar, but possibly shallower, depth at 8.5 ± 0.5 km below and ~2 km east of Merapi's summit 13 .
The main magma source depth (4-15 km) inferred from petrological studies thus coincides with the uppermost part of the Intermediate Anomaly at 10-20 km depth inferred from our tomography (Fig. 4b), which we interpret as a melt-rich zone that serves as Merapi's main, pre-eruptive magma reservoir. While the size of this anomaly is close to the level of resolution of our tomographic imaging, its volume is almost certainly orders of magnitude larger than the total volume of erupted products in 2010 (and prior eruptions) 4 and than the magma source size inferred for Merapi's 1996-1997 eruption using GPS ground deformation data, which are on the order of 1-10 × 10 6 m 3 (cf. 4,13 ; close to the yellow ellipse in Fig. 4b). This highlights that only a small part of the magma system has been tapped during historic eruptive events, including the 2010 eruption, approximately at the top of the intermediate reservoir.
The Deep Anomaly is less well-constrained in extent, but nevertheless also provides the first evidence for the location of this reservoir. The high Vp/Vs signal suggests that melt and/or fluids are present in this zone, while the weakness of the Vp anomaly may reflect poor ray path coverage. Petrological and geochemical studies had suggested that such a deep magma reservoir exists 5,6,11,12 , but previous estimates on its depth remained unconstrained 6 or were based on untenable amphibole barometric constraints 5,11,12 (details are shown in the Supplementary Information).
The subvertical, high Vp/Vs signal from the surface to around MOHO depth may highlight that magma storage zones are present throughout the crust as has been invoked by some studies (e.g. [8][9][10][11]. Such a distribution of magma parcels throughout the crust is possible, but most of them would have to be inactive reservoirs, as most magma erupted in 2010, but also in other eruptions of previous decades, has a crystal cargo that is texturally and compositionally strongly bimodal, indicating evacuation from one or two main zones (e.g. 5,6 ). We therefore suggest instead that the crustal-scale, subvertical anomaly outlines an extensive fluid-rich zone and thus fluid fluxing in the system 6,11,29 . This interpretation is in keeping with petrological studies that have highlighted that Merapi's system is H 2 O-and CO 2 -rich, and that deep to shallow degassing during magma ascent plays a key role in the system. If it is correct that the subvertical, high Vp/Vs anomaly outlines fluid-rich zones, it would provide unequivocal evidence that melts sourcing the system reached volatile saturation around MOHO depth, where the anomaly starts (Fig. 4b). To our knowledge, this is the first time that a fluid-fluxed zone has been seismically imaged from the mantle to the surface in great detail, i.e. showing an offset from below to above the Intermediate Anomaly and side branches above the northern edges of the intermediate and the deep reservoir, respectively. Compared with previous models based on lower-resolution seismic tomographic imaging (e.g. 18 ), our model highlights that magma has a much more direct path from reservoirs at depth to the surface, which may facilitate the type of rapid ascent that led to the explosiveness of the 2010 eruption.
Spatial constraints to reinforce forecasting and hazard assessment of future eruptions at Merapi
Unequivocal spatial and volumetric constraints on magma reservoirs throughout the crust and the connections between them is crucial for understanding the explosivity of the major eruption of Merapi on 26 October 2010 and its future hazard potential. Petrological studies 5-7 of the 2010 eruption products all agree that its unusual explosivity was due to a much larger and much more rapid supply of magma than in previous eruptions. Our results suggest that the magma involved in the 26 October 2010 eruption evacuated the system at or near the top of the Intermediate Anomaly, while we follow others (e.g. 5,6,10 ) in the suggestion that other eruptions at least within the last ~100 years were also sourced from this depth (as their eruptive products have equivalent mineral assemblages and closely comparable mineral and glass inclusion compositions), and thus that the magma erupted in 2010 had similar initial volatile contents as magmas of previous eruptions, but was less efficiently degassed in the reservoir and en route to the surface 5,7,26,29 . Our imaging, however, highlights that a large reservoir extends for a further ~10 km below historic magma source levels (Fig. 4b).
A key implication of this is that a large volume of magma with a higher volatile content than that which explosively erupted in the 2010 VEI-4 event is present in Merapi's plumbing system.
We presume that the size and the location of the main reservoir (i.e. the Intermediate Anomaly) are a long-term feature, which may be as old as or older than volcanic activity at Merapi. We highlight that we have no direct evidence or constraints for this hypothesis, but posit that pre-historic eruptions, which were commonly explosive 27 , could have been fueled by magmas from deeper levels, which should be studied in detail. Magma derived from deeper levels of the Intermediate Anomaly in the future could cause considerably more explosive and more destructive eruptions than that from the shallowest levels if it is rapidly transported to the surface. Merapi's basaltic andesitic magma from the top of the intermediate reservoir is moderately H2O- and CO2-rich (~3-4 wt% melt H2O, 1000 ppm melt CO2) 6,28,29 . The volatile composition of magma stored at deeper levels of the intermediate reservoir remains unconstrained, but it may be CO2-rich (e.g. with >2000 ppm melt CO2) if the magma follows an open-system degassing path (e.g. as proposed by Nadeau et al. 11 and Preece et al. 29 ) and/or H2O-rich (with up to ~6-8 wt% melt H2O) if the magmas follow a closed-system or disequilibrium degassing path (cf. 6,32 ), in which case it could fuel extremely hazardous eruptions.
Our work demonstrates that high-resolution geophysical surveys are extremely powerful tools for spatially characterizing active volcanic systems such as Merapi's, and that they are crucial in assessing hazard potential and targets for specific monitoring. Our study was carried out within the multi-disciplinary DOMERAPI project, which was designed to intimately couple geophysical and petrological insights on Merapi's magma plumbing system; our interpretation of data shows how important this approach is for robustly characterizing such systems. | 3,498.6 | 2018-09-12T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Efficient Coverage Hole Detection Algorithm Based on the Simplified Rips Complex in Wireless Sensor Networks
The appearance of coverage holes in the network leads to transmission links being disconnected, thereby resulting in decreasing the accuracy of data. Timely detection of the coverage holes can effectively improve the quality of network service. Compared with other coverage hole detection algorithms, the algorithms based on the Rips complex have advantages of high detection accuracy without node location information, but with high complexity. This paper proposes an efficient coverage hole detection algorithm based on the simplified Rips complex to solve the problem of high complexity. First, Turan’s theorem is combined with the concept of the degree and clustering coefficient in a complex network to classify the nodes; furthermore, redundant node determination rules are designed to sleep redundant nodes. Second, according to the concept of the complete graph, redundant edge deletion rules are designed to delete redundant edges. On the basis of the above two steps, the Rips complex is simplified efficiently. Finally, from the perspective of the loop, boundary loop filtering and reduction rules are designed to achieve coverage hole detection in wireless sensor networks. Compared with the HBA and tree-based coverage hole detection algorithm, simulation results show that the proposed hole detection algorithm has lower complexity and higher accuracy and the detection accuracy of the hole area is up to 99.03%.
Introduction
The Internet of Things is now deeply embedded in social life in the form of smart cities and the Internet of Vehicles. As an underlying technology of the Internet of Things, wireless sensor networks (WSNs) consist of numerous sensor nodes deployed in a monitoring area to comprehensively sense, acquire, and transmit information about objects, making them suitable for intelligent transportation [1], event detection [2], environmental monitoring [3], etc. These practical applications place high requirements on the service quality of WSNs. The coverage rate is an important metric for evaluating WSNs' service quality [4], and many scholars have contributed to its improvement and optimization. In [5], an insightful and comprehensive summary and classification of the data fusion-based coverage optimization problem and its techniques are provided, aiming at overcoming the shortcomings of current solutions. The scaling laws between coverage, network density, and SNR are derived in [6], and data fusion is shown to significantly improve sensing coverage by exploiting the collaboration among sensors. However, coverage is inevitably lost for reasons such as random deployment, node relocation, and energy exhaustion of the nodes. Thus, data loss or undelivered information will occur in the uncovered area of the original network, which degrades the service quality of WSNs. The uncovered areas are called coverage holes [7]. The appearance of holes not only breaks communication links and reduces data accuracy [8] but also aggravates the transmission burden of the boundary nodes near the holes, resulting in expansion of the hole area [9]. Therefore, discovering and locating coverage holes in the network is crucial to ensuring the quality of network services.
The existing hole detection algorithms can be roughly divided into the following three categories: geometric methods, probabilistic methods, and topological methods. The geometric methods use the location information of nodes and corresponding geometric tools, such as the Voronoi diagram and the Delaunay triangulation, to detect holes [10-16]. Although these methods can identify coverage holes accurately, obtaining accurate location information for the sensor nodes is very expensive and difficult; therefore, they are not practical. Location information is not required by the probabilistic methods [17-19]; however, a uniform distribution of nodes or a high node density is an essential condition, and it is still difficult to detect coverage holes accurately. The topological methods [20-22] often use the connectivity information between nodes to detect holes without node location information and guarantee detection accuracy; however, the complexity of these algorithms is high and their efficiency is low. Thus, this paper proposes an efficient coverage hole detection algorithm based on the simplified Rips complex, which belongs to the topological methods. The proposed algorithm reduces the complexity of the algorithm while guaranteeing detection accuracy. The main contributions of this paper are as follows:
(1) Redundant node sleeping. Combining Turan's theorem with the concepts of degree and clustering coefficient from complex networks, we divide the internal nodes of the network into two categories, namely, deterministic nodes and nondeterministic nodes; a redundant node determination rule is designed to put redundant nodes to sleep in a distributed manner.
(2) Redundant edge deletion. Using the concept of the complete graph, we propose a method for identifying redundant edges, which simplifies the edge set of the network. By simplifying both nodes and edges, the network model, i.e., the Rips complex, can be simplified efficiently and quickly.
(3) Hole detection and boundary loop identification. Based on the simplified network structure, a method for detecting holes from the perspective of loops is proposed. The method of boundary edge identification, the definition of false boundary edges, and the boundary loop identification and reduction rules are given in turn.
The remainder of this paper is organized as follows: Section 2 presents the related work. Section 3 introduces the system model and related concept definitions. The simplification of the Rips complex is carried out in Section 4, including the identification of redundant nodes and redundant edges. Section 5 identifies the holes. Section 6 evaluates the accuracy of the algorithm through simulation experiments. Finally, Section 7 concludes the paper.
Related Works
Research on coverage holes is divided into two parts: coverage hole detection and coverage hole repair. This paper focuses on coverage hole detection. The existing coverage hole detection algorithms can be roughly divided into three categories: geometric methods, probabilistic methods, and topological methods.
2.1. Geometric Methods. The geometric methods use the location information of the nodes or the relative distance between nodes and combine the corresponding geometric tools (such as Voronoi diagram and Delaunay triangulations) to identify the holes. In [12], the concept of the tree is introduced to locate and describe coverage holes; thus, the location and shape of the corresponding hole can be determined, as well as the size of the hole. However, the relative location information of the neighbor nodes must be known. In [13], an algorithm based on the Delaunay triangulations is proposed, which is combined with virtual edge-based methods to help detect coverage holes in wireless sensor networks. Compared with the existing tree-based method, this algorithm helps detect the exact size of the coverage hole with the coordinate information of the nodes, which generates extra cost. In [23], each node is recognized whether it is on a hole boundary on the basis of a local Voronoi diagram; however, mobile sensors must be employed to construct a hybrid sensor network with static sensors. In [24], the Voronoi diagram-based screening strategy is proposed to screen out the boundary nodes and the exact location of the coverage holes is obtained according to the virtual edge-based hole location strategy. High detection accuracy is guaranteed by considering irregularities of the shape of the coverage holes, and more accurate information is provided for repairing the coverage holes, but the specific location information of each node must be known in this method. In [25], a method based on Delaunay is proposed to detect coverage holes without the nodes' coordinate information; however, a global view of holes cannot be given.
Probabilistic Methods.
With a uniform distribution of nodes and high node density, coverage holes can be detected in the network from statistical attributes. An algorithm for determining the boundary node structure of a region is presented in [26], but a high node density is required. In [27], assuming that the connectivity between nodes is determined by the unit disk graph model, a linear-time algorithm is proposed to identify the boundaries of the holes; however, the algorithm cannot distinguish two holes that are close to each other. In [28], the coverage of mobile and heterogeneous wireless sensor networks is studied, and the coverage problem under a Poisson deployment scheme with a 2-D random walk mobility model is discussed. The coverage rate is improved by introducing mobility; however, the coverage range is ignored.
Topological Methods.
Topological methods use topological attributes (such as connectivity information) to identify the boundaries of a hole without the exact locations of nodes. In [29], combinatorial Laplacians are used as tools to compute homology groups in a distributed manner. Although distributed hole detection can be performed, the holes cannot be located accurately. In [20], a distributed algorithm is proposed that uses the simplicial complex and combinatorial Laplacians to obtain the topological properties of the network, verifying the existence of coverage holes in the sensor network without any metric information. However, the hole boundary cannot be found accurately, because the Rips complex cannot always detect all coverage holes. In [21], the holes are first defined as triangular and nontriangular holes to study the accuracy of using the Rips complex when detecting holes, and a connectivity-based distributed hole detection algorithm is proposed for nontriangular holes, which is more suitable for nondense sensor networks. On the basis of [21], the percentage of triangular hole area under different ratios of communication radius to sensing radius is investigated in [22], and the conditions for accurately detecting holes with the Rips complex are given. Simultaneously, a homology-based distributed coverage hole detection method is proposed for nontriangular holes; it cannot detect all the holes, and its complexity is high. The more planar the Rips complex is, the more favorable it is for the detection of holes; therefore, rapid and efficient simplification of the Rips complex can effectively reduce the complexity of the algorithm. Most of the above coverage hole detection methods utilize the binary sensor model; meanwhile, some other models have also been proposed to reflect the sensing capability and detect coverage holes. In [30], a new confident information coverage (CIC) model is proposed for field reconstruction, whose objective is to obtain reconstruction maps of some physical phenomenon's attribute with a given reconstruction quality for the whole sensor field, including points that were sampled and points that were not sampled. In [31], the LCHD and LCHDRL schemes are proposed to address and study the localized confident information coverage hole detection issue (LCICHD), with the goal of finding the locations and number of emerged coverage holes in the IoT on the basis of the CIC model. Two effective heuristic CIC hole detection algorithms, CHD, which does not consider the nodes' residual energy, and CHDRE, which takes the nodes' residual energy into account, are proposed in [32] to address and study the confident information coverage hole detection problem (CICHD) on the basis of the CIC model. An energy-efficient CIC hole detection scheme, EECICHD, which fully exploits the inner spatial correlation of the radionuclides and the sensors' cooperative sensing ability to improve CIC hole detection efficiency, is proposed in [33]. However, the abovementioned algorithms always work in a centralized manner, which is not suitable for large-scale monitoring fields. Meanwhile, these algorithms partition a continuous sensing area into a series of reconstruction grids to check for the existence of holes, and the boundaries of the detected holes are determined by image processing.
Based on the above analysis, this paper proposes an efficient coverage hole detection algorithm based on a simplified Rips complex. Node and edge deletion rules are first designed to simplify the Rips complex efficiently and make it closer to planar; the holes in the network are then identified from a loop perspective on the basis of the simplified Rips complex structure. The proposed algorithm detects holes in a continuous sensing area and obtains the hole boundaries accurately without image processing; a binary sensor model is adopted for each node in the network.
System Model and Related Concept Definitions
3.1. System Model. N sensor nodes are deployed in a 2-D plane. The nodes located inside the target area are internal nodes, which are randomly distributed; the remaining nodes, which are evenly distributed on the outer boundary of the target area to ensure full coverage, are border nodes. No node knows its specific location, and whether a node is an internal node is determined by an initial setting. The following conditions are assumed: (1) the nodes are isomorphic, the communication range (R_c) and the sensing range (R_s) are the same for all nodes, and R_c = 2R_s; (2) a binary sensor model is adopted for each node; (3) each node has a unique ID; (4) the network is connected, as shown in Figure 1.
Definitions Related to Homology Theory
Definition 1 (simplex). Given a vertex set V and a positive integer k, a k-simplex S is a subset of k + 1 points of V, and k is called the dimension of the simplex [22]. As shown in Figure 2, a 0-simplex is a vertex, a 1-simplex is an edge, a 2-simplex is a triangle, and a 3-simplex is a tetrahedron including its interior. All (k − 1)-simplices formed from the k + 1 vertices constitute the faces of the k-simplex.
Definition 2 (simplicial complex). A simplicial complex is a collection of simplices satisfying the following two conditions: (1) every face of a simplex in the simplicial complex also belongs to the simplicial complex, and (2) the intersection of any two simplices s1 and s2 is a face of both s1 and s2. The dimension of a simplicial complex is defined as the largest dimension of any simplex it contains.
Definition 3 (Rips complex). Given a finite set of points V in R^n and a fixed radius ε, the Rips complex R_ε(V) of V is the abstract simplicial complex whose k-simplices are the sets of k + 1 points in V that are pairwise within distance ε of each other. Suppose that P = {p_i} is the set of sensor node locations and S = {s_i} is the set of sensor node sensing ranges, where p_i is the location of the i-th node and s_i = {x ∈ R², ||x − p_i|| ≤ R_s}. As shown in Figure 3, the Rips complex can be formed on the basis of the above definition when the vertex set V contains six vertices.
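As an illustrative sketch of Definition 3, the snippet below builds the 1- and 2-simplices of a Rips complex R_ε(V) from node positions. The positions and ε are illustrative inputs only; in the paper's system model the same simplices are obtained from hello messages rather than from coordinates.

```python
# Minimal sketch: edges (1-simplices) and triangles (2-simplices) of R_eps(V).
from itertools import combinations
import math

def rips_complex(positions, eps):
    """positions: list of (x, y) tuples; eps: fixed radius (e.g. R_c)."""
    n = len(positions)
    dist = lambda a, b: math.dist(positions[a], positions[b])
    # 1-simplices: pairs of nodes within distance eps of each other
    edges = {(i, j) for i, j in combinations(range(n), 2) if dist(i, j) <= eps}
    # 2-simplices: triples whose three pairwise distances are all <= eps
    triangles = [t for t in combinations(range(n), 3)
                 if all(p in edges for p in combinations(t, 2))]
    return sorted(edges), triangles
```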
Definition 4 (triangular holes and nontriangular holes [21]). A hole that lies inside a triangle whose interior is not completely covered by the sensor nodes is called a triangular hole; all other holes are called nontriangular holes.
As shown in Figure 3, there are two coverage holes in the network formed by nodes 2, 3, 5, and 6 and nodes 3, 4, and 5, respectively. However, only the hole formed by nodes 2, 3, 5, and 6 can be detected by the Rips complex.
Definitions Related to Graph Theory
An undirected graph composed of a vertex set V = {v_1, v_2, ⋯, v_n} and an undirected edge set E = {e_1, e_2, ⋯, e_m} is denoted as G = (V, E). Some definitions are given on the basis of the undirected graph as follows.
Definition 5 (adjacency matrix). Two nodes are adjacent if an edge between them exists; otherwise, they are not adjacent. For convenience of calculation, an adjacency matrix A can be used to describe the relationship between nodes for the undirected graph G = (V, E). Supposing that there are n vertices in graph G, then A_G = (am_ij)_{n×n}.
Definition 6 (subgraph). Graph G′ can be expressed as (V′, E′). G′ is a subgraph of G, and G is called the parent graph of G′, if V′ ⊆ V and E′ ⊆ E; this is denoted G′ ⊆ G.
Definitions Related to Complex Network Theory.
The degree and the clustering coefficient, the two parameters from complex network theory used in this study, are defined as follows.
Definition 7 (degree). The degree k_i of node v_i is the number of its 1-hop neighbor nodes and is determined by the adjacency matrix as k_i = Σ_{j=1}^{N} am_ij, where am_ij equals 1 when node i and node j are directly connected and 0 otherwise, and N is the total number of nodes in the network.
Definition 8 (clustering coefficient). The clustering coefficient C_i of node v_i is the ratio between the number of edges E_i among its directly connected k_i neighbor nodes and the total number of possible edges between those neighbors, i.e., C_i = E_i / C_{k_i}^2 = 2E_i / (k_i(k_i − 1)); it characterizes the tightness and aggregation of nodes in the network. The node degree reflects the ability of a node to establish direct connections with surrounding nodes, that is, the number of neighbor nodes, whereas the clustering coefficient reflects the edges connected among the neighbor nodes, that is, the tightness between them.
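The following small helpers sketch Definitions 7 and 8, computing the degree and clustering coefficient directly from an adjacency matrix A (a numpy 0/1 array); they are used again in later sketches.

```python
import numpy as np

def degree(A):
    """k_i = sum_j am_ij, the number of 1-hop neighbors of each node."""
    return A.sum(axis=1)

def clustering_coefficient(A, i):
    """C_i = E_i / C(k_i, 2): edges among node i's neighbors divided by the
    number of possible edges between those k_i neighbors."""
    neighbors = np.flatnonzero(A[i])
    k = len(neighbors)
    if k < 2:
        return 0.0
    sub = A[np.ix_(neighbors, neighbors)]
    E_i = sub.sum() / 2          # each edge counted twice in the submatrix
    return E_i / (k * (k - 1) / 2)
```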
Rips Complex Simplification
An efficient distributed hole detection algorithm for nontriangular holes is designed in this section. As for triangular holes, the area ratio of triangular holes in the network is less than 0.06% when the ratio of the node communication radius to the coverage radius is between √3 and 2 (inclusive); that is, the triangular holes in the network can be ignored [14]. Hole detection becomes more efficient and easier when the Rips complex tends to planarity; therefore, simplifying the Rips complex efficiently is critical for reducing the complexity of the algorithm. In this paper, redundant node determination rules and redundant edge deletion rules are constructed to make the Rips complex more planar, after which the holes can be detected effectively. The proposed algorithm includes the following three parts: (1) redundant node sleeping, (2) redundant edge deletion, and (3) hole detection and boundary identification. The first two steps constitute the simplification of the Rips complex, and the process is shown in Figure 4.
Here, v_k denotes a neighbor node of the edge [v_i, v_j], E(v) is the set of edges that contain node v, and T(v) is the set of 2-simplices that contain node v.
Definition 9 (loop). A loop C is a subgraph of graph G if each node on the loop C has only two neighbors. The length of loop C is the number of its edges, denoted by |E(C)|. All of the loops in graph G are denoted by C(G), and the set of triangle loops in graph G is denoted by C_T(G). The length of a triangle loop is three.
Definition 10 (neighbor graph). The neighbor set of a node v in graph G is denoted by N_G(v), and the neighbor graph Γ_G(v) of node v in graph G is denoted by G[N_G(v)]. The node set of the neighbor graph is composed of the neighbor nodes of node v.
Definition 11 (Turan's theorem). If graph G is a simple graph with n nodes that contains no K_{r+1} complete graph as a subgraph, where K_{r+1} denotes the (r + 1)-complete graph, then graph G has at most E(r) edges; when r = 2, E(2) = ⌊n²/4⌋.
Inference 12.
There must be a K_{r+1} complete graph in graph G as a subgraph if the simple graph G with n nodes contains at least E(r) + 1 edges. That is, there must be a K_3 complete graph in graph G as a subgraph when r = 2 and graph G contains at least E(2) + 1 edges.
Inference 13. If an internal node in the network meets the conditions k_i ≥ 3 and C_i ≥ (⌊n²/4⌋ + 1)/C_{k_i}^2 with n = k_i, then there must be a triangle loop in the neighbor graph of this node; otherwise, it is uncertain whether a triangle loop exists in the neighbor graph of this node. Here, k_i is the degree of node i and C_i is the clustering coefficient of node i.
Proof. According to Definition 8, E_i ≥ ⌊n²/4⌋ + 1 if the degree of a node v is greater than or equal to 3 (that is, there are at least three nodes in the neighbor graph of node v) and the clustering coefficient of node v satisfies C_i ≥ (⌊n²/4⌋ + 1)/C_{k_i}^2. Therefore, there must be a K_3 in the neighbor graph of node v as a subgraph on the basis of Inference 12.
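The following sketch applies Inference 13, reusing the degree and clustering-coefficient helpers given after Definition 8. It assumes (from the proof) that the condition is k_i ≥ 3 and C_i ≥ (⌊k_i²/4⌋ + 1)/C(k_i, 2); nodes satisfying it are classified as determined, the rest as nondetermined.

```python
def classify_internal_nodes(A, internal_nodes):
    """Split internal nodes into (determined, nondetermined) by Inference 13."""
    determined, nondetermined = [], []
    deg = degree(A)
    for i in internal_nodes:
        k = int(deg[i])
        if k >= 3:
            threshold = ((k * k) // 4 + 1) / (k * (k - 1) / 2)
            if clustering_coefficient(A, i) >= threshold:
                determined.append(i)   # neighbor graph must contain a triangle
                continue
        nondetermined.append(i)
    return determined, nondetermined
```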
Definition 14 (determined and nondetermined nodes). The internal node is a determined node if there is a triangle loop in its neighbor graph; otherwise, the internal node is a nondetermined node.
Definition 15 (redundant node determination rule). Each internal node v in the network is determined as a redundant node if its neighbor graph satisfies the following two conditions: (1) the neighbor graph of node v is connected and (2) all loops can be triangulated (that is, the length of each loop is three).
Proof. To prove the correctness of the redundant node determination rule, it is necessary to verify that no new holes are created and no holes are merged when a redundant node is deleted.
(1) If the deletion of node v led to the appearance of a new hole, the loop formed by the boundary edges of the new hole would lie in the neighbor graph Γ_G(v) of node v, which would mean that there is a loop in Γ_G(v) that cannot be triangulated. This contradicts the rule; therefore, the deletion of a redundant node does not create new holes. (2) If the deletion of the redundant node v led to the merging of two holes, the neighbor graph Γ_G(v) of node v would not be connected, which also contradicts the rule. Therefore, the deletion of a redundant node does not cause two holes to merge.
4.1. Redundant Node Sleeping. Since the border nodes are deployed manually at the border of the target area, only the internal nodes in the network execute the redundant node sleeping process, which proceeds as follows:
Step 16. Each node broadcasts two hello messages to construct its 1-simplices, 2-simplices, and 3-simplices and thereby form the Rips complex. The first hello message carries the node's ID, so every node obtains the IDs of its 1-hop neighbors. Each node then broadcasts a second hello message containing the IDs of its 1-hop neighbors. All nodes obtain E(v) (their 1-simplices) when they receive their neighbor node lists; T(v) (the 2-simplices) is obtained when a node receives the neighbor lists of its neighbors, and the 3-simplices are formed when a common neighbor node of each 2-simplex is obtained.
Step 17. Compute k_i and C_i of each node. On the basis of the relationship between k_i and C_i, the internal nodes in the network are divided into two categories, yielding the determined node set V_1 and the nondetermined node set V_2.
Step 18. Check whether the nodes in the determined node set V_1 satisfy the redundant node determination rule. If a node v_i satisfies the rule, each determined node in Γ_G(v_i) is examined to see whether it also satisfies the redundant node determination rule and has a clustering coefficient larger than that of node v_i. If such nodes exist, the determined node v_j with the largest clustering coefficient in Γ_G(v_i) that satisfies the redundant node determination rule is put to sleep, and all neighbor nodes of node v_j are moved out of sets V_1 and V_2, respectively. Otherwise, node v_i is put to sleep, and all neighbor nodes of node v_i are moved out of sets V_1 and V_2, respectively.
Step 19. Check whether the nodes in the nondetermined node set V_2 satisfy the redundant node determination rule. If a node v_i satisfies the rule, each nondetermined node in Γ_G(v_i) is examined to see whether it also satisfies the redundant node determination rule and has a clustering coefficient larger than that of node v_i. If such nodes exist, the nondetermined node v_j with the largest clustering coefficient in Γ_G(v_i) that satisfies the redundant node determination rule is put to sleep, and all neighbor nodes of node v_j are moved out of V_2. Otherwise, node v_i is put to sleep, and all neighbor nodes of node v_i are moved out of V_2.
Step 20. Repeat Step 17 to Step 19 until no nodes in the network need to sleep.
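A condensed sketch of Steps 17-20 is given below, reusing classify_internal_nodes from the previous sketch. The rule check of Definition 15 (neighbor graph connected and all its loops triangulated) is passed in as a predicate is_redundant, and for brevity the adjacency matrix is treated as fixed within a round; only the control flow of the sleeping rounds is shown.

```python
import numpy as np

def sleep_redundant_nodes(A, internal_nodes, is_redundant, clustering):
    awake = set(internal_nodes)
    while True:                                   # Step 20: repeat until no node sleeps
        slept_this_round = False
        determined, nondetermined = classify_internal_nodes(A, sorted(awake))
        for group in (determined, nondetermined): # Steps 18 and 19
            skipped = set()                       # neighbors of already-slept nodes
            for v in group:
                if v not in awake or v in skipped or not is_redundant(v):
                    continue
                nbrs = [u for u in np.flatnonzero(A[v]) if u in awake]
                # prefer a redundant neighbor with a larger clustering coefficient
                better = [u for u in nbrs
                          if is_redundant(u) and clustering(u) > clustering(v)]
                victim = max(better, key=clustering) if better else v
                awake.discard(victim)
                skipped.update(np.flatnonzero(A[victim]))  # move neighbors out of V1/V2
                slept_this_round = True
        if not slept_this_round:
            return set(internal_nodes) - awake    # the nodes that were put to sleep
```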
A deleted node that satisfies the redundant node determination rule does not affect the network structure. At the same time, the clustering coefficient of a node characterizes the tightness and aggregation of nodes in the network: the larger the clustering coefficient, the closer the connection between the node and its neighbors. Thus, nodes with large clustering coefficients are preferentially deleted without affecting the network structure. Proof. If all 2-simplices generated by the edge v_a v_c are in K_4 complete graphs, then the other vertices of all 2-simplices including the edge v_a v_c can generate at least one other 2-simplex; that is, no hole is created when the edge v_a v_c is deleted, as shown in Figure 6. The redundant edge deletion process is illustrated in Figure 7. First, all K_4 complete graphs are found in the network; then, the diagonal edges of each complete graph are identified and put into a queue in turn. Finally, whether each diagonal edge can be deleted is determined according to Inference 22. The simplification of the Rips complex continues if redundant nodes still exist in the network after the redundant edges are deleted. Since a K_3 is a 2-simplex, which does not need to be simplified, only the K_4 complete graphs are considered as the maximal simplicial complexes.
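The sketch below follows the edge deletion procedure just described: it finds K_4 complete graphs, takes their edges as candidate diagonals, and removes a candidate when every 2-simplex containing that edge lies inside some K_4, matching the proof sketch above. The statement of Inference 22 is abridged in the text, and which K_4 edge counts as the "diagonal" depends on the embedding, so here every K_4 edge is treated as a candidate; the brute-force K_4 search is for illustration only.

```python
from itertools import combinations

def delete_redundant_edges(edges, triangles):
    edges = set(edges)
    nodes = sorted({v for e in edges for v in e})
    # all K_4 complete graphs (quadruples whose six pairwise edges all exist)
    k4s = [set(q) for q in combinations(nodes, 4)
           if all(tuple(sorted(p)) in edges for p in combinations(q, 2))]
    tri_of_edge = {}                               # edge -> triangles containing it
    for t in triangles:
        for p in combinations(t, 2):
            tri_of_edge.setdefault(tuple(sorted(p)), []).append(set(t))
    for quad in k4s:
        for e in combinations(sorted(quad), 2):    # candidate diagonal edges
            e = tuple(sorted(e))
            if e not in edges:
                continue
            tris = tri_of_edge.get(e, [])
            # delete e only if every triangle through e lies in some K_4
            if tris and all(any(t <= q for q in k4s) for t in tris):
                edges.discard(e)
    return edges
```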
Hole Detection and Boundary Identification
A hole can be identified as a boundary loop composed of several boundary edges in the simplified network. Therefore, hole identification can be divided into the following two steps: boundary loop identification and boundary loop reduction. Identifying the boundary edges is helpful for finding the boundary loops. The Rips complex is close to planar after the network is simplified, and a boundary edge has at most one neighbor node; thus, the boundary edges of holes can be identified by the number of neighbor nodes of an edge in the network. For example, the number of neighbor nodes of the edge v_i v_j is the number of common neighbor nodes of nodes v_i and v_j. Internal nodes and border nodes are treated differently when identifying boundary edges, so the nodes are weighted accordingly: the nodes that compose a hole are called boundary nodes, and an edge formed by two nodes with a weight of 2 is determined to be a boundary edge. However, some of these edges do not actually bound a hole; such edges are called false boundary edges and need to be deleted. After the false boundary edges are deleted, the loops formed by the remaining boundary edges are the boundary loops. However, not all boundary loops correspond to coverage holes, so the boundary loops must be filtered. Proof. (1) The loop is a triangle when its length is 3; triangular holes are ignored in this study, and only nontriangular holes are identified. (2) When the length of a loop is 4 and two nonadjacent nodes on the loop are neighbor nodes, the loop lies in a triangulated cycle; on the basis of the first rule, a boundary satisfying this condition needs to be deleted. (3) As shown in Figure 8(a), the length of the boundary loop is 5, and v_c and v_e, as well as v_b and v_d, are neighbor nodes that are not adjacent on the loop. If all the other nodes of the cycle are distributed on the same side of the connection, as in Figure 8(b), and there is a direct connection between v_c and v_e, it is uncertain whether the identified loop is covered by an internal node; if all the other nodes of the cycle are distributed on the opposite side of the connection, as in Figure 8(c), and there is a direct connection between v_b and v_d, it is certain that the cycle is covered by a node, that is, the cycle is not a hole boundary.
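A small sketch of the boundary edge identification step follows: after simplification the Rips complex is close to planar, so an edge bounding a hole has at most one common neighbor node. Edges with zero or one common neighbors are returned as candidate boundary edges; false boundary edges and non-hole loops still have to be filtered afterwards, as described above.

```python
import numpy as np

def boundary_edges(A, edges):
    """Return edges whose two endpoints share at most one common neighbor."""
    candidates = []
    for i, j in edges:
        common = np.flatnonzero(A[i] * A[j])   # neighbors shared by i and j
        if len(common) <= 1:
            candidates.append((i, j))
    return candidates
```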
Boundary Loop Reduction.
After the boundary loops are filtered, the remaining boundary loops may still not be the shortest paths around their holes, or several loops may contain the same hole. Therefore, a loop reduction rule is defined to shorten the boundary loops. (1) If nodes v_i and v_j are neighbor nodes but not adjacent on the loop, check whether the area and circumference of the shortened loop are smaller than those of the original loop. If so, directly connect nodes v_i and v_j to shorten the boundary loop, as shown in Figure 9(a). (2) If v_i, v_j, and v_k are three adjacent nodes on the loop and node v_m is a common neighbor of the three nodes, check whether the area and circumference of the loop are both smaller than those of the original loop when node v_j is replaced by node v_m. If so, node v_m is used instead of node v_j to form a new loop, as shown in Figure 9(b). In the simulation scenario, each border node has two neighboring border nodes, the distance between two adjacent border nodes is 20 m, and each sensor node has a coverage radius of 10 m and a communication radius of 20 m.
Simulation and Analysis
6.1.1. Redundant Node Sleeping. Sleeping redundant nodes not only simplifies the Rips complex but also helps detect the holes in the network without merging holes or creating new ones. Figure 1 shows the original network diagram, and Figure 10(a) shows the network after the first round of redundant node sleeping; the node sleeping rate is 9%. The original holes are expanded in Figure 10(a), but no holes merge and no new holes appear. The boundary loops will be shortened later by the proposed algorithm.
6.1.2. Redundant Edge Deletion. An edge deletion rule is proposed to delete the redundant edges and simplify the Rips complex without merging holes or creating new ones. Usually, redundant nodes still exist in the network after the redundant edges are deleted, such as the red node in Figure 10(b); the simplification of the Rips complex then continues. The final simplified Rips complex is shown in Figure 10(c), in which the number of holes remains unchanged while the total hole area increases. Generally, the simplest network structure can be achieved after two rounds of simplification; the redundant node sleeping results are shown in Figure 11, where 10 nodes sleep after two rounds of redundant node sleeping and one node sleeps in the second round.
6.1.3. Hole Detection and Boundary Identification. The hole detection is divided into two steps: boundary loop identification and boundary loop reduction. The proposed algorithm first finds the boundary nodes and forms the boundary edge which is composed of boundary nodes, as shown in Figure 12(a). However, some boundary edges found are not really on the boundary of holes such as the red line segment in Figure 12(b). Therefore, some boundary edges are deleted by defining the false boundary edge, as shown in Figure 12(c). But some boundary loops are not real boundaries of holes. As shown in Figure 12(d), the length of the loop with red color is 4. Two nonadjacent nodes belonging to the red loop are neighbor nodes, which meets the second rule of the redundant loop reduction rule. Therefore, some boundary loops need to be deleted according to the redundant loop rule, and the result is shown in Figure 12(e). Finally, the boundary loops are shortened as shown in Figure 12(f).
Algorithm Performance Evaluation
6.2.1. Algorithm Complexity. The detection of holes in large-scale scenarios requires low complexity and high efficiency; therefore, the complexity of the algorithm is an important indicator for evaluating the efficiency of coverage hole detection. This section analyzes the complexity of the proposed algorithm. In the redundant node sleeping stage, the algorithm first checks whether a node meets the redundant node determination rule and then checks whether the node's clustering coefficient is the largest among the deletable neighbor nodes, so the complexity of this stage is O(n), where n is the number of nodes in the network. In the redundant edge deletion stage, all K_4 complete graphs containing each node are found first, and then it is judged whether all the 2-simplices containing each diagonal edge are in K_4 complete graphs; therefore, the complexity of this stage is O(cn), where c is the number of neighbor nodes of each edge. The hole detection process is divided into two steps: boundary loop identification and boundary loop reduction. In the boundary loop identification stage, the nodes are first weighted to form the boundary edges according to the number of neighbor nodes of an edge, with complexity O(cn). Then, false boundary edges are deleted according to the distribution of the neighbor nodes of the boundary edges and the boundary loops are formed, so the worst-case complexity is also less than O(cn). Finally, in the process of filtering the boundary loops, it is necessary to check whether there are two nonadjacent nodes in a loop that are neighbor nodes; if so, it must be checked whether the other nodes in the loop are distributed on the opposite side of the line segment formed by these two nodes, so the worst-case complexity is O(mH), where H is the number of holes, m is the number of nodes in a loop, H ≪ n, and m ≪ n. In the boundary loop reduction stage, the algorithm checks whether there are two nonadjacent nodes in a loop that are neighbor nodes and whether the common neighbor nodes of three adjacent nodes exist, so the worst-case complexity is O(mH). Thus, the overall complexity of hole detection is O(cn).
In summary, the complexity of the algorithm is O(cn), and the details are shown in Table 1, where c represents the number of neighbor nodes of each edge. The HBA [22] has a complexity of O(n³), and the tree-based hole detection algorithm [12] has a complexity of O(bn), where b is the number of neighbor nodes of each node. Thus, the proposed algorithm has the lowest complexity.
6.2.2. Detection Accuracy. The detection accuracy is another important indicator for evaluating the algorithm. The HBA algorithm defines the detection accuracy as the ratio of the number of detected holes to the total number of holes; meanwhile, the tree-based hole detection method defines it as the ratio of the estimated hole size to the actual hole size. Compared with the detection accuracy of the hole number, the detection accuracy of the hole area is more favorable to repair holes. Thus, the detection accuracy is defined as Equation (5) in this study.
where S represents the actual total area of the holes, S′ represents the total area of the holes detected by the proposed algorithm, and r represents the hole detection accuracy. On the basis of Equation (5), the accuracy of the proposed algorithm is 99.03%. On the basis of the above two indicators, Table 2 shows that the performance of the proposed algorithm is superior to that of the HBA and the tree-based hole detection method.
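The body of Equation (5) is omitted in the text; based on the surrounding definitions, the helper below assumes the accuracy is the ratio of the detected hole area S′ to the actual hole area S, expressed as a percentage.

```python
def detection_accuracy(detected_area, actual_area):
    """r = S'/S * 100%, under the assumption stated above."""
    return 100.0 * detected_area / actual_area

# e.g. detection_accuracy(S_detected, S_actual) -> 99.03 for the reported run
```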
Conclusions
The Rips complex constructed in this paper tends to be planar after a simplification process, which gives the coverage hole detection algorithm low complexity and high accuracy. In the simplification process, Turan's theorem is first combined with the degree and clustering coefficient concepts from complex network theory to define two kinds of nodes (determined nodes and nondetermined nodes), and redundant nodes are put to sleep according to the redundant node determination rule; then, the concept of the complete graph is used to derive the edge deletion rule for deleting redundant edges. Finally, the holes in the network are detected from the perspective of loops. Simulation results show that the detection accuracy of the hole area is 99.03% and the complexity is O(cn). Future work will focus on detecting coverage holes in 3-D space and on further improving detection efficiency.
Data Availability
The test data, simulation data, and the proposed method used to support the findings of this study are available from the corresponding author upon request. The proposed algorithm is patent-pending; thus, access to the data used in the proposed algorithm is currently restricted. | 8,815.8 | 2020-02-24T00:00:00.000 | [
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
"Computer Science",
"Engineering"
] |
Blind Remote Sensing Image Deblurring Using Local Binary Pattern Prior
In this paper, an algorithm based on the local binary pattern (LBP) is proposed to obtain clear remote sensing images when the cause of the blurring is unknown. We find that LBP records the texture features of an image completely and does not change significantly when blur is introduced. Therefore, an LBP prior is proposed, which can filter out the pixels containing important textures in the blurry image through a mapping relationship. Different types of pixels are processed in corresponding ways to cope with the challenges brought by the rich texture and details of remote sensing images and to prevent over-sharpening. However, the LBP prior increases the difficulty of solving the model. To solve it, we construct a projected alternating minimization (PAM) algorithm that involves the construction of a mapping matrix, the fast iterative shrinkage-thresholding algorithm (FISTA), and the half-quadratic splitting method. Experiments with the AID dataset show that the proposed method achieves highly competitive results for remote sensing images.
Introduction
Affected by many factors, such as the imaging environment and camera shake, an acquired image may suffer from quality degradation and loss of important details caused by blurring during the imaging process. Therefore, it is necessary to study how to restore the image without prior knowledge of the blur, i.e., blind image deblurring. Ideally, the image quality degradation model can be expressed as G = H ∗ U + N, where G, H, U, and N represent the blurry image, the blur kernel, the original clear image, and noise, respectively, and ∗ represents the convolution operator. Obviously, this is a typical ill-posed problem with countless candidate solutions. In order to find the optimal solution, we need to use image feature information to turn an ill-posed equation into a well-posed one. Traditional image deblurring algorithms are generally divided into two parts: a blind image deblurring algorithm, which estimates an accurate blur kernel, and non-blind image deblurring, which uses the estimated blur kernel to obtain a clear image.
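A minimal sketch of this degradation model is given below: a clear image U is convolved with a blur kernel H and corrupted by additive noise. scipy's 2-D convolution and the symmetric boundary handling are illustrative choices, not part of the model itself.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(U, H, noise_std=0.01, seed=0):
    """Simulate G = H * U + N for a grayscale image U and kernel H."""
    rng = np.random.default_rng(seed)
    G = convolve2d(U, H, mode="same", boundary="symm")
    return G + rng.normal(0.0, noise_std, size=G.shape)
```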
Early image restoration algorithms used parameterized models to estimate blur kernels [1,2]. However, real blur kernels rarely follow parameterized models, which limits the generality of such methods. After the total variation model proposed by Rudin et al. [3] in 1992, the theory of partial differential equations became more and more popular in blind image restoration. Since then, researchers have designed two algorithmic frameworks based on probability and statistics: maximum a posteriori (MAP) [4][5][6][7] and variational Bayes (VB) [8][9][10][11][12]. Although VB-based algorithms have good stability, their development is restricted by their complexity and heavy computation. In contrast, MAP-based methods are relatively simple. However, Levin et al. [13] pointed out that the naive MAP approach, which is based on a sparse derivative prior, cannot achieve the expected effect; it is therefore necessary to introduce an appropriate blur kernel prior and to delay the blur kernel normalization. From a probabilistic viewpoint, the issue is to recover the clear image U and the blur kernel H simultaneously, which is equivalent to solving the standard maximum posterior probability. It can be expressed as: (U, H) = arg max P(U, H|G) ∝ arg max P(G|U, H)P(U)P(H) (2) where P(G|U, H) is the noise distribution, and P(U) and P(H) are the prior distributions of the latent clear image and the blur kernel, respectively. After taking the negative logarithm of each item in (2), it is equivalent to the regular model (U, H) = arg min ψ(U ∗ H − G) + αΦ(U) + βΦ(H) (3) where ψ(·) is the fidelity term, Φ(U) and Φ(H) are regularization functions of U and H, and α and β are the corresponding weights. Under the MAP framework, some algorithms seek the optimal solution by utilizing sharp edges. However, when sharp edges are missing from an image, solving the problem requires prior knowledge that can distinguish between clear and blurry images. Pan et al. [14] noted the difference in dark channel pixel distribution between clear and blurry images and proposed the dark channel prior, which performs well on natural, text, facial, and low-light images. To avoid the algorithm failure caused by insufficient dark channel pixels in an image, Yan et al. [15] introduced both dark channels and bright channels into the blind image deblurring model. Ge et al. [16] pointed out that the above two methods fail when there are insufficient extreme pixels; therefore, they constructed a non-linear channel (NLC) prior and introduced it into a blind image deblurring algorithm. However, these methods may not perform well on remote sensing images.
In this paper, we notice that the similarity of LBP between clear and blurry images can be applied to blind image deblurring. A new optimization algorithm is proposed, inspired by algorithms based on extreme channels and local priors. We use the local binary pattern (LBP) [17] as the threshold to filter critical pixels containing texture features by establishing their mapping relationship to the image; the intensities of these pixels are accumulated by the strongly convex L1-norm. The gradients of different types of pixels are processed correspondingly using the L0-norm and the L2-norm. It is complicated to solve the restoration model directly, so the original problem needs to be decomposed into multiple sub-problems using the half-quadratic splitting method. In addition, we indirectly optimize the LBP prior by constructing a linear operator and adopt the fast iterative shrinkage-thresholding algorithm (FISTA) [18] to solve the related equations. The contributions of this paper are as follows: (1) We note that LBP can completely extract the texture features of an image and does not change significantly in the presence of blur; therefore, the LBP of the image can be used to locate, via a mapping, the pixels that contain important texture information. (2) A new remote sensing image deblurring algorithm based on the LBP prior is proposed, which removes the blur in the image and prevents over-sharpening by classifying all pixels and processing them in different ways. (3) As shown in the results, our proposed method, which has good stability and convergence, achieves extremely competitive results for remote sensing images.
The outline of the paper is as follows: Section 2 summarizes the related work in the field of image deblurring in recent years. Section 3 introduces the LBP prior and establishes the image-blind deblurring model and the corresponding optimization algorithm. Section 4 is the display of algorithm processing results. Section 5 makes a quantitative analysis of the performance of the proposed algorithm and discusses our algorithm after sufficient analysis and obtaining experimental results. Section 6 is the conclusion of this article.
Related Work
Blind image deblurring methods can now be roughly divided into three categories: edge selection-based algorithms, image prior knowledge-based algorithms, and deep learning-based algorithms. This part summarizes the achievements of blind image restoration in recent years.
Edge Selection-Based Algorithms
As one of the key features recording image information, edge information is widely used in image deblurring. Joshi et al. [19] used sub-pixel differences to locate important edges and estimate the blur kernel. Cho and Lee [20] proposed bilateral and shock filters to extract edge information from images. Xu and Jia [21] found that edges smaller than the blur kernel harm the kernel estimation; therefore, they proposed a new two-stage processing scheme based on edge selection criteria. Sun et al. [22] used a patch-based method to extract edge information, but this algorithm is computationally expensive and its image processing is time-consuming. When there are insufficient sharp edges in an image, edge selection-based algorithms fail. In algorithms based on prior knowledge, however, the image edge information does not disappear; it is hidden in the regularization term or prior knowledge, e.g., the low-rank characteristics of the gradient [23], the L0-norm of the gradient of the latent image [24], and the local maximum gradient prior [25]. In the restoration of hyperspectral images, gradient information has also received much attention. Yuzuriha et al. [26] took the low-rank nature of the gradient domain into account in the restoration model to make it better able to deal with anomalous variations.
Image Priors-Based Algorithms
By observing clear and blurry images, researchers have applied many priors conducive to image restoration. Shan et al. [27] used a probability model to process natural images with noise and blur. Krishnan et al. [28] proposed the L1/L2 norm with sparse features after analyzing the statistical characteristics of images. Levin et al. [10] designed a maximum a posteriori (MAP) framework based on the characteristics of image pixel distributions. Kotera et al. [29] improved the MAP method using image priors that are heavier-tailed than the Laplacian and applied an augmented Lagrangian method. Michaeli and Irani [30] used the recurrence of image patches across scales to restore images. Ren et al. [23] adopted a weighted nuclear norm minimization method, which combines the low-rank prior of similar patches in the blurry image and its gradient map, to enhance the effectiveness of image deblurring. Zhong et al. [31] proposed a high-order variational model to process blurred images with impulse noise. After Pan et al. [14] used the dark channel prior for image deblurring and achieved excellent results, sparse channels attracted much attention in blind image deblurring. Yan et al. [15] combined the dark channel and the bright channel and designed an extreme channel prior algorithm. Since then, Yang [32] and Ge [16] have made further improvements to address the problems faced by extreme channel prior algorithms. At the same time, blind image deblurring algorithms based on local prior information have also made significant achievements, e.g., the method based on the local maximum gradient (LMG) prior proposed by Chen et al. [25] and the method based on the local maximum difference (LMD) prior proposed by Liu et al. [33]. The algorithm we propose also belongs to this category. Recently, Zhou et al. [34] established an image deblurring model for the luminance channel in the YCrCb colorspace based on the dark channel prior, which opens a new path for better processing of color images. The image deblurring algorithm proposed by Chen et al. [35], which takes advantage of both saturated and unsaturated pixels in the image, effectively removes the blur of night scene images and is instructive for the processing of night remote sensing images. The restoration of hyperspectral images generally uses low-rank priors [26,36,37].
Deep Learning-Based Deblurring Methods
With the development of deep learning technology, related image deblurring algorithms have also been developed. Early learning networks, such as those proposed by Sun et al. [38] and Schuler et al. [39], were still designed based on the alternating direction method of multipliers used in traditional algorithms. Li et al. [40] combined deep learning networks with traditional algorithms, using neural networks to learn priors that distinguish images; however, this approach is inferior when dealing with complex and severely motion-blurred images. Some methods do not need to estimate the blur kernel and can obtain clear images directly through training. For example, Nah et al. [41] trained a multi-scale convolutional neural network (CNN). Cai et al. [42] introduced the extreme channel prior into a CNN. Zhang et al. [43] and Suin et al. [44] used multi-patch networks to improve performance. In order to reduce the computational complexity of the algorithms and obtain sufficient image information, the feature pyramid has become the focus of multi-scale learning; Lin et al. [45] proposed a feature pyramid network (FPN) that can fuse mapping information of different resolutions. In the field of deep learning, new networks are constantly being proposed to deal with different imaging situations and to shorten processing time, e.g., GAMSNet [46], ID-Net [47], DCTResNet [48] and LSFNet [49]. However, these methods have many model parameters and require large datasets and long training to achieve good processing results.
Local Binary Pattern Prior Model and Optimization
In this part, we briefly introduce the local prior, i.e., LBP, and construct an optimization model based on the LBP prior for blind image deblurring.
The Local Binary Pattern
LBP, which has the characteristics of intensity and rotation invariance, can extract local texture features in an image. Its principle is shown in Figure 1. In a 3 × 3 window, the center pixel is compared with its 8 neighboring pixels. When a peripheral pixel is smaller than the center pixel, its location is marked as 0; otherwise, it is marked as 1. The generated 8-bit binary number is then encoded to obtain the LBP value corresponding to the center pixel of the window. The formula is LBP(a_c, b_c) = Σ_{p=0}^{7} s(g_p − g_c) · 2^p, where (a_c, b_c) is the center pixel, g_c is the intensity of the central pixel, g_p is the intensity of the p-th peripheral pixel, and s(t) is the sign function, which equals 1 when t ≥ 0 and 0 otherwise.
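A direct implementation of this 3 × 3 LBP is sketched below: each peripheral pixel is compared with the center pixel and the resulting 8-bit pattern is encoded as an integer in [0, 255]. The clockwise neighbor ordering is an illustrative choice, and border pixels are skipped for brevity.

```python
import numpy as np

def lbp(img):
    """Compute the 3x3 LBP map of a grayscale image (2-D numpy array)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbors, clockwise
    for a in range(1, h - 1):
        for b in range(1, w - 1):
            code = 0
            for p, (da, db) in enumerate(offsets):
                code |= int(img[a + da, b + db] >= img[a, b]) << p
            out[a, b] = code
    return out
```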
The Local Binary Pattern Prior
By observing the histograms of the LBP distributions of clear and blurred images, it can be noticed that blur does not change the distribution of LBP values over a wide range. Thus, LBP can be used to locate the key pixels for the image restoration process. Different types of pixels are then processed accordingly, so as to sharpen the image while removing fine textures (Figure 2).
For the key pixels, we use the convex L1-norm for accumulation to define the LBP prior. At the same time, as shown in Formula (7), we constrain the gradients of key pixels with the L0-norm and use the L2-norm to constrain the gradients of the other pixels, where ThL and ThM are the lower and upper limits of the threshold, respectively, LBPU is the LBP value of the image, and ∇ is the gradient operator. According to the above analysis, we use MAP as the framework and introduce the LBP prior to design an effective optimization algorithm. In the objective function (8), α, β, and γ are the weights of the corresponding regularization terms; its items are the fidelity term, the term related to the LBP prior, the image gradient regularization term, and the constraint term that keeps the blur kernel H smooth. The projected alternating minimization (PAM) algorithm is used to decompose the objective function into sub-problems to solve for the clear image U and the blur kernel H.
where (i, j) represents the coordinates of a blur kernel element. Note that all elements of H are nonnegative and sum to 1. After iteratively estimating the blur kernel, we can restore a clear image with existing non-blind restoration methods.
Estimating the Latent Image
Because of the LBP prior and the regularization term φ(∇U), it is necessary to decompose (9) into three sub-problems by the half-quadratic splitting method, where λ1 and λ2 are penalty parameters and w and z are auxiliary variables. When λ1 and λ2 tend to infinity, (9) and (12) are equivalent. First, we solve for the parameter w. Since L(U) is a non-linear operator, it cannot be solved linearly in a direct way; therefore, we construct a sparse mapping matrix C from the key pixels to the original image. As in [16], C is computed explicitly, and (14) is equivalent to a classic convex L1-regularized problem, which can be solved by FISTA [18] using a contraction (soft-thresholding) operator. The solution process is shown in Algorithm 1.
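The sketch below shows the soft-thresholding (contraction) operator and a generic FISTA loop of the kind used for this L1-regularized sub-problem. The gradient callable, step size, and regularization weight are placeholders; the paper's exact sub-problem (including the mapping matrix C) is not reproduced here.

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(grad, reg_weight, x0, step, n_iter=500):
    """Generic FISTA: x_{k+1} = shrink(y_k - step*grad(y_k), step*reg_weight)."""
    x_prev, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x = shrink(y - step * grad(y), step * reg_weight)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum step
        x_prev, t = x, t_next
    return x_prev
```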
The next step is to solve for z. When ϕ(z) = ||z||_0, the sub-problem has the standard element-wise hard-thresholding solution; when ϕ(z) = ||z||_2^2, it has a closed-form least-squares solution. Given w and z, we finally solve for U. Although (21) is a least-squares problem, we cannot solve it directly using the Fast Fourier Transform (FFT); therefore, it is necessary to introduce a new auxiliary variable d, with λ3 as a penalty parameter. The alternating direction method of multipliers is used to decompose (22) into two sub-problems, (23) and (24), both of which have closed-form solutions. Equation (24) can be solved with the FFT, where F, F̄, and F^{-1} represent the FFT, the complex conjugate of the FFT, and the inverse FFT, respectively. The process of solving U is shown in Algorithm 2.
Estimating the Blur Kernel
Referring to a variety of restoration algorithms [14-16,24,25,38], we replace the image intensities in (10) with the image gradients to make the estimated blur kernel more accurate. The resulting kernel sub-problem can be solved using the FFT, as in (28). After the blur kernel is obtained, it is made non-negative and normalized, as in (29), where m is a weight. The solution process of the blur kernel is shown in Algorithm 3. Figure 3 is a brief flow-chart of the whole algorithm.
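A sketch of this kernel step follows: a Wiener-type closed form, a common choice consistent with solving the gradient-domain least-squares problem via the FFT under circular boundary assumptions, followed by the non-negativity and normalization of (29). The role of the weight m is not spelled out in the text, so it is omitted here, and cropping the result to the kernel support is left as a comment.

```python
import numpy as np

def normalize_kernel(H):
    H = np.clip(H, 0.0, None)          # enforce non-negativity
    s = H.sum()
    return H / s if s > 0 else H       # elements sum to 1

def estimate_kernel_fft(gradU, gradG, gamma):
    """Solve min_H ||gradU * H - gradG||^2 + gamma*||H||^2 in the Fourier domain."""
    FU, FG = np.fft.fft2(gradU), np.fft.fft2(gradG)
    H = np.real(np.fft.ifft2(np.conj(FU) * FG / (np.abs(FU) ** 2 + gamma)))
    # in practice the result is cropped to the kernel support before use
    return normalize_kernel(H)
```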
Algorithm Implementation
This section describes the implementation details of the algorithm. In order to obtain a more accurate blur kernel, we construct a coarse-to-fine multi-scale image pyramid with a down-sampling factor of √2/2, and the total number of iterations at each pyramid level is 5. After estimating the latent clear image U and the blur kernel H at one level, we up-sample the blur kernel H and pass it to the next level. We usually set α = 0.004-0.012, β = 0.004-0.014, γ = 2, ThL = 0.5, ThM = 1, and m = 0.8. The number of iterations of the FISTA algorithm is empirically set to 500. These parameters can be adjusted as needed. Finally, a clear image is obtained by applying the estimated blur kernel in an existing non-blind deblurring method.
Experiment Results
In this paper, the experiments are divided into simulated and real remote sensing image data. Our method is tested on the AID dataset (http://www.captain-whu.com/project/AID/, accessed on 26 November 2021). The following shows the comparison of our method with four algorithms, which use the heavy-tailed prior (HTP) [29], the dark channel prior (Dark) [14], the L0-regularized intensity and gradient prior (L0) [24], and the non-linear channel prior (NLCP) [16].
Simulated Remote Sensing Image Data
AID is an aerial image dataset composed of sample images collected from Google Earth. The dataset consists of 30 types of aerial scenes, all labeled by experts in remote sensing image interpretation, totaling 10,000 images. In the simulated remote sensing image experiments, we selected four images from the AID dataset to verify the effectiveness of the algorithm, as shown in Figure 4. Taking into account the types of blur in actual remote sensing images, we added motion blur, Gaussian blur, and defocus blur to the images, respectively, and adopted the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) [51], and Root Mean Squared Error (RMSE) as evaluation indexes.
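For reference, two of the evaluation indexes used in the simulated experiments can be computed as below; SSIM is typically taken from an image library such as scikit-image and is omitted here.

```python
import numpy as np

def rmse(x, y):
    """Root Mean Squared Error between two images."""
    return float(np.sqrt(np.mean((x.astype(float) - y.astype(float)) ** 2)))

def psnr(x, y, peak=255.0):
    """Peak Signal-to-Noise Ratio; `peak` is the maximum pixel value."""
    e = rmse(x, y)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```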
Motion Blur
We added motion blur with an angle of 0° and a displacement of 10 pixels to the remote sensing images. The processing results of each method are shown in Table 1. HTP sharpens the main contour edges of the image very well, but its ability to retain image details is poor, and the images processed by HTP retain artifacts. Dark and L0 use L0-norm processing for the image prior terms, which leads to over-sharpening; this phenomenon is particularly severe for remote sensing images with complex texture details. NLCP and our method use convex L1-norm accumulation for the image prior terms; their results have good visual effects and do not exhibit the over-sharpening of Dark and L0, but the PSNR, SSIM, and RMSE of our method are better. However, for Figure 4b, which contains many fine details, NLCP sharpens the tiny details in the image and degrades the resulting image quality. In general, our method performs well in processing remote sensing images with motion blur. Figures 5 and 6 show representative images of the processing results.
Gaussian Blur
For Gaussian blur, we set its size to 20 × 20 and its standard deviation to 0.5. The evaluation results of each method are shown in Table 2. It can be seen that the performance of each method in processing Gaussian blur is similar to that in processing motion blur. The images processed by HTP lack detailed information. Dark and L0 cause more serious damage to images with rich details. NLCP performs well in most cases and achieves results visually comparable to those of our method; however, when processing Figure 4b, which has many fine details, it still sharpens the tiny details in the image, which greatly reduces both the visual quality and the objective indicators. In general, in removing Gaussian blur from remote sensing images, the images restored by our method achieve good visual effects and objective indicators. Figures 7 and 8 show representative images of the processing results.
Defocus Blur
We added defocus blur with a radius of 2 to the remote sensing image. The evaluation results of each method are shown in Table 3. Through observation, it can be seen that HTP cannot retain the detailed information of the image. Except for Figure 4b, the results of Dark processing have no obvious over-sharpening phenomenon. The result has excellent evaluation indexes, especially when Dark deals with Figure 4c with scarce detail information. The performance of NLCP is very stable and can achieve good results in most cases. Overall, for processing remote sensing images with defocus blur, the images processed by our method and NLCP achieve excellent visual effects compared to other methods, but our method achieves higher objective metrics. Figures 9 and 10 show a representative image of the processing result.
Real Remote Sensing Image Data
In the real remote sensing image experiments, we selected four blurry images from the AID dataset (Figure 11a-d) and a target image taken in our own experiment (Figure 11e), as shown in Figure 11, to test the effectiveness of our algorithm. In this case, there are no reference images, so no-reference image sharpness assessment indexes are applied: image Entropy (E) [52], Average Gradient (AG), and Point sharpness (P) [53]. From the comparison of Tables 4 and 5 and Figures 12-16, it is clear that HTP restores the overall outline of the image very well but lacks the ability to retain detailed information. The processing results of the Dark and L0 algorithms face the problem of over-sharpening. When processing Figure 11a,d, which contain very little texture information, the over-sharpening effect is not obvious, and the resulting images have good visual effects and objective evaluation indicators; however, this effect is particularly serious when dealing with images containing many details and scenes (Figure 11b,c,e). The images processed by our algorithm and NLCP retain more image details, and their visual effects are almost the same, but our algorithm achieves higher objective evaluation indicators. Figure 11. Selected remote sensing images (Real). Table 4. Quantitative comparisons on real remote sensing images.
Analysis and Discussion
In this part, we test the effectiveness of the LBP prior, the influence of the hyper-parameters, and the convergence of the algorithm, and we discuss our algorithm after sufficient analysis of the experimental results. In all testing experiments, we used the Levin dataset [13], which contains four images and eight different blur kernels. All blind image deblurring algorithms use the same non-blind restoration method to obtain the final clear images, to ensure the reliability of the experiments. In addition, the Error Ratio [13], Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) [51], and Kernel Similarity [54] are used as quantitative evaluation indicators. All experiments are run on a computer with an Intel Core i5-1035G1 CPU and 8 GB RAM.
Effectiveness of the LBP Prior
Although the image deblurring algorithm based on LBP prior can effectively obtain a clear image in theory, it needs to be quantitatively evaluated and verified in the actual situation. As shown in Figure 17, the proposed algorithm is compared with HTP [29], Dark [14], L 0 [24] and NLCP [16]. The performance of our algorithm in error accumulation rate is second only to MAP, and it is better than other methods in the two indicators of average PSNR and average SSIM. Overall, LBP prior can effectively remove the blur in the image and improve the image quality.
Effect of Hyper-Parameters
The objective function (8) proposed in this paper contains five main hyper-parameters, i.e., α, β, γ, ThL, and ThM. To explore the influence of these five hyper-parameters on the proposed algorithm, we changed only one parameter at a time while keeping the others fixed, and then calculated the kernel similarity between the estimated blur kernel and the ground truth kernel. If the kernel similarity does not change substantially as a hyper-parameter changes, the estimated blur kernel is relatively stable, i.e., the algorithm is not sensitive to that hyper-parameter. As shown in Figure 18, our method is not sensitive to changes of the hyper-parameters over a wide range.
Convergence Analysis
PAM is widely used in various algorithms as an effective method for making the objective function converge to a good solution. For the convergence of the PAM method applied to the total variation algorithm, reference [6] explains the delayed scaling (normalization) in the iterative step of the blur kernel. Our algorithm also applies the PAM method and uses the half-quadratic splitting method to decompose the objective function into several sub-problems. It is known that each sub-problem has a convergent solution [18,55], but overall theoretical results on convergence are limited. The convergence of the algorithm can be verified empirically by computing the average energy function (8) and the average kernel similarity as they change with the iterations at the optimal scale on the Levin dataset. As shown in Figure 19, our method converges after approximately 19 iterations, and the kernel similarity stabilizes after 40 iterations, which supports the convergence of the method.
Run-Time Comparisons
To explore the running efficiency of each algorithm, we tested the average running time of different restoration algorithms. The test results are shown in Table 6. By comprehensively analyzing the data in Table 6 and the conclusions in Section 5.1, it is clear that our method achieves very competitive processing results while spending relatively less time.
Limitations
Although our method excels in processing remote sensing images, it has limitations. The hyper-parameters used by our algorithm are not fixed due to the introduction of LBP priors. To achieve the best processing effect, it is necessary to select corresponding parameters for different processed images. In addition, the presence of noise in the image will also affect the processing effect. Significant noise, especially the strip noise that often appears in remote sensing imaging, will make our method misjudge important pixels and amplify the noise. At the same time, to get a clear image, our algorithm uses PAM, which includes the half-quadratic splitting method and FISTA, to solve the objective function, and updates the parameters iteratively. This way of solving will undoubtedly increase the amount of calculation and make the whole process more time-consuming. Therefore, for future work, we will focus on exploring the factors affecting the hyper-parameters to improve the LBP prior further, using the LBP prior to remove noise and blur at the same time and reducing the amount of calculation to shorten the processing time.
Conclusions
Unlike existing methods dedicated to exploring priors that clearly distinguish between clear and blurry images, the prior we introduce is used to select critical pixels for restoring images by focusing on their similarities. Because remote sensing images contain small scenes and numerous details, the proposed algorithm uses LBP to filter pixels and processes different types of pixels separately, preventing over-sharpening while restoring the images. By introducing the LBP prior, we established an optimization model based on PAM, which involves the construction of the mapping matrix, the fast iterative shrinkage-thresholding algorithm (FISTA), and the half-quadratic splitting method. The proposed algorithm has good convergence and stability, and the experimental results show that our method outperforms existing algorithms in deblurring remote sensing images. Moreover, we believe that the proposed algorithm can provide a new idea for the further development of remote sensing image processing.
Acknowledgments:
The authors are very grateful to the editors and anonymous reviewers for their constructive suggestions. The authors also thank Ge Xianyu for their help, which was of great significance.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 6,717.2 | 2022-03-05T00:00:00.000 | [
"Mathematics",
"Environmental Science"
] |
A Simplified CNN Classification Method for MI-EEG via the Electrode Pairs Signals
A brain-computer interface (BCI) based on electroencephalography (EEG) can provide independent information exchange and control channels between the brain and the outside world. However, EEG signals come from multiple electrodes, whose data can generate multiple features; how to select electrodes and features to improve classification performance has become an urgent problem. This paper proposes a deep convolutional neural network (CNN) structure with separated temporal and spatial filters, which takes the raw EEG signals of electrode pairs over the motor cortex region as hybrid samples without any preprocessing or artificial feature extraction. In the proposed structure, a 5-layer CNN is applied to learn EEG features, 4 max-pooling layers are used to reduce dimensionality, and a fully-connected (FC) layer is utilized for classification. Dropout and batch normalization are also employed to reduce the risk of overfitting. In the experiments, 4 s of EEG data from 10, 20, 60, and 100 subjects of the Physionet database are used as the data source, and the motor imagery (MI) tasks are divided into four types: left fist, right fist, both fists, and both feet. The results indicate that the global averaged accuracy on group-level classification reaches 97.28%, the area under the receiver operating characteristic (ROC) curve reaches 0.997, and the electrode pair with the highest accuracy on the 10-subject dataset is FC3-FC4, with 98.61%. The results also show that this CNN classification method can obtain high accuracy with a minimal number (2) of electrodes, which is an advantage over other methods on the same database. The proposed approach provides a new idea for simplifying the design of BCI systems and accelerates the process toward clinical application.
INTRODUCTION
Motor imagery electroencephalography (MI-EEG) is a self-regulated EEG signal produced without an external stimulus, which can be detected by electrodes. Literature surveys suggest that MI produces changes in the motor cortex region consistent with those caused by actual movement (Jenson et al., 2019; Kwon et al., 2019). A brain-computer interface (BCI) is a communication channel between the brain and the outside world, and various types of thinking activities in the brain can be detected through EEG (Atum et al., 2019; Mebarkia and Reffad, 2019; Meziani et al., 2019). The application of BCI in rehabilitation training can help patients who think normally but suffer from neuromuscular paralysis interact with the outside world (Leeb et al., 2015; Rupp et al., 2015; Müller-Putz et al., 2017; Wang L. et al., 2019). In addition, EEG studies have been conducted on the control of intelligent wheelchairs (Pinheiro et al., 2018), robotic arms (Meng et al., 2016), and other external devices (He et al., 2015; Edelman et al., 2019). A major challenge for BCIs is to interpret movement intention from brain activity. An efficient neural decoding algorithm can significantly improve decoding accuracy and thus the performance of a BCI. The low signal-to-noise ratio of EEG leads to low classification accuracy; therefore, effective feature extraction and classification methods have become an important research topic for MI-EEG. Commonly used feature extraction algorithms include the wavelet transform (WT) (Xu et al., 2018), common spatial patterns (CSP) (Kumar et al., 2016), variations of CSP (Kim et al., 2016; Sakhavi and Guan, 2017), empirical mode decomposition (EMD) (Kevric and Subasi, 2017), and so on.
Deep learning (DL) has attracted attention in many areas for its superior performance. DL can effectively deal with nonlinear and non-stationary data and learn underlying features from signals. Several deep learning methods have been employed for the classification of EEG signals (Cecotti and Graser, 2010;Bashivan et al., 2015;Corley and Huang, 2018). Convolutional neural networks (CNNs) have been widely used in MI-EEG classification on account of their ability to learn features from local receptive fields. Because repeated convolutional layers act as trained detectors of increasingly abstract features, CNNs are suitable for complex EEG recognition tasks and have achieved good results in the work of many researchers (Amin et al., 2019;Hou et al., 2019;Jaoude et al., 2020;Zhang et al., 2020).
Preprocessing raw EEG signals can improve the signal-to-noise ratio of EEG and the classification accuracy, but it is not strictly necessary. CNNs are biologically inspired variants of multilayer perceptrons designed to use minimal preprocessing (LeCun et al., 1998). For example, Dose et al. (2018) and Tang et al. (2017) used CNNs to classify raw EEG signals directly. Shen et al. (2017) combined RNNs with a CNN to enhance the feature representation and classification capabilities on raw MI-EEG, inspired by speech recognition and natural language processing. Schirrmeister et al. (2017) established a deeper neural network to decode imagined or executed tasks from raw EEG signals. Hajinoroozi et al. (2016) proposed an improved CNN on raw EEG signals to predict a driver's cognitive state related to driving performance, which achieved good results. It can be seen that good MI-EEG classification can also be obtained from the original signals. CNNs can take multidimensional data directly as input, avoiding the complicated artificial feature extraction process while still extracting distinctive feature information.
The number of electrodes affects the classification accuracy. In general, higher accuracy can be achieved with more electrodes, based on the comparison results of Yang et al. (2015) and Cecotti and Graser (2010). Karácsony et al. (2019) further showed that increasing the number of electrodes can improve classification and recognition accuracy without changing the dataset or the classification method. However, increasing the number of electrodes also increases the complexity of BCI systems. Although some BCIs have better recognition accuracy, their system structures are complex (Chaudhary et al., 2020;Tang et al., 2020).
In this paper, we propose a CNN architecture with separated temporal and spatial filters, which classifies the raw MI-EEG signals of symmetric left and right brain electrodes without any preprocessing or artificial feature extraction. It has a 5-layer CNN, in which four layers convolve along the temporal axis and the remaining layer convolves along the spatial axis. It uses 4-layer max pooling to reduce dimensionality and a fully-connected (FC) layer for classification. Dropout and batch normalization are used to reduce the risk of overfitting.
CNNs have made remarkable achievements in the field of image classification. Multi-channel EEG data are also two-dimensional, but the time and channel axes of EEG have different units. Unlike other CNN methods that treat EEG data as images for classification, our method uses separate temporal and spatial filters and focuses on detecting time-related features in EEG signals, which helps to improve accuracy.
Deep learning usually achieves better classification performance as the size of the training data increases. On the basis of the Physionet database, we also set up a hybrid dataset including samples from 9 electrode pairs of 100 subjects. Each sample contains information from only a single pair of electrodes of a single subject, so the dimension of a sample and the processing difficulty are reduced.
The remainder of this paper is organized as follows: section 2 briefly introduces the dataset. In section 3, the CNN theory, construction and classification are described. Details of the experimental results and analysis are discussed in section 4. Finally, section 5 concludes this paper.
The Framework
The system framework of the proposed method is demonstrated in Figure 1.
(1) We downloaded the data of each subject, shuffled it randomly by trial, and divided it into 10 pieces; this procedure was applied to each subject's data. (2) We took one piece as the test set and the other nine as the training set, pooling the data of multiple subjects in both sets. The raw MI-EEG signals of nine pairs of symmetrical electrodes over the motor cortex region were extracted from each trial, and the signals of each pair constituted one sample. (3) We trained the proposed CNN model using the training set.
The 5-layer CNN learned EEG features, and the 4-layer max pooling reduced the dimensions. The FC layer divided MI into four types: left fist, right fist, both fists, and both feet. By comparing the predictions with the four types of labels, the optimal training model was obtained. Finally, we verified the validity of the model on the test set. (4) Adopting 10-fold cross validation, model training and testing were carried out 10 times, yielding 10 results whose average is reported as the global averaged accuracy.
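The following minimal sketch illustrates the per-subject shuffling, 10-fold splitting, and cross-subject pooling described above; the subject arrays are synthetic stand-ins (the actual Physionet loading code and CNN training are not shown), and Python with NumPy is an assumed implementation choice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for the per-subject data: 10 subjects, 84 trials each,
# every trial already reduced to a 1,280-point electrode-pair sample.
subjects = [(rng.standard_normal((84, 1280)), rng.integers(0, 4, 84))
            for _ in range(10)]

def ten_fold_indices(n_trials, rng):
    # Shuffle one subject's trials and divide them into 10 pieces.
    # (The paper additionally balances the split per task class; that detail
    # is omitted here for brevity.)
    return np.array_split(rng.permutation(n_trials), 10)

for test_fold in range(10):                      # 10-fold cross validation
    X_tr, y_tr, X_te, y_te = [], [], [], []
    for trials, labels in subjects:              # pool folds across subjects
        folds = ten_fold_indices(len(trials), rng)
        te = folds[test_fold]
        tr = np.concatenate([f for k, f in enumerate(folds) if k != test_fold])
        X_tr.append(trials[tr]); y_tr.append(labels[tr])
        X_te.append(trials[te]); y_te.append(labels[te])
    X_train, y_train = np.concatenate(X_tr), np.concatenate(y_tr)
    X_test, y_test = np.concatenate(X_te), np.concatenate(y_te)
    # ... train the proposed CNN on (X_train, y_train) and evaluate on (X_test, y_test)
```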
Dataset
This paper used the Physionet MI-EEG database, which was recorded by the developers of the BCI2000 system (Goldberger et al., 2000;Schalk et al., 2004). According to the international 10-10 system (excluding electrodes NZ, F9, F10, FT9, FT10, A1, A2, TP9, TP10, P9, and P10), the original data were recorded from 64 electrodes and cover 4 MI tasks. The database contains more than 1,500 one-minute and two-minute EEG records from 109 different subjects, with a sampling frequency of 160 Hz. EEG data acquisition typically uses 32 or 64 electrodes. There are many reasons for reducing the number of electrodes used (Tam et al., 2011). First, fewer electrodes save preparation time for electrode placement. Second, fewer electrodes reduce the cost of acquisition hardware. Third, and most importantly, when running BCI systems, the overfitting risk of classifiers and spatial filters increases with the number of irrelevant electrodes.
It is important to select proper electrodes and their locations in BCI systems. Fewer electrodes but incorrect locations may lose important information, while too many electrodes may produce redundant information, thereby reducing system performance. Therefore, electrode selection is of great significance for EEG analysis. In this paper, nine pairs of symmetrical electrodes (FC5-FC6, FC3-FC4, FC1-FC2; C5-C6, C3-C4, C1-C2; CP5-CP6, CP3-CP4, CP1-CP2) over the motor cortex region were selected as the research objects, which are displayed in Figure 2A.
Each subject conducted four MI tasks: left fist, right fist, both fists, and both feet, denoted T1, T2, T3, and T4, and 21 trials were performed for each MI task. The timing diagram of a trial is shown in Figure 2B. The trial starts at t = −2 s, and the subject relaxes for 2 s. At t = 0 s, the target appears on the screen: (1) L indicates motor imagination of opening and closing the left fist; (2) R indicates motor imagination of opening and closing the right fist; (3) LR indicates motor imagination of opening and closing both fists; (4) F indicates motor imagination of opening and closing both feet.
The subject was cued to execute the corresponding MI task for 4 s. At t = 4 s, the target disappears and the trial finishes. After a 2 s rest, a new trial begins (Dose et al., 2018). Because each motor imagination was performed for about 4 s and the sampling frequency is 160 Hz, the effective data size of each electrode per trial is 640 points. A sample contains a pair of symmetrical electrodes whose data are concatenated in series, so the data size of a sample is 1,280.
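As an illustration of how one hybrid sample is assembled, the sketch below cuts the 4-s cue window (640 points at 160 Hz) out of each of the two symmetric electrodes and concatenates them in series; the electrode index map and the synthetic trial array are hypothetical placeholders, not the actual Physionet channel layout.

```python
import numpy as np

FS = 160                                  # sampling frequency (Hz)
WINDOW = 4 * FS                           # 4-s MI window -> 640 points per electrode
ELECTRODES = {"FC3": 10, "FC4": 14}       # hypothetical channel indices, for illustration only

def pair_sample(trial_eeg, left, right, onset):
    """Build one 1,280-point sample from a symmetric electrode pair.

    trial_eeg : (n_electrodes, n_samples) raw EEG of one trial
    left/right: electrode names of the pair
    onset     : sample index of t = 0 s (cue onset)
    """
    l = trial_eeg[ELECTRODES[left], onset:onset + WINDOW]    # 640 points
    r = trial_eeg[ELECTRODES[right], onset:onset + WINDOW]   # 640 points
    return np.concatenate([l, r])                            # 1,280 points in series

trial = np.random.randn(64, 6 * FS)       # synthetic 6-s trial (2 s rest + 4 s MI), 64 channels
sample = pair_sample(trial, "FC3", "FC4", onset=2 * FS)
print(sample.shape)                       # (1280,)
```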
Each subject carried out 21 trials of each MI task, a total of 84 trials. In this paper, 10-fold cross validation was carried out on the datasets. We divided all trials of a subject into 10 parts. For each task class, we used 2 trials for testing and the rest for training; therefore, per subject there are 8 trials in the test set and 76 trials in the training set. There are 840 trials in the 10-subject dataset (S1∼S10), 760 for training and 80 for testing; 1,680 trials in the 20-subject dataset (S1∼S20), 1,520 for training and 160 for testing; 5,040 trials in the 60-subject dataset (S1∼S60), 4,560 for training and 480 for testing; and 8,400 trials in the 100-subject dataset (S1∼S100), 7,600 for training and 800 for testing. In addition, we extracted 9 samples from each trial. The 10-subject dataset with 7,560 samples, the 20-subject dataset with 15,120 samples, the 60-subject dataset with 45,360 samples, and the 100-subject dataset with 75,600 samples were used for model training and generalization performance verification.
CNN Theory
The CNN structure imitates the complex cerebral cortex of the human brain. It relies only on a large training dataset to train a complex model, uses backpropagation and gradient descent optimization to learn features, and uses a series of filtering, normalization, and nonlinear activation operations to extract features (Wu et al., 2019;Mohseni et al., 2020).
Each convolutional layer in a CNN consists of multiple convolution kernels of the same size for feature extraction. Each kernel is a two-dimensional matrix of weights. The value of each neuron in the convolutional layer is obtained by multiplying the data of the previous input layer with a convolution kernel and then adding the corresponding offset. When performing feature extraction, the kernel sequentially scans the input data of the upper layer according to a certain stride and mode setting. In addition, the kernels and the data in the previous layer are dot-multiplied to obtain the convolution subgraph (Zheng et al., 2020).
In the operation of the convolutional layer, two important characteristics, local connection and weight sharing, are used (Dai et al., 2019;Sun et al., 2019). The local connection is similar to a local receptive field. It is mainly used to extract features at an appropriate granularity and to reduce the number of CNN parameters. Weight sharing means that all neurons in the same convolution subgraph share the same weights and bias, which reduces the number of network parameters, the amount of computation, and the risk of overfitting (Acharya et al., 2018;Podmore et al., 2019).
The mathematical expression of the convolutional layer is
$$x_j^{(l)} = f\Big(\sum_{i \in M_j} x_i^{(l-1)} * k_{ij}^{(l)} + b_j^{(l)}\Big),$$
where $x_i^{(l-1)}$ is the $i$-th feature map of the previous layer, $M_j$ is the set of input feature maps, $*$ denotes the convolution operation, $k_{ij}^{(l)}$ is the convolution kernel, $b_j^{(l)}$ is the bias, and $f$ is the activation function. CNNs use the multidimensional original signals as the input of the network and rely on the backpropagation learning algorithm to turn the hidden layers into suitable feature extractors, avoiding the complex artificial feature extraction process. CNNs are suitable for signals such as EEG that change greatly over time (Zuo et al., 2019).
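To make the formula concrete, the following small NumPy sketch computes one temporal feature map with a "valid" convolution and a leaky-ReLU activation; the kernel values, bias, and 640-point input are random placeholders, and the function names are purely illustrative.

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    return np.where(z >= 0, z, alpha * z)

def conv1d_valid(x, kernel, bias, f=leaky_relu):
    # y[t] = f( sum_i x[t + i] * k[i] + b ): one temporal feature map ('valid' mode)
    K = len(kernel)
    z = np.array([np.dot(x[t:t + K], kernel) for t in range(len(x) - K + 1)])
    return f(z + bias)

x = np.random.randn(640)        # one electrode, one 4-s trial at 160 Hz
k = np.random.randn(11)         # temporal kernel of length 11, as in conv layer 1
y = conv1d_valid(x, k, bias=0.0)
print(y.shape)                  # (630,), i.e. 640 - 11 + 1
```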
CNN Structure
We selected 10 subjects as the dataset, with a total of 7,560 samples, of which 6,840 formed the training set and 720 the test set. We performed a series of experiments to determine the number of layers and their parameters in the structure. Leaky ReLU (Dose et al., 2018;Macdo et al., 2019) was chosen as the activation function to avoid the vanishing gradient problem. The optimizer adopted the Adam algorithm (Dose et al., 2018;Chang et al., 2019), which updated the weights and biases through the backpropagation algorithm, and the learning rate was 1 × 10⁻⁵.
In the experiments, each network structure was run 10 times, with 2,000 iterations each time. Finally, we settled on a 5-layer CNN with 4-layer max pooling. This model also used dropout and batch normalization to reduce the risk of overfitting.
The selected CNN architecture is shown in Table 1: the first layer is the input layer; the second, third, fifth, seventh, and ninth layers are the convolutional layers; the fourth, sixth, eighth, and tenth layers are the max pooling layers; and the eleventh layer is the FC layer.
The input data format of CNN is: T × N, where T refers to the sampling amount of each channel and N is the number of electrodes used. In this paper, T = 640, N = 2.
The block diagram of the CNN is given in Figure 3. This paper mainly uses one-dimensional convolution, which helps extract important local features between adjacent element values of the feature vector (Schirrmeister et al., 2017). In convolutional layer 1, one-dimensional convolution along the temporal axis is carried out with 25 kernels of size [11, 1, 1, 25]. After convolution, the data size becomes (1, 630, 2, 25), where 25 is the channel dimension. In convolutional layer 2, convolution is performed along the spatial axis. The kernel size is [1, 2, 25, 25], where the first 25 is the number of input channels and the last 25 is the number of kernels. After convolution, the size becomes (1, 630, 1, 25). In pooling layer 1, max pooling is carried out with a kernel of [1, 3, 1, 1] and a stride of [1, 3, 1, 1], giving an output size of (1, 210, 1, 25). In convolutional layer 3, convolution is conducted along the temporal axis with 50 kernels of size [11, 1, 25, 50]. After convolution, the data size becomes (1, 200, 1, 50). The parameters of pooling layer 2 are the same as those of pooling layer 1, and the output size is (1, 66, 1, 50). In convolutional layer 4, there are 100 kernels of size [11, 1, 50, 100]. After convolution along the temporal axis, the data size becomes (1, 56, 1, 100). In pooling layer 3, the parameters are the same as above, and the output size is (1, 18, 1, 100). In convolutional layer 5, convolution is performed along the temporal axis with 200 kernels of size [11, 1, 100, 200], and the data size after convolution becomes (1, 8, 1, 200). In pooling layer 4, max pooling is performed with a kernel of [1, 2, 1, 1] and a stride of [1, 2, 1, 1], giving an output size of (1, 4, 1, 200). The essence of the pooling operation is downsampling. We chose max pooling, which takes the maximum value of the features in each neighborhood. It suppresses the shift of the estimated mean caused by network parameter error and extracts feature information better.
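A hedged implementation sketch of Table 1 and Figure 3 in TensorFlow/Keras is given below; the framework choice and the exact placement of batch normalization, leaky ReLU, and spatial dropout inside each block are assumptions, since the text reports only the layer sizes. The intermediate shapes in the comments match those listed above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, n_kernels, kernel_size, dropout=0.5):
    # convolution -> batch normalization -> leaky ReLU -> spatial dropout
    x = layers.Conv2D(n_kernels, kernel_size)(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)
    return layers.SpatialDropout2D(dropout)(x)

def build_mi_eeg_cnn(T=640, N=2, n_classes=4):
    inp = layers.Input(shape=(T, N, 1))        # raw EEG of one electrode pair
    x = conv_block(inp, 25, (11, 1))           # conv 1: temporal  -> (630, 2, 25)
    x = conv_block(x, 25, (1, N))              # conv 2: spatial   -> (630, 1, 25)
    x = layers.MaxPooling2D((3, 1))(x)         # pool 1            -> (210, 1, 25)
    x = conv_block(x, 50, (11, 1))             # conv 3            -> (200, 1, 50)
    x = layers.MaxPooling2D((3, 1))(x)         # pool 2            -> (66, 1, 50)
    x = conv_block(x, 100, (11, 1))            # conv 4            -> (56, 1, 100)
    x = layers.MaxPooling2D((3, 1))(x)         # pool 3            -> (18, 1, 100)
    x = conv_block(x, 200, (11, 1))            # conv 5            -> (8, 1, 200)
    x = layers.MaxPooling2D((2, 1))(x)         # pool 4            -> (4, 1, 200)
    x = layers.Flatten()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)   # FC + softmax
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mi_eeg_cnn()
model.summary()
```

Training would then proceed by calling model.fit on the normalized 640 × 2 samples, with integer labels 0-3 standing for T1-T4.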
After feature extraction, the FC layer is applied to enhance the nonlinear mapping capability of the network. It perceives the global information and aggregates the local features learned by the convolutional layers into global features for classification. Each neuron in this layer is connected to all neurons in the previous layer, and there is no connection between neurons in the same layer. The formula is
$$y_j^{(l)} = f\Big(\sum_{i=1}^{n} w_{ji}^{(l)} x_i^{(l-1)} + b_j^{(l)}\Big),$$
where $n$ is the number of neurons in the previous layer, $l$ is the current layer, $w_{ji}^{(l)}$ is the connection weight between neuron $j$ in this layer and neuron $i$ in the previous layer, $b_j^{(l)}$ is the bias of neuron $j$, and $f$ is the activation function.
The output of the FC layer is passed to a softmax layer containing four neurons $[y_1, y_2, y_3, y_4]$, representing the four categories. It maps the outputs of multiple neurons to the (0, 1) interval, which can be interpreted as multi-class probabilities. The formula is as follows:
$$y_i = \frac{e^{z_i}}{\sum_{k=1}^{4} e^{z_k}}, \quad i = 1, \ldots, 4,$$
where $z_i$ is the input to the $i$-th output neuron. In this paper, all activation functions of the networks adopted the leaky ReLU function:
$$f(x) = \begin{cases} x, & x \ge 0 \\ \alpha x, & x < 0, \end{cases}$$
where $\alpha$ is a small positive slope. We used the Adam algorithm as the optimizer to minimize the loss function and update the weights and biases through backpropagation. It is a stochastic gradient descent (SGD) algorithm with an adaptive learning rate based on the first-order and second-order moments of the gradient average. This method usually speeds up the convergence of the model and is more robust in the presence of noise and sparse gradients. The proposed CNN architecture includes spatial dropout and batch normalization (BN) to improve classification accuracy. Dropout refers to "temporarily discarding" some neuron nodes with a certain probability during the training of a deep network. For any neuron, each training pass is optimized together with a randomly selected set of other neurons. This process weakens the joint adaptability among neurons, reduces the risk of overfitting, and enhances the generalization ability (Srivastava et al., 2014).
The forward propagation formula corresponding to the original network is
$$z_i^{(l+1)} = \mathbf{w}_i^{(l+1)} \mathbf{y}^{(l)} + b_i^{(l+1)}, \qquad y_i^{(l+1)} = f\big(z_i^{(l+1)}\big).$$
After applying dropout, the forward propagation formula becomes
$$r_j^{(l)} \sim \mathrm{Bernoulli}(p), \qquad \tilde{\mathbf{y}}^{(l)} = \mathbf{r}^{(l)} \odot \mathbf{y}^{(l)}, \qquad z_i^{(l+1)} = \mathbf{w}_i^{(l+1)} \tilde{\mathbf{y}}^{(l)} + b_i^{(l+1)}, \qquad y_i^{(l+1)} = f\big(z_i^{(l+1)}\big).$$
The Bernoulli function above randomly generates a vector of 0s and 1s with keep probability $p$, indicating whether each neuron is retained or discarded. If the value is 0, the neuron does not compute gradients or participate in subsequent error propagation. In this paper, we used a 50% dropout rate to reduce overfitting. Spatial dropout is implemented after the convolutional layer: deleting an entire feature map rather than a single element helps improve the independence between feature maps. The essence of the neural network training process is learning the data distribution. If the distributions of the training data and the test data differ, the generalization ability of the network is greatly reduced. Therefore, we need to normalize all input data before training starts.
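The difference between element-wise dropout and the spatial dropout used here can be seen in a few lines of NumPy; the feature-map shape and keep probability below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.5
feature_maps = rng.standard_normal((210, 1, 25))   # (time, space, channels) after pool 1

# Element-wise dropout: each individual activation is kept with probability p_keep.
elem_mask = rng.binomial(1, p_keep, feature_maps.shape)
elem_dropped = feature_maps * elem_mask

# Spatial dropout: one Bernoulli draw per feature map (channel); a dropped map
# is zeroed as a whole, which keeps the remaining maps independent of it.
channel_mask = rng.binomial(1, p_keep, (1, 1, feature_maps.shape[-1]))
spatial_dropped = feature_maps * channel_mask

print(int(channel_mask.sum()), "of", feature_maps.shape[-1], "feature maps kept")
```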
The batch normalization (Dose et al., 2018;Wang J. et al., 2019) method adds, for each batch of data, a normalization step (zero mean, unit standard deviation) before the input of each network layer. That is, for any neuron in this layer (assuming the $k$-th dimension), $x^{(k)}$ is transformed by the following formula:
$$\hat{x}^{(k)} = \frac{x^{(k)} - E[x^{(k)}]}{\sqrt{\mathrm{Var}[x^{(k)}]}},$$
where $x^{(k)}$ is the original input of the $k$-th neuron in this layer, $E[x^{(k)}]$ is the mean of the input data of the $k$-th neuron, and $\mathrm{Var}[x^{(k)}]$ is the variance of the data of the $k$-th neuron. Batch normalization imposes additional constraints on the distribution of the data, which enhances the generalization ability of the model. The normalized input distribution is forced to zero mean and unit standard deviation. To allow the original data distribution to be restored, transformation reconstruction with learnable parameters $\gamma$ and $\beta$ is introduced in the specific implementation:
$$y^{(k)} = \gamma^{(k)} \hat{x}^{(k)} + \beta^{(k)},$$
where $\gamma^{(k)}$ and $\beta^{(k)}$ are the learnable scale and shift of the input data distribution, respectively. In the batch normalization operation, $\gamma$ and $\beta$ become learnable parameters of this layer, decoupled from the parameters of the previous network layer. Therefore, it is more conducive to the optimization process and improves the generalization ability of the model. The complete forward pass of a batch-normalized network layer over a mini-batch $\{x_1, \ldots, x_m\}$ is as follows:
$$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \quad \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \quad \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \quad y_i = \gamma \hat{x}_i + \beta.$$
In this paper, the global averaged accuracy and the ROC curve were used to evaluate the classification model. The global averaged accuracy is the ratio of the number of correctly classified samples to the total number of samples. The area under the ROC curve is denoted AUC and ranges from 0.5 to 1; the closer the AUC is to 1.0, the more reliable the method. The performance of the model on the recognition of the four types of MI was measured by precision, recall, and F-score.
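A sketch of how these evaluation quantities could be computed with scikit-learn is shown below; the label and probability arrays are random placeholders standing in for the CNN's softmax outputs, and scikit-learn is an assumed tooling choice rather than something the text specifies.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, precision_recall_fscore_support

rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=800)                   # four MI classes (T1..T4)
y_prob = rng.dirichlet(np.ones(4), size=800)            # stand-in for softmax outputs
y_pred = y_prob.argmax(axis=1)

acc = accuracy_score(y_true, y_pred)                    # global averaged accuracy
auc = roc_auc_score(y_true, y_prob, multi_class="ovr")  # area under the ROC curve
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=None)

print(f"accuracy={acc:.3f}  AUC={auc:.3f}")
print("per-class precision:", np.round(prec, 3))
print("per-class recall:   ", np.round(rec, 3))
print("per-class F-score:  ", np.round(f1, 3))
```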
RESULTS
In this paper, 10-fold cross validation was carried out on the dataset. Ninety percent of the dataset was used as the training set to train the CNN model and verify its robustness to data changes, and ten percent was used as the test set to verify the validity of the model. The training set and test set were normalized and then fed into the CNN.
Accuracy of Electrode Pairs
On the 10-subject dataset, we conducted 9 groups of single-pair experiments to test their global averaged accuracy and the accuracy of the four MI tasks (T1, T2, T3, T4). Each group was tested 10 times and the results averaged; this average is taken as the global averaged accuracy of the electrode pair, as shown in Table 2. The global averaged accuracy of the test set is the average accuracy of the 9 electrode pairs. In Table 2, the upper 2 rows are the accuracies of the training set and test set, and the following 9 rows are the results of each pair of electrodes. The highest single-pair global averaged accuracy was obtained by FC3-FC4, reaching 98.61%, and the accuracies of the four MI tasks are also relatively high, at 99.00, 97.27, 98.03, and 100.00%, respectively. FC5-FC6 has the lowest global averaged accuracy of 88.73%, with the four MI accuracies at 94.86, 94.76, 94.69, and 76.80%, respectively. The first three classification accuracies are relatively high, while the performance on the T4 (both feet) MI task is only moderate.
Accuracy Within Individual Subjects
To obtain the global averaged accuracy within individual subjects, we divided all trials of a specific subject into 10 parts, nine for training and one for testing. This ensured that no blocks of data were split across the training and test sets.
From Figure 4A, it can be seen that S7 has the highest global averaged accuracy and S1 the lowest. The four MI accuracies of each individual subject are shown in Figure 4B: T1 achieves its highest accuracy on S5, T2 on S8, T3 on S8, and T4 on S10. In addition, T1 has its lowest accuracy on S8, T2 on S5, T3 on S10, and T4 on S1.
Classification on Different Dataset
Our proposed method was also trained and evaluated on different numbers of participants. Ten subjects (7,560 samples), 20 subjects (15,120 samples), 60 subjects (45,360 samples), and 100 subjects (75,600 samples) from the Physionet dataset were used.
The loss function curves for different numbers of subjects are detailed in Figure 5, from which the convergence of the models can be observed. The abscissa represents the number of iterations, and the ordinate represents the loss value. Figure 5A shows the loss curves of the training set. From the comparison of the four loss curves, it can be observed that the loss value decreases as the number of iterations increases and then remains basically stable, achieving the best training effect of the model. At this point, their training losses are almost 0, and the trained models are the optimal classification models. Figure 5B shows the loss curves of the test set on the optimal models, which decrease to about 0.04 as the number of iterations increases. Therefore, the models converge on both the training set and the test set.
Four datasets were used for model training, and four classification models were obtained. Table 3 shows the global averaged accuracy of the CNN models on the different datasets. The accuracies of all training sets are 100%, while the accuracies of the four test sets differ. Among them, the accuracy for 20 subjects is 97.28%, and the corresponding model has the best classification performance. The ROC curves are given in Figure 5C; the AUCs for 10, 60, and 100 subjects are 0.992, 0.995, and 0.993, respectively, and the AUC for 20 subjects stands out at 0.996, so its corresponding model has the best classification performance.
The confusion matrices of the four test sets illustrate their group-level classification results, as shown in Figure 6. The numbers on the diagonal represent the percentage of correct classification, and the other numbers represent the percentage of misclassification. The results show that the confusion matrix of 20 subjects performs best, with correct classification rates for T1, T2, T3, and T4 of 98.29, 97.28, 98.67, and 91.92%, respectively. The classification results of the four types of MI by the CNN were measured by precision, recall, and F-score. We compared the classification effect of the different test sets on left fist, right fist, both fists, and both feet. A glance at Figure 7 shows that the models of the different test sets all achieve good classification performance.
To show quantitative results for using the models on subjects not included in the training sets, we conducted the corresponding experiments on each dataset. We selected the data of subjects who had never participated in training, namely subjects S101, S105, and S109, as the test set. The test accuracy for this subject-independent case is given in Table 4. The highest test accuracy is 73.80%, achieved by S101 on the model trained on the 100-subject dataset, and the lowest is 63.84%, achieved by S105 on the model trained on the 10-subject dataset. For a single subject, we can see that better classification performance can be obtained with larger training datasets.
Electrode Pair Accuracy Comparison
On the 10-subject dataset, we carried out 9 groups of single-pair experiments to test their global averaged accuracy. The experiments used 10-fold cross validation; each group was tested 10 times, and the average value was taken as the global averaged accuracy of each pair.
The placements of the electrode pairs are shown in Figure 2A; the two electrodes of each pair are symmetrical about the Z sagittal line (Fpz-Cz-Iz). From the global averaged accuracy of each pair shown in Table 2, it can be roughly inferred that the accuracy of electrode pairs on the C coronal line is higher than that on the CP and FC coronal lines. Moreover, the closer an electrode pair is to the Z sagittal line, the higher its accuracy, and vice versa. Due to physiological and psychological differences between individuals, the spatial origin and amplitude changes of brain signals show subject-specific patterns, which causes high inter-individual variability. This affects the performance of the model and of the electrode pairs to varying degrees, so the accuracy of each pair does not fully follow these rules. As shown in Table 2, the accuracy of FC3-FC4 is higher than that of FC1-FC2 and C3-C4. In the design of BCI systems, a large number of experiments should be carried out on the database established for the intended users, so as to avoid selecting electrode pairs with low accuracy; this is also the focus of our next work.
Classification Comparison on Individual Subject
In this paper, 10-fold cross validation was carried out on the dataset. On the 10-subject dataset, we conducted 10 groups of individual-subject experiments, each repeated 10 times. We divided all trials of a specific subject evenly into 10 parts, took one of them in turn for testing, and used the rest for training. The average of the 10 results is the global averaged accuracy, which reduces the randomness introduced by data partitioning and helps improve the stability of the model. As indicated in Figure 4, the average accuracy over the 10 subjects is 95.41%. S7 achieved the best classification effect, with an average accuracy of 96.83%; its 4 MI accuracies are 97.4% (T1), 97.5% (T2), 96.4% (T3), and 95.9% (T4), respectively. The average accuracy of S1 is the lowest at 93.08%, with MI accuracies of 97.1% (T1), 94.2% (T2), 94.5% (T3), and 86.3% (T4), respectively. The accuracy of T4 is the lowest, indicating that S1's classification of T4, that is, both feet, is the worst. The average accuracies of the 10 subjects on the 4 MI tasks are 95.88% (T1), 95.36% (T2), 96.36% (T3), and 94.05% (T4). Among the four types of MI tasks, the best is both fists and the worst is both feet.
Model Comparison
In the CNN model construction based on the dataset of 20 subjects, we used spatial dropout and BN to reduce the risk of overfitting. Table 5 shows the accuracy comparison of the CNN models under anti-overfitting measures, including global averaged accuracy and the accuracy of each of the four tasks. The data analysis is described in detail later.
From Figure 8A, we can see that the AUC of the CNN model stands out at 0.996, followed by 0.991 without dropout, 0.973 without BN, and 0.951 without dropout and BN.
According to Figure 8B, the order in which the models reach a steady state, from fast to slow, is: the model without dropout, our proposed CNN model, the model without dropout and BN, and the model without BN. The curve of our proposed model reached a stable state after 500 iterations and achieved the highest accuracy; at this point, the accuracies of the model without dropout and BN and the model without BN are still slowly increasing. The model without dropout reached a stable state fastest. However, without the dropout operation the model is prone to overfitting, resulting in sharp curve burrs and unstable performance throughout the iterations. Observing the smoothness of the four curves, our proposed model has the smoothest curve, the fewest burrs, and the most stable performance. We then refer to the values in Table 5 to compare the global averaged accuracy. In the final stable state, the accuracy of our proposed model is the highest at 97.28%, followed by 95.30% without dropout, 89.74% without BN, and 83.92% without dropout and BN. The proposed model is 1.98% higher than the model without dropout, 7.54% higher than the model without BN, and 13.36% higher than the model without dropout and BN. Figures 8C-F illustrate the accuracy of our proposed model on the four tasks in detail. The growth trends and performance of the T1, T2, and T3 curves are similar to those in Figure 8B. The main difference is the poorer performance on the T4 task, that is, both feet. Even the proposed model has uneven curves throughout the iterations, fluctuating around a certain value; the curves of the other models perform worse, especially the model without dropout. This will be the focus of our future research. With reference to the values in Table 5, the accuracy of our proposed model on the four tasks (T1, T2, T3, T4) peaked at 98.78, 97.28, 98.13, and 94.71%, respectively. Our model is 1.71% (T1), 2.71% (T2), 1.86% (T3), and 1.67% (T4) higher than the model without dropout; 9.05% (T1), 6.79% (T2), 8.26% (T3), and 5.85% (T4) higher than the model without BN; and 15.89% (T1), 14.13% (T2), 11.46% (T3), and 11.70% (T4) higher than the model without dropout and BN.
Taking the 20-subject dataset as an example, we compared our proposed CNN model, the model without dropout, the model without BN, and the model without dropout and BN in terms of the ROC curve, the global averaged accuracy curve, and the accuracy curves of the four MI tasks. The experimental results illustrate that our proposed CNN model has the smoothest curve, the fewest burrs, and the most stable performance. In general, the use of spatial dropout and BN in the CNN can effectively reduce the risk of overfitting and improve the generalization ability and classification effect of the model.
Classification Comparison
Based on the same database and the same number of MI tasks, our proposed method is compared with other works in Table 6. We can observe that the CNN method is indeed effective for MI classification and can greatly improve the classification accuracy. Among CNN-based methods, compared with Dose et al. (2018) and Karácsony et al. (2019), our work achieved superior performance while using only two electrodes, which greatly reduces the sample size and data dimensionality.
Hou et al. (2019) used multiple electrodes, whereas this work used only two. Hou et al. used the Colin27 template brain for the Physionet database, the boundary element method (BEM) implemented in the OpenMEEG toolbox for a realistic-geometry head model, and a Morlet wavelet approach for feature extraction; their preprocessing is very complicated. In contrast, this work feeds the original data directly into the CNN, without any preprocessing or artificial feature extraction, and can therefore simplify BCI design. Moreover, Hou et al. mainly used a dataset of 10 subjects with an accuracy of 94.54%, while our work achieved 95.76, 97.28, 96.01, and 94.8% on the datasets of 10, 20, 60, and 100 subjects, respectively. This work therefore uses larger datasets and achieves higher global averaged accuracy.
CONCLUSION
In this paper, we proposed a CNN architecture and designed its network structure and parameters. Without any preprocessing or artificial feature extraction, our model can classify the raw MI-EEG signals of symmetric left and right brain electrodes.
Using the Physionet database as the data source, the model was trained and tested on 10, 20, 60, and 100 subjects, respectively. The experimental results indicate that our models converge on both the training set and the test set. The method reaches high accuracy on group-level classification, with 95.76% accuracy for 10 subjects, 97.28% for 20 subjects, 96.01% for 60 subjects, and 94.8% for 100 subjects. In future work, we will build a real-time EEG signal acquisition system and use a self-built database to verify the validity and robustness of the proposed method.
DATA AVAILABILITY STATEMENT
All datasets presented in this study are included in the article/supplementary material.
AUTHOR CONTRIBUTIONS
XL was responsible for neural network design and paper writing. ZY was in charge of the design of the overall framework of the paper. TC was in charge of reference reading. FW was in charge of data processing. YH was responsible for the accuracy of the grammar of the article. All authors contributed to the article and approved the submitted version.
"Computer Science",
"Engineering",
"Medicine"
] |
Set-Blocked Clause and Extended Set-Blocked Clause in First-Order Logic
Due to the scale and complexity of first-order formulas, simplifications play a very important role in first-order theorem proving, in which the removal of clauses and literals identified as redundant is a significant component. In this paper, four types of clauses with the local redundancy property are proposed, called set-blocked clause (SBC), extended set-blocked clause (E-SBC), equality-set-blocked clause (ESBC), and extended equality-set-blocked clause (E-ESBC). The former two are redundant clauses in first-order formulas without equality, while the latter two are redundant clauses in first-order formulas with equality. In addition, to establish the correctness of the four proposals, the redundancy of the four kinds of clauses is proved, guaranteeing that eliminating clauses of the four forms has no effect on the satisfiability or unsatisfiability of the original formulas. Finally, the effectiveness and confluence properties of the corresponding clause elimination methods are analyzed and compared.
Introduction
Simplifications have been widely recognized as an indispensable component of both propositional SAT solving and first-order theorem proving. Clause elimination has always been crucial among simplification techniques. In 2010, blocked clause elimination in propositional logic was proposed as a preprocessing technique for reducing the size of formulas and speeding up SAT solvers [1]. After that, further research related to blocked clauses was published. Hidden blocked clauses and asymmetric blocked clauses were created in the same year by combining hidden literal addition and asymmetric literal addition with blocked clauses [2,3]. Then, abcdSAT [4], based on blocked clause decomposition [5], won the SAT-Race 2015 competition. In 2016, the notion of the blocked clause was widened further: super-blocked clauses and set-blocked clauses in propositional logic were introduced, of which the super-blocked clause has the most general local redundancy property [6]. After that, blocked clauses were successfully lifted to first-order logic in 2017 [7], which, as a preprocessing technique of Vampire, boosted the efficiency of Vampire [8].
In this paper, we further generalize the concept of the blocked clause in first-order logic. The set-blocked clause (SBC) and extended set-blocked clause (E-SBC) in first-order logic without equality are presented, which are generalizations of the blocked clause in first-order logic. The evolution is that an SBC is blocked by a subset of its literals, while a blocked clause (BC) is blocked by one of its literals. Similarly, an E-SBC can be considered a further generalization of SBC. Informally, if a clause C is an E-SBC, then for any assignment β over the external ground atoms of C there exists a subset S_β of literals of C such that C is an SBC upon S_β in F|β. Assignments over the external ground atoms of C may transform the resolution environment of C into one different from the original, and the subset S_β may vary with the assignment β, which makes the requirement for a clause to be an extended set-blocked clause quite flexible compared with the requirement for an SBC. To guarantee that those clauses can be eliminated from formulas without influencing satisfiability or unsatisfiability, the redundancy of the two categories of clauses is proved. The proof is not given directly in the setting of first-order logic but is lowered to propositional logic and connected with a variant of Herbrand's Theorem: a first-order formula F is satisfiable if and only if every finite set of ground instances of F is propositionally satisfiable. After that, their abilities to simplify formulas are contrasted with blocked clause elimination (BCE) by comparing their effectiveness. Furthermore, set-blocked clause elimination (SBCE) and extended set-blocked clause elimination (E-SBCE) have the same confluence properties as blocked clause elimination.
The paper is relevant not only to first-order formulas without equality but also to first-order formulas with equality. Because of the peculiarities of equality, mistakes can arise if clauses are removed according to the definitions of SBC and E-SBC. The solution is to combine SBC and E-SBC with flattening and flat resolution, developing two new categories of clauses: the equality-set-blocked clause (ESBC) and the extended equality-set-blocked clause (E-ESBC). The combination removes the "confusion" caused by the characteristics of equality, namely that different literals with distinct terms may have the same truth values under an assignment. Similarly, the redundancy, effectiveness, and confluence properties of ESBC and E-ESBC are also demonstrated, analyzed, and compared.
The main contributions of the paper are: (1) establishing the concepts of SBC and E-SBC in formulas without equality, and of ESBC and E-ESBC in formulas with equality; (2) proving that all four types of clauses are redundant; (3) contrasting the effectiveness of the four corresponding clause elimination methods; (4) illustrating the confluence properties of the four clause elimination methods.
The rest of the paper is organized as follows. After the necessary preliminaries are introduced in Section 2, we propose SBC and E-SBC and prove the redundancy of the two types of clauses in Section 3. In Section 4, we present ESBC and E-ESBC, show how they deal with formulas with equality, and prove the redundancy of these two types of clauses. Finally, we compare and analyze the effectiveness and confluence properties of the clause elimination methods in Section 5.
Preliminaries
In this section, we introduce some necessary notations, definitions, and theorems for the paper.
Here we only consider first-order formulas in conjunctive normal form (CNF). A formula is a conjunction of clauses. A clause is a disjunction of literals. A literal is an atom or the negation of an atom. An atom is made up of a predicate symbol and terms. Terms can be mixtures of variables, constants, and functions, or just a single variable, constant, or function. Variables are usually represented by x, y, ..., constants by a, b, c, ..., and functions by f, g, h, ... [9].
A propositional assignment is a mapping from ground atoms to the truth values 1 (true) and 0 (false). Accordingly, a set V of ground clauses is propositionally satisfiable if there exists a propositional assignment that assigns every ground clause in V the truth value 1. A clause is valid when it is true under every assignment; a tautology is one case of a valid clause. Let F be a formula and α an assignment; F|α is defined as {C | C ∈ F and α does not satisfy C}. Two formulas F and F′ are satisfiability equivalent if they are either both satisfiable or both unsatisfiable. A clause C is redundant w.r.t. F if F and F\{C} are satisfiability equivalent. A substitution is a mapping from variables to terms. A ground substitution is a mapping whose range consists only of ground terms. For a literal, clause, or formula F, atom{F} denotes the atoms in F and Gatom{F} denotes the ground atoms in F.
A clause C is a blocked clause upon L ∈ C in a first-order formula F without equality if all its L-resolvents are tautologies. The L-resolvent is defined as follows [7]: given clauses C = L ∨ C_1 and D = N_1 ∨ ... ∨ N_m ∨ D_1 such that there exists a substitution σ which can unify L, ¬N_1, ..., ¬N_m, the clause (C_1 ∨ D_1)σ is called the L-resolvent of C and D.
For a clause C in a formula F, the resolution environment env_F(C) of C in a first-order formula F without equality is the set of all clauses in F\{C} that can be resolved with C: env_F(C) = {C′ ∈ F\{C} | there exists ¬L′ in C′ such that L ∈ C and L and L′ can be unified}. The atoms occurring in env_F(C) but not appearing in C and not unifiable with any atom of C are called the external atoms of C: ext_F(C) = {A ∈ atom{env_F(C)} | A ∉ atom{C} and there is no unification between A and any atom in atom{C}}. The set of external ground atoms is defined as extG_F(C).
Since first-order CNF formulas with equality will also be discussed in this paper, the equality axioms ε_L are introduced here [10]. Flattening and flat resolution [11] are constituent parts of E-SBC and E-ESBC; their definitions follow [7]. A clause C is an equality-blocked clause (EBC) upon L ∈ C in a formula F with equality if all its flat L-resolvents are valid.
The resolution environment of a clause C in a formula F with equality differs from the definition in formulas without equality. Because of equality, the resolution environment of C cannot consist only of the clauses containing literals that can be resolved with some literal in C, but must be extended to the clauses containing literals that have the same predicate symbol but opposite polarity as some literal in C: env_F^E(C) = {C′ ∈ F\{C} | there exists ¬L′ ∈ C′ such that L ∈ C and L and L′ have the same predicate symbol}.
Meanwhile, the external atoms in env_F^E(C) of C in first-order formulas with equality are also distinguished from the external atoms in first-order formulas without equality: ext_F^E(C) = {A ∈ atom{env_F^E(C)} | A ∉ atom{C} and A has no predicate symbol in common with atom{C}}, and the set of external ground atoms is defined as extG_F^E(C).
In a formula F with equality, when flipping the truth value of a ground literal L = P(s_1, ..., s_n) under a propositional assignment β, one should flip the truth values of all ground literals of the form L′ = P(t_1, ..., t_n) such that β(s_i = t_i) = 1 for all 1 ≤ i ≤ n, rather than simply flipping the truth value of L [7]. Definition 4. Given a propositional assignment β and a ground literal L = P(t_1, ..., t_n) whose predicate symbol is not the equality symbol, the propositional assignment β′ obtained from β by equivalence flipping the truth value of L = P(t_1, ..., t_n) is defined accordingly: all ground literals that are equal to L under β, in the sense just described, are flipped together. The following are two variants of Herbrand's Theorem that we adopt in the rest of the paper: one is suitable for first-order CNF formulas without equality, while the other is suitable for first-order CNF formulas with equality [10].
Theorem 1. For a first-order formula F without the equality predicate, F is satisfiable if and only if every finite set of ground instances of clauses in F is propositionally satisfiable.
Theorem 2. For a first-order formula F with the equality predicate, F is satisfiable if and only if every finite set of ground instances of clauses in F ∪ ε_L is propositionally satisfiable.
Set-Blocked Clause and Extended Set-Blocked Clause in First-Order Logic without Equality
In this section, we present the set-blocked clause and the extended set-blocked clause in formulas without equality. First, we give the definition of the L_i^S-resolvent, which differs from the definition of the L-resolvent of a clause C in [7]: the former includes substituted literals from the negations of some literals of the clause C, while the latter does not. Definition 5. Let C and D be two clauses with no variables in common, let S ⊆ C and L_i ∈ S, and let N_1, ..., N_m be literals in D such that there exists a most general unifier σ of L_i, ¬N_1, ..., ¬N_m. Writing ¬S for the set of negations of the literals in S, the clause ((C\S) ∨ (¬S\{¬L_i}) ∨ (D\{N_1, ..., N_m}))σ is called the L_i^S-resolvent of C and D.
Next is the definition of the set-blocked clause, a generalization of the blocked clause obtained by extending one blocking literal of a clause C to multiple blocking literals. Definition 6. A clause C is a set-blocked clause (SBC) in a formula F if there exists a set S = {L_1, L_2, ..., L_n} ⊆ C such that L_i (1 ≤ i ≤ n) and L_j (1 ≤ j ≤ n) cannot be resolved with each other and all the L_i^S-resolvents (1 ≤ i ≤ n) of C are tautologies.
which is a tautology, and the Q(a)^{Q(a),P(b)}-resolvent is R(c) ∨ P(b) ∨ ¬P(b), which is also a tautology. Then all the L_i^S-resolvents (1 ≤ i ≤ n) are tautologies. Therefore, C is set-blocked upon S w.r.t. F.
In Example 2, the clause C is an SBC; however, C is not blocked upon any single literal, whether Q(a), P(b), or R(c).
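As an informal illustration of Definition 6, the following Python sketch checks the set-blocked condition at the ground (propositional) level, where literals are plain strings and unification is not needed; it is a toy restatement of the definition, not the first-order procedure of this paper.

```python
def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def is_tautology(clause):
    return any(neg(l) in clause for l in clause)

def is_set_blocked(C, S, F):
    # Ground-level reading of Definition 6: S is a subset of C whose literals
    # do not clash with each other, and every resolvent of C upon a literal of
    # S (keeping C\S, the negations of the other literals of S, and the rest
    # of the partner clause) is a tautology.
    if not S or not S <= C:
        return False
    if any(neg(l) in S for l in S):
        return False
    neg_S = {neg(l) for l in S}
    for D in F:
        if D == C:
            continue
        for L in S:
            if neg(L) in D:                       # D can be resolved with C upon L
                resolvent = (C - S) | (neg_S - {neg(L)}) | (D - {neg(L)})
                if not is_tautology(resolvent):
                    return False
    return True

C = frozenset({"q", "p", "r"})
F = [C, frozenset({"-q", "p"}), frozenset({"-p", "q"})]
print(is_set_blocked(C, {"q", "p"}, F))           # True: every resolvent upon {q, p} is a tautology
```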
To justify that removing SBCs from a formula has no impact on its satisfiability or unsatisfiability, the redundancy property of SBC is proved next. Lemma 1. Let a clause C be set-blocked upon S = {L_1, L_2, ..., L_n} ⊆ C in a formula F, and let α be an assignment that propositionally satisfies all the ground instances of clauses in F\{C} but falsifies a ground instance Cλ of C. Then the assignment α′, obtained from α by flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ, still satisfies all the ground instances of clauses in F\{C}.
Proof. Let Dτ be a ground instance of a clause D in F\{C}. Dτ may be falsified by the assignment α′ obtained by flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ only if Dτ contains a set of literals {¬L_{i1}λ, ¬L_{i2}λ, ..., ¬L_{im}λ} ⊆ {¬L_1λ, ¬L_2λ, ..., ¬L_nλ}. Without loss of generality, assume that Dτ contains the literal ¬L_1λ (if Dτ contains more than one literal from {¬L_1λ, ¬L_2λ, ..., ¬L_nλ}, the proof is the same), and let N_1, ..., N_k be all the literals in D such that N_iτ = ¬L_1λ for 1 ≤ i ≤ k. Then the substitution λτ (note that C and D are variable disjoint by assumption) is a unifier of L_1, ¬N_1, ..., ¬N_k, so there exists a most general unifier σ and a corresponding L_1^S-resolvent of C and D, which is a tautology because the clause C is a set-blocked clause.
As the substitution σ is the most general unifier, there exists a substitution θ such that σθ = λτ. We can obtain the following equation: (C\S)λ ∨ (D\{N_1, ..., N_k})τ ∨ (¬S\{¬L_1})λ = ((C\S) ∨ (D\{N_1, ..., N_k}) ∨ (¬S\{¬L_1}))σθ, which is a ground instance of the L_1^S-resolvent of C and D, so it is a tautology whose truth value is 1. Because α falsifies Cλ, and α′ is obtained by flipping the truth values of L_1λ, L_2λ, ..., L_nλ to true while the truth values of all other ground literals are kept, α′ falsifies (C\S)λ and (¬S\{¬L_1})λ. Meanwhile, since the truth value of (C\S)λ ∨ (D\{N_1, ..., N_k})τ ∨ (¬S\{¬L_1})λ is 1, the truth value of (D\{N_1, ..., N_k})τ must be 1. Therefore, α′ satisfies (D\{N_1, ..., N_k})τ, and of course α′ satisfies Dτ. Hence, flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ does not falsify any ground instance of a clause in F\{C}, so α′ satisfies all the ground instances of clauses in F\{C}.
Lemma 1 demonstrates that flipping the truth values of the ground literals in S has no influence on the truth values of ground instances of clauses in F\{C}. However, it may falsify other ground instances of C itself; this causes no harm, as the following argument shows. Suppose that flipping the truth values of the ground literals of the set S_1, a ground instance of S in a ground instance C_1 of C, falsifies another ground instance C_2 of C. Then there are no ground literals common to S_1 and S_2, where S_2 ⊆ C_2 is the ground instance of S in C_2. Since S_2 contains ground literals entirely different from those of S_1, and ground literals in S_2 cannot be resolved with ground literals in S_1 by the definition of SBC, flipping the truth values of the ground literals in S_2 does not falsify C_1. We can conclude that both C_1 and C_2 can be made true by flipping the truth values of the ground literals in S_1 and S_2 to true.
Assume that F\{C} is satisfiable. Let F′ and F_C be finite sets of ground instances of clauses in F\{C} and {C}, respectively. Since F\{C} is satisfiable, there exists a propositional assignment α that satisfies F′. Assume that it falsifies some ground instances {C_1, C_2, ..., C_k} of C contained in F_C. Flipping the truth values of the ground literals of the ground instance S_i of S in C_i (1 ≤ i ≤ k) has no influence on the ground instances in F′ according to Lemma 1, but it may falsify some other ground instances in F_C. Nevertheless, since F_C is finite, we can satisfy those ground instances of F_C by successively flipping the truth values of the ground literals of the ground instances of S in the falsified ground instances of C. Eventually, all the ground instances in F_C are satisfied. According to Theorem 1, F is satisfiable, which means F and F\{C} are satisfiability equivalent. Therefore, the clause C is redundant w.r.t. F.
Apparently, the redundancy property of the set-blocked clause is local: whether a clause is an SBC can be assessed by considering only its resolution environment rather than the whole formula. If a clause C is an SBC in a formula F and C has the same resolution environment in another formula F′, then C is also an SBC in F′.
Next, we introduce the extended set-blocked clause (E-SBC), a generalization of SBC. For an E-SBC, another factor, the external ground atoms, is taken into consideration. The external ground atoms of a clause C are ground atoms in the resolution environment of C that do not occur in C and cannot be unified with atoms in atom{C}, which means that changes in the truth values of the external ground atoms and changes in the truth values of the literals of C are independent. In other words, if a clause C′ is true because of the truth values of several external ground atoms of C occurring in C′, then C′ will not be falsified by any change of the truth values of the literals of C.
Definition 7.
A clause C is an extended set-blocked clause (E-SBC) in a formula F if, for every assignment β over the external ground atoms extG_F(C), there exists a subset S_β ⊆ C such that C is set-blocked upon S_β in F|β. Apparently, the clause C is not a set-blocked clause, because not all the L_i^S-resolvents (1 ≤ i ≤ n) are tautologies, no matter which subset of C is chosen as S. The external ground atoms of C are extG_F(C) = {R(a)}. Under the assignment λ, the clause containing ¬R(a) is satisfied, λ(... ∨ ¬R(a)) = 1, and is therefore not contained in F|λ. We only need to consider ¬P(a) ∨ Q(a) ∨ T(z) ∨ R(a) and ¬Q(f(y)) ∨ P(f(y)) as the resolution environment of C in F|λ. If we choose S = {P(x), Q(x)}, the P(x)^S-resolvent Q(a) ∨ T(x) ∨ R(a) ∨ ¬Q(a) and the Q(x)^S-resolvent ¬P(f(y)) ∨ P(f(y)) are both tautologies, obtained by resolving C with ¬P(a) ∨ Q(a) ∨ T(z) ∨ R(a) and ¬Q(f(y)) ∨ P(f(y)), respectively. Therefore, the clause C is set-blocked upon S = {P(x), Q(x)} in F|λ. Because α and λ cover all the possible assignments over R(a), we can conclude that C is set-blocked in F|β for every assignment β over the external ground atoms extG_F(C). Therefore, the clause C is an E-SBC.
When there are no external ground atoms, evaluating whether a clause C is an E-SBC amounts to evaluating whether C is an SBC: with no external ground atoms there is no way to influence the resolution environment of C, so F|β is equal to F for every assignment β. Lemma 2. Let a clause C be an E-SBC in a formula F, and let α be an assignment that propositionally satisfies all the ground instances of clauses in F\{C} but falsifies a ground instance Cλ of C. Then there exists a subset Sλ = {L_1λ, L_2λ, ..., L_nλ} of Cλ such that the assignment α′, obtained from α by flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ, still satisfies all the ground instances of clauses in F\{C}.
Proof. Since flipping the truth values of ground literals of C can only affect the truth values of ground clauses in the resolution environment of C, we only need to consider whether ground instances in the resolution environment of C are falsified by flipping the truth values of ground literals of C. Two cases are analyzed. Case 1: In the resolution environment env_F(C) of C, there are no external ground atoms. Since C is an E-SBC and there are no external ground atoms in its resolution environment, C is set-blocked in the formula F. Then there exists a subset Sλ = {L_1λ, L_2λ, ..., L_nλ} of Cλ such that the assignment α′, obtained from α by flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ, still satisfies all the ground instances of clauses in F\{C}, according to Lemma 1.
Case 2: In the resolution environment env_F(C) of C, there exist external ground atoms {A_1, A_2, ..., A_m}. Since α is an assignment that covers all the ground instances of clauses in F\{C}, it also assigns truth values to those external ground atoms {A_1, A_2, ..., A_m}. Assume that the assignment α to the external ground atoms {A_1, A_2, ..., A_m} makes the clauses {C_1, C_2, ..., C_k} in the resolution environment of C true. Since the clause C is extended set-blocked in the formula F, there exists a subset S = {L_1, L_2, ..., L_n} of C such that C is set-blocked upon S in F|α, which means C is set-blocked upon S in the formula F\{C_1, C_2, ..., C_k}. Then flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ does not falsify any ground instance of F\{C, C_1, C_2, ..., C_k}. Furthermore, α already satisfies all the ground instances of {C_1, C_2, ..., C_k} through its assignment to the external ground atoms {A_1, A_2, ..., A_m}, so flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ does not falsify any ground instance of {C_1, C_2, ..., C_k}. Hence, flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ does not falsify any ground instance of F\{C}. Therefore, the assignment α′, obtained from α by flipping the truth values of the ground literals L_1λ, L_2λ, ..., L_nλ in Sλ, still satisfies all the ground instances of clauses in F\{C}.
Equality-Set-Blocked Clause and Extended Equality-Set-Blocked Clause in First-Order Logic Formulas with Equality
In the last section, SBC and E-SBC in formulas without equality were discussed. In fact, the notions of SBC and E-SBC can only be adopted in first-order formulas without equality; if they are used in first-order formulas with equality, some clauses will be removed mistakenly. A counter-example is given as follows. Example 5. Let C = R(a) ∨ P(a) and F = {R(a) ∨ P(a), ¬R(b), ¬P(b), a = b}. According to the definition of SBC in Section 3, the clause C is trivially set-blocked upon {R(a), P(a)}, because no clause in F\{C} can be directly resolved with C upon {R(a), P(a)}, which would mean that C can be removed from F without influencing satisfiability or unsatisfiability. However, F is unsatisfiable while F\{C} is satisfiable.
This situation happens because the definitions of SBC and E-SBC do not take equality into account. For example, the truth values of R(a) and R(b) in Example 5 are always the same even though their forms differ, yet this is not considered in the definitions of SBC and E-SBC. However, if we combine the definitions of SBC and E-SBC in Section 3 with flattening and flat resolution, the problem can be solved.
The definition of the flat L_i^S-resolvent of a clause C in first-order logic with equality is given below. Compared with the flat L-resolvent of a clause C, it deletes a subset of substituted literals originating from literals of C but adds the substituted literals originating from the negations of the other literals of that subset.
Compared with the equality-blocked clause, the equality-set-blocked clause is obtained by generalizing one blocking literal of the equality-blocked clause to multiple blocking literals. Definition 9. For a clause C in a formula F, if there exists a subset S = {L_1, L_2, ..., L_n} ⊆ C such that any two literals L_i and L_j of S do not have the same predicate symbol with opposite polarity, and all the flat L_i^S-resolvents (1 ≤ i ≤ n) of C are valid, then C is an equality-set-blocked clause (ESBC) upon S in F. An ESBC is also redundant in first-order formulas with equality. Before proving its redundancy, a lemma is introduced first. Lemma 3. Suppose a clause C is an ESBC upon S = {L_1, ..., L_m} ⊆ C in a formula F, and let β be a propositional assignment satisfying all the ground instances of the equality axioms and all the ground instances of clauses in F\{C} but falsifying a ground instance Cλ of C. Then the assignment β′, obtained from β by equivalence flipping the truth values of the ground literals L_1λ, ..., L_mλ in the set Sλ, satisfies all the ground instances of clauses in F\{C} and all the ground instances of the equality axioms.
Similarly, there is a generalization of ESBC in first-order logic with equality, obtained by adding a new factor, the assignments over the external ground atoms. Definition 10. A clause C is an extended equality-set-blocked clause (E-ESBC) in a formula F if, for every assignment β over the external ground atoms extG F E (C), there exists a subset S β ⊆ C such that C is equality-set-blocked upon S β in F|β. (Continuing Example 8 below with the assignment λ: the flat P(a) S λ -resolvents are both valid, and for Q(a) there are no Q(a) S λ -resolvents. As a result, the clause C is equality-set-blocked upon S λ = {P(a), Q(a)} in F|λ. Hence, the clause C is an E-ESBC.) Lemma 4. Suppose a clause C is an E-ESBC in a formula F. Let β be a propositional assignment satisfying all the ground instances of the equality axioms and all the ground instances of clauses in F\{C}, but falsifying a ground instance Cλ of C. Then there exists a subset S = {L 1 , . . ., L n } such that β′, obtained from β by equivalence flipping the truth values of the ground literals L 1 λ, . . ., L n λ in the set Sλ, satisfies all the ground instances of clauses in F\{C} and all the ground instances of the equality axioms.
Proof.We only consider whether the ground instances of clauses will be falsified in the resolution environment of C. It is analyzed in two cases.
Case 1: In the resolution environment env F E (C) of C, there are no external ground atoms. Since there are no external ground atoms in the resolution environment of C and C is an extended equality-set-blocked clause in the formula F, C is equality-set-blocked in the formula F. Because C is equality-set-blocked in F, by Lemma 3 there exists a subset Sλ = {L 1 λ, L 2 λ, . . ., L n λ} of Cλ such that the assignment β′, obtained from β by equivalence flipping the truth values of the ground literals L 1 λ, L 2 λ, . . ., L n λ in Sλ, still satisfies all the ground instances of F\{C} and all the ground instances of the equality axioms.
Case 2: In the resolution environment env F E (C) of C, there exist external ground atoms {A 1 , A 2 , . . ., A m }. Since β is an assignment that covers all the ground instances of clauses in F\{C}, it also assigns truth values to those external ground atoms {A 1 , A 2 , . . ., A m }. Assume that this assignment to the external ground atoms makes the clauses {C 1 , C 2 , . . ., C k } in the resolution environment of C true. Therefore, there exists a subset S = {L 1 , L 2 , . . ., L n } of C such that C is equality-set-blocked upon S in the formula F\{C 1 , C 2 , . . ., C k }. Now that C is equality-set-blocked upon S in the formula F\{C 1 , C 2 , . . ., C k }, equivalence flipping the truth values of the ground literals L 1 λ, L 2 λ, . . ., L n λ in Sλ does not falsify any ground instance of F\{C, C 1 , C 2 , . . ., C k }. Furthermore, β satisfies all the ground instances of {C 1 , C 2 , . . ., C k } through its assignment to those external ground atoms {A 1 , A 2 , . . ., A m }; as a result, equivalence flipping the truth values of the ground literals L 1 λ, L 2 λ, . . ., L n λ in Sλ does not falsify any ground instance of {C 1 , C 2 , . . ., C k }. Hence, equivalence flipping the truth values of the ground literals L 1 λ, L 2 λ, . . ., L n λ in Sλ does not falsify any ground instance of F\{C}. Therefore, the assignment β′, obtained from β by equivalence flipping the truth values of the ground literals L 1 λ, L 2 λ, . . ., L n λ in Sλ, still satisfies all the ground instances of F\{C} and all the ground instances of the equality axioms. Theorem 6. If a clause C is an E-ESBC in a formula F, it is redundant w.r.t. F.
Proof. Consider any finite set of ground instances of F\{C} ∪ {C} ∪ ε L and any assignment β that propositionally satisfies the finite ground instances of F\{C} ∪ ε L . By Lemma 4 there exists a subset S ⊆ C such that equivalence flipping the truth values of the ground literals of S in some ground instances of C does not affect the truth values of those finite ground instances of F\{C} ∪ ε L . Besides, since the ground instances of C are finite, the falsified ground instances of C can be made true by successively equivalence flipping the ground literals of the ground instances of S in those falsified ground instances of C. Therefore, F\{C} ∪ {C} ∪ ε L is satisfiable according to Theorem 2. Hence, the clause C is redundant w.r.t. F.
Effectiveness and Confluence Property
In this section, we evaluate the effectiveness and confluence properties of the four corresponding clause elimination methods: set-blocked clause elimination, extended set-blocked clause elimination, equality-set-blocked clause elimination, and extended equality-set-blocked clause elimination.
Comparison of Effectiveness
Effectiveness is a significant evaluation criterion for a clause elimination method. It reflects the capability of a clause elimination method to simplify formulas: the more effective a clause elimination method is, the more clauses it can remove from formulas. Below is the definition of effectiveness [2]. Definition 11. Given two clause elimination methods CE 1 and CE 2 , let CE 1 (F) and CE 2 (F) denote the formulas obtained by applying CE 1 and CE 2 to a CNF formula F. If CE 1 (F) ⊆ CE 2 (F) for every formula F, then CE 1 is said to be at least as effective as CE 2 . If, in addition, there exists a formula F′ with CE 1 (F′) ⊂ CE 2 (F′), then CE 1 is more effective than CE 2 .
First, we compare the effectiveness of blocked clause elimination (BCE) and set-blocked clause elimination (SBCE); see Theorem 7 below. The argument given there generalizes to all the L S -resolvents of C, so all the L S -resolvents of C are tautologies. Hence, the clause C is an SBC upon S = {L}. The converse does not hold: for example, the clause C in Example 2 is an SBC but not a BC. Therefore, we can conclude that SBCE is more effective than BCE.
Theorem 8. Extended set-blocked clause elimination (E-SBCE) is more effective than set-blocked clause elimination (SBCE). Proof. If a clause C is a set-blocked clause upon S = {L 1 , L 2 , . . ., L n } ⊆ C in a formula F, it must be an extended set-blocked clause. We analyze two cases.
Case 1: There are no external ground atoms of the clause C. When there are no external ground atoms, the clause C is trivially an E-SBC whenever it is an SBC.
Case 2: Some external ground atoms {A 1 , A 2 , . . ., A m } exist in the resolution environment of C. Any assignment α over {A 1 , A 2 , . . ., A m } may make some clauses {C 1 , C 2 , . . ., C k } in env F (C) true, so only env F (C)\{C 1 , C 2 , . . ., C k } needs to be considered as the resolution environment of C in F|α. Since C is an SBC upon S ⊆ C, all the L S i -resolvents (1 ≤ i ≤ n) obtained by resolving C with clauses in env F (C) are tautologies, so certainly all the L S i -resolvents (1 ≤ i ≤ n) obtained by resolving C with clauses in env F (C)\{C 1 , C 2 , . . ., C k } are tautologies. Therefore, C is an SBC upon S in F|α. As a result, C is an E-SBC.
However, if a clause C is an E-SBC, it may not be an SBC.For example, the clause C is an E-SBC but it is not an SBC in Example 4.
Since effectiveness is transitive, E-SBCE is also more effective than BCE. The situation is analogous for the clause elimination methods dealing with first-order formulas with equality. Theorem 9. Equality-set-blocked clause elimination (ESBCE) is more effective than equality-blocked clause elimination (EBCE).
Proof. If a clause C is an EBC upon L ∈ C in a formula F, it must be an ESBC upon S = {L} ⊆ C in F. This is because all the flat L S -resolvents of C coincide with the flat L-resolvents of C, so all the flat L S -resolvents of C are also valid. Therefore, the clause C is also an ESBC. However, if a clause is an ESBC, it may not be an EBC: in Example 7, the clause C is an ESBC but it is not an EBC. Hence, ESBCE is more effective than EBCE. Figure 1 shows the effectiveness relations among these clause elimination methods, separately for first-order logic without equality and first-order logic with equality.
Confluence Property
The confluence property is another crucial evaluation criterion for a clause elimination method. It indicates whether the order in which clauses are eliminated from a formula affects the final formula obtained. If a clause elimination method has the confluence property, the final formula is the same no matter what the elimination order is. If it does not, the final formula varies with the elimination order, and a good strategy for choosing the order then determines how many clauses can be removed. In this subsection, we discuss the confluence properties of the four clause elimination methods. Below is the definition of the diamond property [12]. Definition 12. A relation R has the diamond property if, for all x, y, z with xRy and xRz, there exists a v with yRv and zRv.
If a relation has the diamond property, it also has the confluence property [12]. A clause elimination method can be seen as a relation between original formulas and the new formulas obtained after elimination. Next, we prove that all four clause elimination methods have the confluence property.
Theorem 11. SBCE is confluent. Proof. Assume that there are two SBCs C 1 and C 2 in a formula F. We prove that whichever clause is removed first, the other one can still be removed. Without loss of generality, assume that C 1 is removed first and that, afterwards, C 2 is not an SBC in F\{C 1 }. Since C 2 is an SBC in the formula F, there exists S = {L 1 , L 2 , . . ., L n } ⊆ C 2 such that all the L S i -resolvents (1 ≤ i ≤ n) obtained by resolving C 2 with clauses in F\{C 2 } are tautologies. Now that C 2 is not an SBC after removing the clause C 1 , there exists at least one L S j -resolvent (1 ≤ j ≤ n), obtained by resolving C 2 with clauses in F\{C 1 , C 2 }, that is not a tautology, which contradicts the fact that C 2 is an SBC in the formula F. Therefore, C 2 is still an SBC after removing the clause C 1 . As a result, we can conclude that SBCE is confluent.
Theorem 12. E-SBCE is confluent. Proof. We prove that E-SBCE is confluent by showing that a clause C is still an E-SBC in any subset F′ of the formula F if it is an E-SBC in F. Assume that the clause C is not an E-SBC in the subset F′. Then there exists an assignment α over the external ground atoms of C and a subset S α = {L 1 , L 2 , . . ., L n } ⊆ C such that C is set-blocked upon S α in F|α but C is not set-blocked upon S α in F′|α. Then at least one L S α i -resolvent (1 ≤ i ≤ n), obtained by resolving C with the clauses in F′|α\{C}, is not a tautology; since F′|α\{C} ⊆ F|α\{C}, this contradicts the fact that C is set-blocked upon S α in F|α. Hence, E-SBCE is confluent.
Theorem 13. ESBCE is confluent. Proof. Assume that there are two ESBCs C 1 and C 2 in a formula F. We prove that the order of eliminating clauses has no effect on the redundancy of ESBCs. Without loss of generality, assume that the clause C 1 is removed first and that C 2 is not an ESBC in F\{C 1 } afterwards. Since C 2 is not an ESBC after removing C 1 , for every subset S = {L 1 , L 2 , . . ., L n } ⊆ C 2 there exists at least one flat L S i -resolvent (1 ≤ i ≤ n), obtained by resolving C 2 with clauses in F\{C 1 , C 2 }, that is not valid, which contradicts the fact that C 2 is an ESBC in F. Therefore, C 2 is also an ESBC in F\{C 1 }. Hence, equality-set-blocked clause elimination (ESBCE) is confluent. Theorem 14. E-ESBCE is confluent.
Proof. We show that if a clause C is an E-ESBC in a formula F, then C is an E-ESBC in any subset of F. Assume that C is not an E-ESBC in a subset F′ of F. Then there exists an assignment α over the external ground atoms such that C is equality-set-blocked upon some subset S α = {L 1 , L 2 , . . ., L n } ⊆ C in F|α while C is not equality-set-blocked upon S α in F′|α, which means there exists a flat L S α i -resolvent (1 ≤ i ≤ n), obtained by resolving C with clauses in F′|α\{C}, that is not valid. Since F′|α\{C} ⊆ F|α\{C}, this contradicts the fact that C is equality-set-blocked upon S α in F|α. Therefore, C is also an E-ESBC in F′. Hence, E-ESBCE is confluent.
Table 1 shows that all the novel clause elimination methods have the confluence property.
Conclusions
In this paper, we further generalized blocked clauses in first-order logic, proposing four types of redundant clauses: set-blocked clauses, extended set-blocked clauses, equality-set-blocked clauses, and extended equality-set-blocked clauses, of which the former two are suitable for formulas without equality while the latter two are suitable for formulas with equality. We proved the redundancy of the four types of clauses, so they can be removed from formulas without influencing the satisfiability or unsatisfiability of the original formulas. Finally, we discussed and analyzed their effectiveness and confluence properties: the four clause elimination methods are more effective than blocked clause elimination and equality-blocked clause elimination, and all four have the confluence property.
This paper is a theoretical study of the properties of the four types of clauses. Even though all four clause elimination methods are more effective than blocked clause elimination and equality-blocked clause elimination, identifying the four types of clauses is more complicated and more time-consuming in a concrete implementation. In future work, we will implement these clause elimination methods as preprocessing techniques for first-order theorem provers, balancing effectiveness against time consumption, with the goal of improving the performance of first-order theorem provers.
and L, ¬N 1 , . . ., ¬N m have the same predicate symbol but the opposite polarity.After flattening L in C and N 1 , . . ., N m in D, the new flattened clauses C = L ∨ C 1 and D = N 1 ∨ . . .∨ N m ∨ D 1 are obtained.In addition, the resolvent (C \{L })σ ∨ (D \{N 1 , . . ., N m })σ of C and D is called as a flat L-resolvent of C and D.
Example 2 .
Let the clause C = Q(a) ∨ P(b) ∨ R(c), S = {Q(a), P(b)}, and the formula F = {Q(a) ∨ P(b) ∨ R(c), ¬P(x) ∨ Q(a), ¬Q(a) ∨ P(b), P(x) ∨ ¬R(x)}. There is only one P(b) {Q(a),P(b)} -resolvent and one Q(a) {Q(a),P(b)} -resolvent of C, obtained by resolving C with ¬P(x) ∨ Q(a) and with ¬Q(a) ∨ P(b), respectively.
The assignment P(a)¬P(f(a))Q(a)¬Q(f(a))¬P(f(b))¬Q(f(b)) falsifies C 1 , but C 1 can be satisfied by flipping the truth values of the ground instances P(f(a)) and Q(f(a)) in S 1 ⊆ C 1 , yielding the new assignment P(a)P(f(a))Q(a)Q(f(a))¬P(f(b))¬Q(f(b)), which apparently satisfies C 1 ; however, it falsifies the other ground instance C 2 of C.
will not be included in F|α according to the definition of F|α.Therefore, we only consider ¬Q(b) ∨ ¬R(a) and ¬Q( f (y)) ∨ P( f (y)) as the resolution environment of C in F|α.If we choose S as S = {P(x)}, C cannot be resolved with either ¬Q(b) ∨ ¬R(a) or ¬Q( f (y)) ∨ P( f (y)) upon S, clause C is trivially set-blocked upon S = {P(x)} in F|α.If another assignment λ satisfies λ(R(a)) = 0 and λ(¬R(a)) = 1, then λ(¬Q(b)
Theorem 4 .
If a clause C is an E-SBC in a formula F, it is redundant w.r.t. F. Proof. Let F′ and F C be finite sets of ground instances of F\{C} and {C}, respectively. Assume that F\{C} is satisfiable; then there exists a propositional assignment α that satisfies all the ground instances in F′ but may falsify some ground instances in F C . According to Lemma 2, there exists a subset S = {L 1 , L 2 , . . ., L n } of C such that flipping the truth values of the ground literals of S in those falsified ground instances of C does not affect the truth values of the ground instances in F′. Even though flipping those truth values in the falsified ground instances of C may falsify other ground instances of C in F C , we can flip the truth values of the ground literals of the ground instances of S in the newly falsified ground instances successively, until all falsified ground instances become true. According to Theorem 1, F is satisfiable. Therefore, the clause C is redundant w.r.t. F.
¬N 1 , . . ., ¬N m have the same predicate symbol and polarity, by flattening L i , N 1 , . . ., N m , new literals obtained from L i , N 1 , . . ., N m and new flattened clauses C f and D f can be obtained from C and D. If the mgu of L
Example 7 .
not have the same predicate symbol but the opposite polarity. If all the flat L S i -resolvents (1 ≤ i ≤ n) of C are valid, then the clause C is called an equality-set-blocked clause (ESBC) upon S in the formula F. Let C = P(b) ∨ Q(b), S = {P(b), Q(b)}, and the formula F = {P(b) ∨ Q(b), ¬Q(c) ∨ P(b), ¬P(z) ∨ Q(z)}. Regarding Q(b), only the clause ¬Q(c) ∨ P(b) in F\{C} contains a literal, ¬Q(c), with the same predicate symbol and the opposite polarity as the literal Q(b) in S. Hence, the only flat Q(b) {P(b),Q(b)} -resolvent of C is x = b ∨ x = c ∨ P(b) ∨ ¬P(b), which is valid. Regarding P(b), only the clause ¬P(z) ∨ Q(z) in F\{C} contains a literal, ¬P(z), with the same predicate symbol and the opposite polarity as the literal P(b) in S, so the only flat P(b) {P(b),Q(b)} -resolvent is y = b ∨ Q(y) ∨ ¬Q(b), which is valid. According to Definition 9, the clause C is an ESBC upon S = {P(b), Q(b)} in the formula F.
Theorem 5 .
If a clause C is an ESBC upon S = {L 1 , . . ., L m } ⊆ C in a formula F, it is redundant w.r.t. F. Proof. Let F C , F′ and F E be finite sets of ground instances of C, F\{C} and ε L , respectively, and assume β is a propositional assignment that satisfies F′ and F E . Suppose the assignment β falsifies some ground instances in F C ; those ground instances can be satisfied by equivalence flipping the truth values of the ground literals of the ground instances of S in them, without influencing the satisfiability of F′ and F E , according to Lemma 3. Even though this equivalence flipping may falsify other ground instances in F C , those can in turn be satisfied by equivalence flipping the truth values of the ground literals of the ground instances of S in the newly falsified ground instances of C, without affecting the satisfiability of the previously satisfied ground instances. Because the ground instances in F C are finite, all of them can be satisfied by successively equivalence flipping the truth values of the ground literals of the ground instances of S in whichever ground instances have been falsified. Therefore, for any propositional assignment β satisfying the ground instances F′ and F E of F\{C} ∪ ε L , there exists a satisfying propositional assignment β′, obtained from β by successively equivalence flipping the truth values of the ground literals of the ground instances of S in the falsified ground instances of C.
Example 8 .
Suppose C = P(a) ∨ Q(a) and the resolution environment of C is env F E (C) = {¬P(x) ∨ x = a ∨ R(a), Q(a) ∨ ¬P(y), ¬P(a) ∨ ¬R(a)}. The external ground atoms are extG F E (C) = {R(a)}. If an assignment α satisfies α(R(a)) = 1 and α(¬R(a)) = 0, then α(¬P(x) ∨ x = a ∨ R(a)) = 1. Therefore, Q(a) ∨ ¬P(y) and ¬P(a) ∨ ¬R(a) are the clauses in the resolution environment of C in F|α. If the subset {Q(a)} is chosen as S α , neither Q(a) ∨ ¬P(y) nor ¬P(a) ∨ ¬R(a) can be resolved with the clause C upon S α , hence the clause C is trivially equality-set-blocked upon S α = {Q(a)} in F|α. If the assignment
Theorem 7 .
SBCE is more effective than BCE. Proof. If a clause C is a BC upon L ∈ C in a formula F, it must be an SBC upon S = {L} ⊆ C. Assume that there is a clause D = N 1 ∨ N 2 ∨ . . . ∨ N m ∨ D′ in the formula such that L, ¬N 1 , . . ., ¬N m can be unified by an mgu σ. Then the L S -resolvent of C and D is C′σ ∪ D′σ ∪ (S̄\{¬L})σ = C′σ ∪ D′σ (where C′ = C\S and S̄ denotes the complements of the literals in S, so S̄\{¬L} is empty for S = {L}), which is the same as the L-resolvent C′σ ∪ D′σ of C and D. Therefore, the L S -resolvent of C and D is a tautology.
Theorem 10 .
Extended equality-set-blocked clause elimination (E-ESBCE) is more effective than equality-set-blocked clause elimination (ESBCE).Proof.If a clause C is an ESBC upon S = {L 1 , L 2 , . . ., L n } ⊆ C in a formula F, then for all possible assignments over the external ground atoms of C, C must be an ESBC upon S = {L 1 , L 2 , . . ., L n } ⊆ C. Therefore, we can conclude that C is also an E-ESBC.Nevertheless, if a clause is an E-ESBC, it may not be an ESBC.For example, in Example 8, the clause C is an E-ESBC but it is not an ESBC.
Figure 1 .
Figure 1. Effectiveness relations among the clause elimination methods. An arrow from A to B means that A is more effective than B.
Table 1 .
Confluence properties of clause elimination methods. | 11,788 | 2018-10-28T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Impact of the Adipokine Adiponectin and the Hepatokine Fetuin-A on the Development of Type 2 Diabetes: Prospective Cohort- and Cross-Sectional Phenotyping Studies
Background Among adipokines and hepatokines, adiponectin and fetuin-A were consistently found to predict the incidence of type 2 diabetes, both by regulating insulin sensitivity. Objective To determine to what extent circulating adiponectin and fetuin-A are independently associated with incident type 2 diabetes in humans, and the major mechanisms involved. Methods Relationships with incident diabetes were tested in two cohort studies: within the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study (628 cases) and the Nurses' Health Study (NHS; 470 cases). Relationships with body fat compartments, insulin sensitivity and insulin secretion were studied in the Tübingen Lifestyle Intervention Program (TULIP; N = 358). Results Circulating adiponectin and fetuin-A, independently of several confounders and of each other, associated with risk of diabetes in EPIC-Potsdam (RR for 1 SD: adiponectin: 0.45 [95% CI 0.37–0.54], fetuin-A: 1.18 [1.05–1.32]) and the NHS (0.51 [0.42–0.62], 1.35 [1.16–1.58]). Obesity measures considerably attenuated the association of adiponectin, but not of fetuin-A. Subjects with low adiponectin and concomitantly high fetuin-A had the highest risk. Whereas both proteins were independently (both p<1.8×10−7) associated with insulin sensitivity, circulating fetuin-A (r = −0.37, p = 0.0004), but not adiponectin, associated with insulin secretion in subjects with impaired glucose tolerance. Conclusions We provide novel information that adiponectin and fetuin-A independently of each other associate with the diabetes risk. Furthermore, we suggest that they are involved in the development of type 2 diabetes via different mechanisms, possibly by mediating effects of their source tissues, expanded adipose tissue and nonalcoholic fatty liver.
Introduction
Among several pathways involved in the pathogenesis of the epidemically spreading disease type 2 diabetes, an altered secretory pattern of the expanded and inflamed adipose tissue is thought to be important for the regulation of insulin sensitivity and subclinical inflammation in various tissues [1]. In this respect, adiponectin has gained much attention in the past years, because the circulating levels of this adipokine are not only markers of type 2 diabetes risk but adiponectin is also strongly involved in its progression [2]. In analogy to dysregulated adipose tissue [2][3][4], there is increasing evidence that nonalcoholic fatty liver disease (NAFLD), which is predictive of metabolic diseases [5][6][7][8][9][10], is also associated with an altered secretory pattern of proteins, which can be referred to as hepatokines, and which are both markers of the disease and involved in its pathophysiology [11]. Among them, fetuin-A has gained much attention during recent years because of its association with type 2 diabetes and cardiovascular disease risk [12][13][14][15][16][17] and its important role in the pathogenesis of insulin resistance and subclinical inflammation [18][19][20][21][22][23].
In the present study we now asked two questions: first, to what extent are circulating levels of these proteins related to incident type 2 diabetes independently of each other? Second, because the circulating levels of these two proteins strongly reflect the dysregulation of their source tissues, adipose tissue and liver, can they be used to estimate the contribution of expanded and inflamed adipose tissue and NAFLD to the pathogenesis of insulin resistance and impaired beta cell function?
For this we investigated associations of circulating adiponectin and fetuin-A with incident type 2 diabetes by applying a head to head comparison of these proteins in two large cohort studies, the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study and the Nurses' Health Study (NHS). In addition, we studied the independent relationships of the circulating levels of these proteins with precisely measured body fat mass and distribution, liver fat content, insulin sensitivity and insulin secretion in subjects of the Tübingen Lifestyle Intervention Program (TULIP).
EPIC-Potsdam study
The EPIC-Potsdam Study is part of the multi-centre prospective cohort study EPIC [24]. In Potsdam, Germany, 27,548 subjects (16,644 women and 10,904 men) were recruited from the general population between 1994 and 1998. The age range was 35-65 years in women and 40-65 years in men. The baseline examination included anthropometric measurements, a personal interview including questions about prevalent diseases, and a questionnaire about socio-demographic and lifestyle characteristics [13]. Follow-up questionnaires were sent out every 2 to 3 years to identify incident cases of type 2 diabetes. All incident cases of diabetes identified were verified by treating physicians. Biochemical measurements were carried out in a case-cohort design nested within the cohort, details of which have been published previously [13]. Briefly, we randomly selected a subcohort of 2,500 individuals of whom 2,095 were non-diabetic at baseline (based on self-report and blood glucose determinations) and had anamnestic, anthropometrical and metabolic data for analysis. Of the 849 incident diabetes cases identified in the full cohort during an average follow-up of 7 years 628 remained for analyses after similar exclusions. Informed consent was obtained from all participants, and approval was given by the Ethical Committee of the State of Brandenburg, Germany.
NHS
A total of 121,700 registered nurses living in one of 11 populous U.S. states composed the NHS when they responded to a questionnaire inquiring about their medical history and lifestyle characteristics in 1976 [15]. In 2000-2001, 18,717 NHS participants aged 53-79 years provided blood samples. Among these participants, a prospective, nested, case-control study was conducted to examine plasma biomarkers in relation to type 2 diabetes risk. After excluding women with self-reported prevalent diabetes, cardiovascular disease, or cancer at baseline, 470 cases of type 2 diabetes from the date of blood draw through June 2006 were prospectively identified and confirmed. Risk-set sampling was used to randomly select one control for each case from the rest of the population who remained free of diabetes when the case was diagnosed; the probability of being selected as a control is proportional to the length of follow-up. Cases and controls were further matched for age at blood draw (±1 year), date of blood draw (±3 months), fasting status (fasted for 8 h or not), and race (white or other races) [15]. The study protocol was approved by the institutional review board of the Brigham and Women's Hospital and the Human Subjects Committee Review Board of Harvard School of Public Health.
TULIP
A total of 358 Caucasians who participated in the Tübingen Lifestyle Intervention Program (TULIP) [24] were included in the present analyses because they fulfilled at least one of the following criteria: a family history of type 2 diabetes, a BMI > 27 kg/m2, or a previous diagnosis of impaired glucose tolerance or gestational diabetes. Informed written consent was obtained from all subjects participating in the Tübingen studies, and the Ethical Committee of the University of Tübingen, Germany, approved the protocols.
Anthropometrics and metabolic parameters were measured as previously described [13,25,26]. Glucose tolerance was determined according to the 1997 World Health Organization diagnostic criteria [27]. Insulin sensitivity from the OGTT was estimated as proposed by Matsuda and DeFronzo [28]. In a subgroup (N = 244) insulin sensitivity was also measured during a euglycemic, hyperinsulinemic clamp [26]. The insulinogenic index, a precise estimate of glucose-induced insulin secretion, was assessed from the OGTT as follows: (insulin at 30 min-insulin at 0 min)/(glucose at 30 min-glucose at 0 min).
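For concreteness, the following is a minimal Python sketch of the two OGTT-derived indices mentioned above. Variable names, units, and the example values are illustrative; the insulinogenic index follows the formula given in the text, while the Matsuda index shown here is the commonly cited form (glucose in mg/dL, insulin in µU/mL) and is stated as an assumption rather than taken from this paper.

```python
# Sketch of OGTT-derived indices; values, units and the Matsuda formula are assumptions.
import numpy as np

def insulinogenic_index(ins0, ins30, glc0, glc30):
    """(insulin at 30 min - insulin at 0 min) / (glucose at 30 min - glucose at 0 min)."""
    return (ins30 - ins0) / (glc30 - glc0)

def matsuda_index(glucose, insulin):
    """Whole-body insulin sensitivity from OGTT samples (assumed standard formula)."""
    g, i = np.asarray(glucose, float), np.asarray(insulin, float)
    return 10000.0 / np.sqrt(g[0] * i[0] * g.mean() * i.mean())

# Example OGTT sampled at 0, 30, 60, 90, 120 min (hypothetical subject)
glucose = [95, 160, 150, 130, 110]   # mg/dL
insulin = [10, 60, 55, 45, 30]       # microU/mL
print(insulinogenic_index(insulin[0], insulin[1], glucose[0], glucose[1]))
print(matsuda_index(glucose, insulin))
```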
Measurement of adiponectin and fetuin-A
In the EPIC-Potsdam study and in the TULIP study adiponectin levels were determined with enzyme-linked immunosorbent assays (ELISA, Linco Research, Inc., St Charles, MO). Fetuin-A levels were measured using an immunoturbidimetric method (BioVendor Laboratory Medicine, Modreci, Czech Republic). In the NHS, both, adiponectin and fetuin-A levels were measured by enzyme immunoassays from R&D Systems (Minneapolis, MN).
Statistical analyses
In the EPIC Study and the NHS adiponectin and fetuin-A levels were categorized into quintiles based on subcohort or control participants. Hazard ratios as a measure of relative risk (RR) were computed using a weighted Cox proportional hazards model in EPIC-Potsdam, modified for the case-cohort design according to the Prentice method. Age was the underlying time variable in the counting processes, with entry defined as the subjects' age at the time of recruitment and exit defined as age at the diagnosis of diabetes, or censoring. In NHS, odds ratios were calculated using unconditional logistic regression to evaluate strength of associations [15].
We computed RRs/ORs for each quintile of adiponectin and fetuin-A compared with the lowest quintile. The significance of linear trends across quintiles of adiponectin and fetuin-A was tested by assigning each participant the median value for the quintile and modeling this value as a continuous variable. Because this analysis indicated no departure from linearity, we also considered adiponectin and fetuin-A as continuous variables, estimating the RR/OR associated with an increment of 1 SD. We used information on covariates obtained from the baseline examination in multivariate analyses, namely sex, education, physical activity, smoking, and alcohol intake. Analyses in NHS were further adjusted for other matching factors beyond age (race, fasting status, time of blood drawing), as well as for body mass index (BMI) and waist circumference. A sketch of the quintile-based trend variable is given below.
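The following pandas sketch illustrates the construction of the trend variable described above, assuming a DataFrame `df` with a biomarker column (column names are illustrative, not from the paper). The per-quintile median is assigned to each participant and can then be entered as a continuous term in the regression model (Cox or logistic, depending on the cohort).

```python
# Sketch of the quintile-median trend variable; DataFrame and column names are assumptions.
import pandas as pd

def quintile_trend_variable(series: pd.Series) -> pd.Series:
    codes = pd.Series(pd.qcut(series, q=5, labels=False), index=series.index)  # quintile 0..4
    medians = series.groupby(codes).median()     # median biomarker value within each quintile
    return codes.map(medians)                    # per-subject value for the trend test

# Example: df["fetuin_trend"] = quintile_trend_variable(df["fetuin_a"])
```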
In the TULIP study, data are given as means ± SE. Data that were not normally distributed (e.g. liver fat, insulin sensitivity, body fat distribution; Shapiro-Wilk W test) were logarithmically transformed. A p-value ≤ 0.05 was considered statistically significant. [Footnote to Table 1:] # Adiponectin and fetuin-A included simultaneously in the model. In addition, the multivariate model was adjusted for matching factors, including age at blood draw (yrs), race (white or not), fasting status (yes, no), and time of blood drawing, as well as body mass index (kg/m2), waist circumference (cm), smoking status (current smoker, past smoker, non-smoker), physical activity (in tertiles), alcohol use (abstainer, <5.0 g/day, 5.0-14.9 g/day, ≥15.0 g/day), and education (registered nurse, bachelor, master and higher). doi:10.1371/journal.pone.0092238.t001
Association of circulating adiponectin and fetuin-A with diabetes incidence in the EPIC-Potsdam Study and the NHS
associations for fetuin-A (Table 2). Adjustment for BMI and waist circumference attenuated the associations of adiponectin (by 29% in EPIC-Potsdam and 28% in NHS). However, the association of fetuin-A with diabetes risk was largely unaffected by adjustment for BMI and waist circumference ( Table 2). We then examined the joint effect of circulating adiponectin and fetuin-A by cross classifying participants by both variables using sex-specific medians as cut-offs. The RR/OR for the combination of a high fetuin-A-and a low adiponectin level compared with the opposite extreme was 3.54 (95% CI: 2.54-4.95) in EPIC-Potsdam and 4.61 (2.87-7.38) in NHS (Figure 1, A/B). Tests for interaction were non-significant in both cohorts.
Cross-sectional relationships in the TULIP
The 358 TULIP subjects had a mean age of 46 years and a mean BMI of 30 kg/m2. To investigate relationships of circulating adiponectin and fetuin-A levels with metabolic traits, we performed analyses in the total population and in subjects with normal glucose tolerance (NGT) and impaired glucose tolerance (IGT) separately (Table 1 in File S1).
Relationships of circulating adiponectin and fetuin-A with body fat content and distribution and with insulin sensitivity. In the total population circulating adiponectin, adjusted for age and gender, correlated negatively with BMI (r = −0.19, p = 0.0004) and with waist circumference (r = −0.26, p < 0.0001). Negative correlations were also found with total body fat mass (r = −0.15, p = 0.008) and, more strongly, with visceral fat mass (r = −0.40, p < 0.0001) and with liver fat content (r = −0.28, p < 0.0001). Circulating fetuin-A correlated only very weakly with BMI (r = 0.11, p = 0.04) and not statistically significantly with waist circumference (r = 0.10, p = 0.06), total body fat mass (r = 0.07, p = 0.25), and visceral fat mass (r = 0.07, p = 0.20). However, a positive correlation with liver fat content was found (r = 0.12, p = 0.04). In multivariate models including age and sex, both circulating adiponectin and fetuin-A were strongly and independently associated with insulin sensitivity estimated from the OGTT and measured during the clamp (Table 3, models 1). Inclusion of BMI and waist circumference in the models attenuated the associations of circulating adiponectin (change of the β; OGTT: −37%, clamp: −31%) with insulin sensitivity, but less so of circulating fetuin-A (OGTT: −17%, clamp: −18%) (Table 3, models 2). Similar relationships were found when these analyses were performed in subjects with and without NAFLD (OGTT: total N = 291, NAFLD = 93; clamp: total N = 203, NAFLD = 63). Here inclusion of BMI and waist circumference in the models attenuated the associations of circulating adiponectin with insulin sensitivity, particularly in subjects without NAFLD (change of the β; OGTT: −32%, clamp: −34%), while this association was only slightly attenuated or even became stronger in the smaller group of subjects with NAFLD (OGTT: −7%, clamp: +18%). The respective relationships of fetuin-A with insulin sensitivity were less strongly affected, both in subjects without (OGTT: −12%, clamp: −13%) and with (OGTT: −6%, clamp: −4%) NAFLD. After additional inclusion of liver fat content in models 2, the associations of adiponectin and fetuin-A with insulin sensitivity were further attenuated, and adiponectin, fetuin-A and liver fat content independently determined insulin sensitivity (Table 3, models 3). When we then divided non-obese subjects (N = 207) by the median insulin sensitivity estimated by the OGTT (13.78 arb.u.) we found circulating fetuin-A (OR for 1 SD: When we divided participants by the medians of circulating adiponectin and fetuin-A levels, individuals with low adiponectin and high fetuin-A levels had the lowest insulin sensitivity compared to the other groups (Figure 1, C/D).
Relationships of circulating adiponectin and fetuin-A with
insulin secretion. We next investigated whether circulating fetuin-A may be associated with glucose-induced insulin secretion in humans. In the total population circulating fetuin-A did not correlate with the insulinogenic index (r < 0.01, p = 0.99) when adjusted for age, gender and insulin sensitivity measured during the OGTT. However, this association depended on glucose tolerance status: while fetuin-A did not correlate with the insulinogenic index in subjects with NGT, a strong negative correlation between fetuin-A levels and the insulinogenic index was found in subjects with IGT (r = −0.37, p = 0.0004) (p for interaction = 0.024) (Figure 2, A/B). No significant relationships were found for circulating adiponectin with the adjusted insulinogenic index (all p > 0.055).
Discussion
During the last decade much effort has been made to identify important pathways involved in the natural history of type 2 diabetes. Thereby, several candidates were described, predominantly based on animal and on in-vitro studies [29][30][31]. However, often it was not possible to prove these pathways to be of high relevance for human metabolism. In human studies, on the other hand, several blood, genetic or phenotypic markers were found to predict incident type 2 diabetes [1][2][3][4]. Nevertheless, no precise mechanisms of action for several of these parameters are known and/or their predictive effect on the development of type 2 diabetes was either small or absent, which so far limits their potential in the prevention and the treatment of the disease.
Because these limitations largely do not apply to the adipokine adiponectin and the hepatokine fetuin-A, we here investigated to what extent circulating adiponectin and fetuin-A determine incident type 2 diabetes, independently of each other. Towards this aim, we first chose an epidemiological approach and investigated the associations of circulating adiponectin and fetuin-A with incident type 2 diabetes by applying a head to head comparison of these proteins in two large cohort studies, the EPIC-Potsdam study and the NHS. In both studies we found that circulating adiponectin and fetuin-A were associated with risk of incident diabetes, independently of several confounders, and of each other. The consistency of the association suggests that it might be generalizable to healthy populations, at least to those of Caucasian origin. Because the strength of association of adiponectinemia, but not of circulating fetuin-A, was considerably attenuated after accounting for estimates of overall and visceral obesity, our data support that adiponectin levels confer at least in part the effect of obesity on type 2 diabetes risk. In contrast, the association of fetuin-A with diabetes risk does not appear to considerably depend on body fatness. Rather, the available information from our and other studies suggests that the increase in circulating fetuin-A and the resulting decrease in insulin sensitivity may be a result of inflamed NAFLD [11,[32][33][34][35].
[Figure 1 caption: Groups with high/low adiponectin and fetuin-A levels were defined based on sex-specific medians. RRs were adjusted for age, sex, education, occupational activity, sport activity, cycling, smoking, alcohol intake, BMI, and waist circumference in EPIC-Potsdam and ORs for matching factors, including age at blood draw, race, fasting status, and time of blood drawing, as well as smoking status, physical activity, alcohol use, and education in the Nurses' Health Study. Relationship of circulating adiponectin and fetuin-A with insulin sensitivity estimated from the oral glucose tolerance test (OGTT; N = 358, C) and measured during the euglycemic, hyperinsulinemic clamp (N = 244, D). Subjects were divided by the medians of circulating adiponectin and fetuin-A in groups with high and low levels. P for trend after adjustment for age and sex. doi:10.1371/journal.pone.0092238.g001]
We then focused on the relationship of both circulating proteins with anthropometrics and metabolic traits in the precisely phenotyped subjects of TULIP. We confirmed the strong correlations of adiponectinemia with measures of body fat mass and distribution in these subjects, as well as the absence of such relationships for fetuin-A levels. Based on the known properties of adiponectin and fetuin-A to regulate insulin sensitivity, we confirmed that the circulating levels of these proteins were, independently of each other, associated with insulin sensitivity, estimated from the OGTT or measured by a euglycemic, hyperinsulinemic clamp. In agreement with the findings from the EPIC-Potsdam study and the NHS, the relationship of circulating adiponectin, but not of fetuin-A, was considerably attenuated after accounting for measurements of body fat content and distribution. Consequently, we asked whether circulating fetuin-A may be a better predictor of insulin sensitivity than circulating adiponectin in non-obese subjects, and we could confirm this hypothesis in our study.
Having found strong independent associations of circulating adiponectin and fetuin-A, the two proteins that regulate insulin sensitivity, with the diabetes risk, we then asked whether they may differentially impact insulin secretion, and thereby have distinct effects in the pathogenesis of type 2 diabetes. For adiponectin we have previously shown that this protein does not influence glucose-induced insulin secretion in humans [36]. In the present study we could show that fetuin-A levels are likewise not associated with insulin secretion in the total study population. Based on the knowledge that subjects with IGT have an impaired beta cell function [37,38], we then tested the hypothesis that fetuin-A is particularly relevant in this population, which is at very high risk for the disease. Indeed, when we separated the individuals into those with NGT and IGT, a strong negative relationship of fetuin-A with insulin secretion was found in subjects with IGT.
What is the relevance of our data for clinicians and researchers? Because the relative risk of incident diabetes was much higher for the combination of a high fetuin-A-and a low adiponectin level, than for the single circulating level of each protein, it may be important for clinicians to measure both proteins when it comes to the prediction of the risk of future type 2 diabetes. Whether fetuin-A and adiponectin improve prediction of diabetes risk beyond waist circumference and other classical risk factors remains, however, uncertain. Furthermore, fetuin-A may become an important determinant of insulin resistant states, particularly in non-obese subjects where adiponectin and hs-CRP levels lost their strong predictive power in our study.
For researchers we provide novel information that adiponectin and fetuin-A are independently involved in the pathogenesis of type 2 diabetes. Both proteins impact the development of the disease predominantly by the regulation of subclinical inflammation. Furthermore, we have support for the hypothesis that they mediate the effects of their source tissues, expanded adipose tissue and inflamed nonalcoholic fatty liver, on glucose metabolism and cardiovascular disease. In addition, we provide explorative information about a putatively newly identified cross-talk of the liver with the endocrine function of the pancreas. Whether there are direct effects of fetuin-A on signalling cascades in beta cells, or whether fetuin-A induces a chronic pro-inflammatory process in the human islets, needs to be investigated in future studies.
Some possible limitations of our findings have to be considered. The potential of residual confounding applies to our study as it does to observational studies in general. We adjusted for a large variety of known risk factors. Although fetuin-A and adiponectin remained significantly associated with diabetes risk, we cannot rule out that other unmeasured factors or that imprecision in the measurement of covariates explain this observation. Also, we only had a single blood drawing which might have introduced random measurement errors in determining fetuin-A and adiponectin. The lack of repeated measurements may have led to an underestimation of the observed associations. In conclusion, our findings support that the adipokine adiponectin and the hepatokine fetuin-A are reliable predictors of incident type 2 diabetes and insulin resistance, and that they are strongly and independently of each other involved in their pathogenesis. Moreover, we could provide indirect support that adiponectin mediates, at least in part, the impact of dysregulated and expanded visceral fat on metabolism, whereas increased fetuin-A levels are largely a result of NAFLD. Finally, we provided novel exploratory information that fetuin-A may play a role in the pathogenesis of type 2 diabetes by affecting insulin secretion.
Supporting Information
Table S1 Characteristics of the TULIP study participants. (DOC) Figure 2. Relationship of circulating fetuin-A with the insulinogenic index, adjusted for age, sex and insulin sensitivity estimated from the oral glucose tolerance test, in subjects with normal glucose tolerance (N = 267; A) and impaired glucose tolerance (N = 91, B). doi:10.1371/journal.pone.0092238.g002 | 4,996.6 | 2014-03-18T00:00:00.000 | [
"Medicine",
"Biology"
] |
Noise insensitive volumetric fusion method for enhanced photoacoustic microscopy
Abstract. Significance Photoacoustic imaging is an emerging imaging modality that combines the high contrast of optical imaging and the high penetration of acoustic imaging. However, the strong focusing of the laser beam in optical-resolution photoacoustic microscopy (OR-PAM) leads to a limited depth-of-field (DoF). Aim Here, a volumetric photoacoustic information fusion method was proposed to achieve large volumetric photoacoustic imaging at low cost. Approach First, the initial decision map was built through the focus detection based on the proposed three-dimensional Laplacian operator. Majority filter-based consistency verification and Gaussian filter-based map smoothing were then utilized to generate the final decision map for the construction of photoacoustic imaging with extended DoF. Results The performance of the proposed method was tested to show that our method can expand the limited DoF by a factor of 1.7 without the sacrifice of lateral resolution. Four sets of multi-focus vessel data at different noise levels were fused to verify the effectiveness and robustness of the proposed method. Conclusions The proposed method can efficiently extend the DoF of OR-PAM under different noise levels.
Introduction
Photoacoustic imaging, which combines the advantages of optical imaging and acoustic imaging to provide high-resolution and non-invasive imaging with deep penetration depth, [1][2][3][4][5][6] has been widely applied in biomedicine, such as breast cancer diagnosis, 7 thyroid imaging, 8 and brain imaging. 9 As an important branch of photoacoustic imaging, optical-resolution photoacoustic microscopy (OR-PAM) satisfies the criterion of high-resolution imaging in biomedical research. 10 Raster scanning is utilized in OR-PAM to capture three-dimensional (3D) information. However, the reliance on a focused laser beam for high-resolution imaging introduces challenges, such as reduced lateral resolution outside the focal regions and a limited depth-of-field (DoF). The restricted DoF in OR-PAM consequently hampers volumetric imaging speed, thereby imposing limitations on its practical applications, such as imaging of biological tissue with a rough surface (e.g., cerebrovascular tissue 11 ) and fast acquisition of physiological and pathological processes. 7,9 Conventional photoacoustic microscopy utilizes axial scanning to achieve volumetric imaging of the sample, and the multi-focus photoacoustic data can be acquired by mechanically moving the probe or the sample. 12 The utilization of volumetric fusion of multi-focus photoacoustic data is a cost-effective strategy for enhancing the DoF of OR-PAM.
To address the limited DoF of OR-PAM, Yao et al. 13 proposed double-illumination photoacoustic microscopy (PAM) by illuminating the sample from both the top and bottom sides simultaneously, which provides improved penetration depth and extended DoF. However, this method is restricted to thin biological tissue. Shi et al. 14 utilized the Grueneisen relaxation effect to suppress the artifact introduced by the sidelobe of the Bessel beam to achieve PAM with extended DoF. However, two lasers are required to excite the nonlinear photoacoustic signal. Hajireza et al. 15 reported a multifocus OR-PAM for extended DoF based on wavelength tuning and chromatic aberration. However, this system is limited to the acquisition of multifocus imaging at discrete depths. These methods can achieve high-resolution photoacoustic imaging with large DoF, at the expense of increased system complexity and high cost. Multi-focus image fusion (MFIF), which is used to integrate multiple images of the same target with different focal positions into a single in-focus image, [16][17][18] has shown promising prospects in addressing the narrow DoF of microscopy systems recently. 19,20 Awasthi et al. 21 proposed a deep learning-based model for fusing photoacoustic images reconstructed using different algorithms to improve the quality of photoacoustic imaging. However, this model is primarily targeted at photoacoustic tomography and a large amount of data is required for training. Zhou et al. 22 utilized a 2D image fusion algorithm with enhancement filtering to construct the photoacoustic image with extended DoF for accurate vascular quantification. However, this method falls short in the fusion of volumetric information for photoacoustic data.
In this work, a cost-effective volumetric fusion method is proposed, to facilitate the acquisition of high-resolution and large volumetric photoacoustic image with conventional PAM.The focused regions in multi-focus photoacoustic data were identified with the proposed 3D modified Laplacian operator.The misidentified regions in the built initial decision map (IDM) were corrected by consistency verification, and Gaussian filter (GF) was employed to smooth the map and reduce block artifact.Finally, photoacoustic data with enhanced DoF can be achieved by the voxel-wise weighted-averaging based on the final decision map (FDM).Quantitative evaluation suggests that the DoF of photoacoustic microscopy can be expanded by a factor of 1.7 while maintaining the lateral resolution within focused regions through the proposed method.The effectiveness and robustness of the proposed method were verified by fusing four sets of multi-focus vessel data under different noise levels.
Volumetric Fusion Based on 3D Modified Laplacian Operator
To construct high-resolution and large volumetric photoacoustic imaging, the focused regions in the multi-focus photoacoustic data were extracted and preserved in the fused imaging. A focus measure based on a 3D modified Laplacian operator, which quantifies the sharpness of photoacoustic imaging, was proposed to identify focused regions within multi-focus data. The 3D modified Laplacian ML_P for photoacoustic data P is defined as

ML_P(x, y, z) = |2P(x, y, z) − P(x − 1, y, z) − P(x + 1, y, z)| + |2P(x, y, z) − P(x, y − 1, z) − P(x, y + 1, z)| + |2P(x, y, z) − P(x, y, z − 1) − P(x, y, z + 1)|, (3)

where P(x, y, z) is the signal intensity of P at (x, y, z). The focus measure for the i'th image block of P centered at (x0, y0, z0) is defined as the sum-modified Laplacian (SML) within the i'th block, as shown in Eq. (4):

SML_i = Σ over the block centered at (x0, y0, z0), with x from x0 − N to x0 + N, y from y0 − N to y0 + N, z from z0 − N to z0 + N, of ML_P(x, y, z), (4)

where N determines the size of the block. The SML evaluates the high-frequency information of an image block, and a larger SML represents a higher level of focus. The multi-focus photoacoustic data P1 and P2, with size H × W × L, simulated through the virtual OR-PAM 24 were divided into non-overlapping blocks of equal size (2N + 1)^3, respectively. The focus measures based on the 3D modified Laplacian operator for each block in P1 and P2 were computed to build the IDM, as shown in Eq. (5):

IDM(x, y, z) = 1 if SML_i of P1 ≥ SML_i of P2, and 0 otherwise, (5)

where SML_i of P1 and SML_i of P2 are the focus measures for the i'th block (the block containing voxel (x, y, z)) in P1 and P2, respectively. The voxels within focused regions in the multi-focus photoacoustic data can be identified through the IDM: the voxel (x, y, z) is considered to be within the focused regions of P1 if IDM(x, y, z) = 1, and within the focused regions of P2 if IDM(x, y, z) = 0. Noise in the photoacoustic data can cause errors in the focus detection. Therefore, consistency verification based on a majority filter (MF) was employed to refine the IDM: if the j'th block is identified as a focused region in P1 while the adjacent six blocks in the orthonormal six directions are identified as focused regions in P2, the IDM values of the voxels of the j'th block are switched to zero, and vice versa. The GF was then employed on the refined IDM to smooth the boundaries and generate the FDM. The Gaussian filtering of the IDM is formulated as

FDM(x, y, z) = (1/W) Σ over (x′, y′, z′) ∈ S of G(x′, y′, z′; x, y, z) IDM(x′, y′, z′), (6)

where G is the Gaussian function for the spatial difference and S is a window centered at (x, y, z). W is the normalization factor defined as

W = Σ over (x′, y′, z′) ∈ S of G(x′, y′, z′; x, y, z), (7)

and G is given by Eq. (8),

G(x′, y′, z′; x, y, z) = exp(−((x − x′)^2 + (y − y′)^2 + (z − z′)^2) / (2σ^2)), (8)

where σ is the standard deviation of the Gaussian function G. The high-resolution and large volumetric photoacoustic imaging P_f was computed by voxel-wise weighted averaging, as shown in Eq. (9):

P_f(x, y, z) = FDM(x, y, z) P1(x, y, z) + (1 − FDM(x, y, z)) P2(x, y, z). (9)

The process of the proposed volumetric fusion method is shown in Fig. 1. The ML of each voxel in the multi-focus photoacoustic imaging is computed, and the multi-focus photoacoustic imaging is divided into non-overlapping blocks. The SML of each block is calculated and compared to construct the IDM. The IDM is refined with the MF and smoothed with the GF to generate the FDM. The fusion result is constructed by voxel-wise weighted averaging of the multi-focus imaging based on the FDM.
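To make the pipeline concrete, the following is a minimal numpy/scipy sketch of Eqs. (3)-(9). It is not the authors' code: the block size, the Gaussian σ, the tie-breaking rule in Eq. (5), the wrap-around boundary handling, and the assumption that the volume dimensions are divisible by the block size are all illustrative choices.

```python
# Minimal sketch of the fusion pipeline in Eqs. (3)-(9). Block size b, sigma, boundary
# handling (np.roll wraps at the edges) and the divisibility assumption are simplifications.
import numpy as np
from scipy.ndimage import gaussian_filter

def modified_laplacian_3d(p):
    """Eq. (3): sum of absolute second differences along x, y and z."""
    ml = np.zeros_like(p, dtype=float)
    for ax in range(3):
        ml += np.abs(2.0 * p - np.roll(p, 1, axis=ax) - np.roll(p, -1, axis=ax))
    return ml

def block_sum(vol, b):
    """Eq. (4): per-block sum (SML) over non-overlapping b x b x b blocks."""
    nx, ny, nz = (s // b for s in vol.shape)
    v = vol[:nx * b, :ny * b, :nz * b].reshape(nx, b, ny, b, nz, b)
    return v.sum(axis=(1, 3, 5))

def consistency_verification(idm_blocks):
    """Flip a block whose six face neighbours all disagree with it (majority-filter step)."""
    nb = sum(np.roll(idm_blocks, s, axis=ax) for ax in range(3) for s in (-1, 1))
    out = idm_blocks.copy()
    out[(idm_blocks == 1) & (nb == 0)] = 0
    out[(idm_blocks == 0) & (nb == 6)] = 1
    return out

def fuse_volumes(p1, p2, b=5, sigma=2.0):
    """Eqs. (5)-(9): decision map, smoothing, and voxel-wise weighted averaging."""
    sml1 = block_sum(modified_laplacian_3d(p1), b)
    sml2 = block_sum(modified_laplacian_3d(p2), b)
    idm_blocks = consistency_verification((sml1 >= sml2).astype(float))
    # Expand the per-block decision back to a voxel-wise map, then smooth it (FDM).
    idm = np.repeat(np.repeat(np.repeat(idm_blocks, b, 0), b, 1), b, 2)
    fdm = gaussian_filter(idm, sigma=sigma)
    q1 = p1[:idm.shape[0], :idm.shape[1], :idm.shape[2]]
    q2 = p2[:idm.shape[0], :idm.shape[1], :idm.shape[2]]
    return fdm * q1 + (1.0 - fdm) * q2
```

In this sketch the normalized Gaussian smoothing of Eqs. (6)-(8) is delegated to scipy's gaussian_filter, which performs the same windowed, normalized weighting.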
Multi-Focus 3D Data Simulation through Virtual OR-PAM
The multi-focus 3D photoacoustic data were simulated through a virtual photoacoustic microscopy 24 using a Gaussian beam, as shown in Fig. 2. An objective lens with a numerical aperture of 0.14 was used to generate the Gaussian beam. The wavelength of light was set to 532 nm. The 3D grid is Nx × Ny × Nz = 120 × 120 × 120 and the pixel size is dx = dy = 2 μm, dz = 3 μm. The medium around the sample was set as water, and the speed of sound was set to 1500 m/s. The photoacoustic signal was collected using an ultrasonic detector with a center frequency of 75 MHz and a bandwidth of 67%. Multi-focus photoacoustic data with two focuses were employed as an example to demonstrate the proposed method. Two vertically tilted fibers were placed in the grid as required and imaged to test the performance of the proposed method. Four sets of multi-focus tilted vessels at five noise levels (Gaussian noise was added in the experiment since most noise in photoacoustic imaging can be considered as Gaussian noise [25][26][27][28]) were simulated to further validate the robustness and effectiveness of the proposed method. The experimental data in this work were simulated on a 64-bit Windows 10 desktop with an Intel(R) Core(TM) i7-12700H CPU @ 2.30 GHz. The 3D visualization and max amplitude projection (MAP) images of the simulated multi-focus photoacoustic data presented in Fig. 2 show that the imaging within the DoF reveals more details, while the imaging outside the DoF appears partially blurred. The B images of the simulated multi-focus data at the position indicated by the white dashed lines in the MAP images demonstrate that the lateral resolution within focused regions is better than that of the defocused regions.
Performance Test by Fusing Multi-Focus Vertically Tilted Fiber
The performance of the proposed method was tested by fusing multi-focus vertically tilted fibers, as shown in Figs. 3 and 4. Figure 3 shows the process of the proposed volumetric information fusion method, taking the simulated fibers f 1 and f 2 as an example. The focus measures based on the 3D modified Laplacian operator of the multi-focus fiber were calculated to generate the IDM. The IDM was then refined and smoothed by filtering to generate the FDM, and photoacoustic imaging with extended DoF was achieved by voxel-wise weighted averaging, as shown in Fig. 3. The full width at half maximum (FWHM) of the profile of the fiber f 1 before and after fusion was measured at different depths, as shown in Fig. 4(i). A smaller FWHM suggests a better lateral resolution. The lateral resolution in the focused part (30 to 100 μm of Focus 1, 80 to 150 μm of Focus 2) is better than that of the defocused part (80 to 150 μm of Focus 1, 30 to 100 μm of Focus 2). The DoF of OR-PAM is quantified as the depth interval over which the FWHM of the fiber becomes twice that of the focal plane. The DoF of the fiber of Focus 1, Focus 2, and Fusion was measured to be about 71.2, 79.9, and 124.6 μm, respectively, which suggests that the proposed method can increase the DoF of OR-PAM by a factor of 1.7 without sacrificing the lateral resolution. The SNR variation of the fiber f 1 along the depth direction was measured, as shown in Fig. 4(j). The SNR in the focused part (30 to 100 μm of Focus 1, 80 to 150 μm of Focus 2) is higher than that of the defocused part (80 to 150 μm of Focus 1, 30 to 100 μm of Focus 2) and is precisely preserved in the fused fiber.
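The following is a sketch of the FWHM/DoF quantification described above: the FWHM of a lateral profile is estimated at each depth, and the DoF is taken as the depth range over which the FWHM stays at most twice its focal-plane minimum. The coarse threshold-crossing FWHM estimator and the variable names are our own choices, not the authors' implementation.

```python
# Sketch of FWHM-per-depth and DoF estimation; the FWHM estimator is deliberately coarse.
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a 1D profile sampled at positions x."""
    p = np.asarray(profile, float) - np.min(profile)
    above = np.where(p >= 0.5 * p.max())[0]
    return x[above[-1]] - x[above[0]]

def depth_of_field(z, fwhm_vs_depth):
    """Depth interval over which the FWHM is at most twice the focal-plane (minimum) FWHM."""
    inside = np.where(fwhm_vs_depth <= 2.0 * np.min(fwhm_vs_depth))[0]
    return z[inside[-1]] - z[inside[0]]

# Example usage (hypothetical arrays):
# fwhm_per_depth = np.array([fwhm(x_axis, volume[:, y0, k]) for k in range(volume.shape[2])])
# dof = depth_of_field(z_axis, fwhm_per_depth)
```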
Large Volumetric Imaging of Vasculature
The robustness and effectiveness of the proposed method were verified by fusing multi-focus vessels at five noise levels, as shown in Fig. 5. Figures 5(a) and 5(b) are the MAP images of one set of multi-focus vessels, where the optical focuses were set at z = 35 (Focus 1) and z = 60 (Focus 2) in the 3D grid, respectively. The complete structure of the vessel cannot be captured in a single image due to the narrow DoF, as shown in Figs. 5(a) and 5(b). The noise present in the MAP images increases with the decrease in SNR. Figure 5(c) is the MAP image of the high-resolution and large volumetric data (Fusion) obtained via the proposed method at five noise levels, which verifies the robustness of our method to noise. The focused regions can be accurately identified through the proposed 3D modified Laplacian operator under noisy conditions. Figures 5(d)-5(f) are the 3D visualization rendered by the Amira software for intuitive observation. The normalized intensity distribution at positions 1 and 2 indicated by the white dashed lines in Figs. 5(g) and 5(h) was analyzed to evaluate the capability of our method to preserve lateral resolution within focused regions, as shown in Figs. 5(i) and 5(j). When there is no noise, the FWHM of the normalized photoacoustic signals of Focus 1, Focus 2, and Fusion at position 1 was measured to be 2.7 (Focus 1), 9.3 (Focus 2), and 2.7 μm (Fusion), respectively, as shown in Fig. 5(i). The FWHM of the second peak of the normalized photoacoustic signals of Focus 1, Focus 2, and Fusion at position 2 was measured to be 9.9 (Focus 1), 3.0 (Focus 2), and 3.0 μm (Fusion), respectively, as shown in Fig. 5(j). The lateral resolution within focused regions can be maintained in the fused vessel through the voxel-wise weighted-averaging fusion rule, which validates the effectiveness of the proposed method in processing samples with intricate structures. The normalized intensity distribution of the Fusion at position 2 under different noise levels was analyzed, as shown in Fig. 5(k). The influence of noise on the photoacoustic signal is insignificant when the SNR is 30 dB; the decrease in SNR leads to an increase in the influence of noise on the photoacoustic signal. The photoacoustic signal cannot be visually distinguished from the added noise when the SNR drops to 15 dB, as shown in Fig. 5(k). However, the focused regions within the DoF can still be accurately identified and preserved through the proposed method when a high level of noise is added, which further verifies the effectiveness and robustness of our method.
The superior performance of the proposed method over previous representative 2D-based MFIF algorithms was verified by comparing the MAP images and B images of the fused data. Two state-of-the-art MFIF methods were selected for comparison: the transform domain-based dual tree complex wavelet transform (DTCWT) 29 and the spatial domain-based guided filter-based focus region detection for multi-focus image fusion (GFDF). 30 Four common metrics in MFIF were selected to quantify the performance of the different methods from multiple perspectives: (1) the information theory-based metric cross entropy (CE), 31 which estimates the dissimilarity between the source images and the fused image in terms of information; (2) the image feature-based metric spatial frequency (SF), 32 which reveals the edge and texture information of the fused image; (3) the human perception-based metric Q_CV, 33 which quantifies the performance of an MFIF algorithm by leveraging the principles of the human visual system; and (4) the similarity-based metric structural similarity index measure (SSIM), 34 which measures the similarity between the source images and the fused image in terms of luminance, contrast, and structure. The multi-focus volumetric imaging of the vessel was sliced to establish the multi-focus slice sequence. The 2D slices at the same position in the multi-focus sequence were processed with DTCWT and GFDF, respectively, and the fused 2D slices were stacked to produce high-resolution photoacoustic imaging with extended DoF. As shown in Fig. 6, one group of simulated multi-focus vessels was selected to compare the different methods at two noise levels. Figures 6(a) and 6(b) are the MAP images of Focus 1, Focus 2, and the fused vessel obtained via the different methods when no noise is added and when SNR = 25 dB, respectively. The B images at the position indicated by the white dashed line in Fig. 6(a) before and after fusion were compared, as shown in Figs. 6(c) and 6(d). The normalized intensity distribution of the photoacoustic signal processed with the Hilbert transform at the position indicated by the yellow dashed line in Fig. 6(c) before and after fusion is compared in Figs. 6(e) and 6(f). The proposed method, which utilizes the 3D modified Laplacian operator for the focus measure of volumetric imaging, can accurately identify and preserve the lateral resolution within focused regions at different noise levels compared to 2D-based MFIF methods. By contrast, the GFDF, which was affected by the lateral resolution outside the DoF in Focus 2, failed to identify the lateral resolution within focused regions at different noise levels. The poorer lateral resolution within the defocused regions in Focus 2 was mistakenly preserved in the fused photoacoustic imaging, as shown in Figs. 6(c)-6(f). The advantage of the proposed method is attributed to the direct focus detection and fusion of volumetric information, whereas the slicing of volumetric imaging leads to a loss of spatial correlation when implementing 2D-based MFIF methods. The MAP images of the four groups of high-resolution and large volumetric vessels obtained through the different methods were evaluated using the four metrics when there is no noise and when SNR = 25 dB, respectively, as shown in Table 1. The proposed volumetric fusion method outperforms the conventional 2D-based MFIF methods from multiple perspectives, which further validates the effectiveness of the direct fusion of volumetric photoacoustic information.
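To illustrate two of the four metrics, a short sketch is given below. It assumes scikit-image is available and uses the common textbook definitions of spatial frequency and SSIM, which may differ in detail from the cited implementations (CE and Q_CV are omitted).

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def spatial_frequency(img):
    """Spatial frequency of a 2D image: RMS of first differences along rows and columns."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # differences between rows
    cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # differences between columns
    return np.sqrt(rf ** 2 + cf ** 2)

# Example usage on MAP images assumed to be floats scaled to [0, 1]:
# sf_fused = spatial_frequency(map_fused)
# ssim_vs_focus1 = ssim(map_focus1, map_fused, data_range=1.0)
```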
Conclusion and Discussion
We proposed a noise-insensitive volumetric fusion method that utilizes the 3D modified Laplacian operator and Gaussian filtering to enhance the DoF of OR-PAM. Experimental results demonstrate that the proposed method is capable of extending the DoF of OR-PAM by a factor of 1.7 and shows superior performance at different levels of noise. The superiority of the proposed method over previous 2D-based MFIF methods was quantitatively verified with four categories of metrics. Our work provides a cost-effective approach for the acquisition of photoacoustic imaging with extended DoF. The virtual OR-PAM, which is capable of performing A-, B-, and C-scans, was verified to be consistent with the actual OR-PAM system. 11,35,36 Hence, the experiments based on the virtual OR-PAM are reliable. For samples with simple structures, the focused boundary can be determined through the quantification of FWHM or SNR, and the volumetric fusion can be achieved through the simple combination of multi-focus data. For example, the focused boundary of the multi-focus fiber in this work can be estimated as a single plane given by the intersection between the FWHM curves of Focus 1 and Focus 2, as shown in Fig. 4(i). For samples with intricate structures (such as cerebral vasculature), accurately quantifying the variation of FWHM and SNR along the depth direction is difficult. In addition, the depth of the optical focus shifts due to variations in the scattering and absorption of heterogeneous samples. 37 The focused boundary then cannot be approximated as a single plane. Therefore, this boundary-based approach is not applicable to turbid biological tissue and is limited to transparent samples with weak absorption and scattering, such as water. Furthermore, it can be time-consuming and labor-intensive for multi-focus data that include more than two focuses. By contrast, the proposed method can automatically identify and preserve the focused regions within multi-focus data in the fusion results.

In this work, the effectiveness of the proposed method was demonstrated on dual-focus photoacoustic data of a fiber and a vessel. The proposed method can, however, be applied to multi-focus data that include more than two focuses by pairwise fusion: dual-focus data with adjacent focuses are first combined through the proposed method, and the resulting fused data are then integrated with the data from the next adjacent focus. This process is repeated iteratively until the data from all focuses have been processed, yielding high-resolution and large volumetric photoacoustic imaging. The proposed method is not limited by the focal positions or the number of focuses in the multi-focus data. Compared to the approach of estimating a focused boundary through FWHM quantification, it offers enhanced flexibility, ease of portability, and broader applicability.
Fig. 1 Proposed volumetric fusion method based on SML. ML is the discrete approximation of ∇²M.
Fig. 3 Demonstration of the process of volumetric fusion, taking the fusion of the multi-focus fiber as an example. f1 and f2 are the two vertically tilted fibers. ML_P1 and ML_P2 are the discrete approximations of ∇²M for P1 and P2, respectively. The yellow dashed lines in the IDM and FDM indicate the positions of fibers f1 and f2.
Fig. 4 (a), (b) MAP images of the multi-focus fiber data. (c) MAP image of the fiber fused through the proposed method. The depth-coded MAP images of (a)-(c) are presented in the lower right corner, respectively. (d), (e) 3D visualization of the multi-focus fiber from two views, respectively. f1 and f2 are the two vertically tilted fibers. (f)-(h) B images at the white dashed lines in panel (c) before and after fusion. (i) Variation of FWHM with depth before and after fusion. (j) Variation of SNR with depth before and after fusion. NPA, normalized photoacoustic amplitude.
Fig. 5 (a), (b) MAP images of the multi-focus vessel data at five noise levels. (c) MAP image of the vessel fused through the proposed method. (d)-(f) 3D visualization for (a)-(c) rendered by the Amira software. (g) Close-up images of the vessel before and after fusion at five noise levels, indicated by the white dashed rectangle m in panel (c). (h) Close-up images of the vessel before and after fusion at five noise levels, indicated by the white dashed rectangle n in panel (c). (i), (j) Normalized intensity distribution before and after fusion at positions 1 and 2, indicated by the white dashed lines in panels (g) and (h). (k) Normalized intensity distribution of the fused vessel under five noise levels at position 2, indicated by the white dashed line in panel (h). NPA, normalized photoacoustic amplitude.
Fig. 6 (a), (b) MAP images of Focus 1, Focus 2, and the fused data when there is no noise and when SNR = 25 dB, respectively. (c), (d) B images at the position indicated by the white dashed line in panel (a) when there is no noise and when SNR = 25 dB, respectively. (e), (f) Normalized intensity distribution at the position indicated by the yellow dashed line in panel (c) when there is no noise and when SNR = 25 dB, respectively. NPA, normalized photoacoustic amplitude.
Table 1
Quantitative evaluation of different methods. | 7,484.4 | 2023-10-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
It's a Matter of Time
This paper describes a universe consisting only of time dilation and compares it to the one which we observe.
Introduction
I would first like to apologize that this paper may not be up to the rigor you are accustomed to. At best, I consider myself an amateur with a penchant for out-of-the-box thinking. That said, I have a perspective that I believe is important to share.
Matter has been described with numerous characteristics that almost appear magical at times, such as gravity, atomic repulsion, etc., but possibly the most intriguing is that it dilates time. This time dilation is seen as an outcome of gravity and the constant speed of light. I would like to take a moment to discuss the opposite: a world where there are only dilations in time, and we observe matter and energy as the interactions of these dilations.
Time Points
If we formulate the time dilation similarly to the Schwarzschild formulation [1], substituting the mass and the gravitational multiplier with 1, it would appear as the following equation. Time would slow as the distance to the location of the time dilation decreased. That which would take a fraction of a second in real time could take billions of years in relative time, as shown in Figure 1. Here t′ is the location-specific relative time as observed by someone unaffected by the time dilation; t is the real time that takes place absent any dilation; h is the distance from the time dilation; T is the time dilation coefficient. This is an asymptotic relationship showing that the dilation in time approaches 1/∞ with less and less distance. This indeed describes more of a point than an object with physical dimensions. Its main dimensionality becomes its impact on the time around it. In such a universe of relative time, these points of time dilation, or "time points", would be all that exists.
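The equation itself did not survive extraction. The following is only one plausible reconstruction, assuming the Schwarzschild form with the mass and gravitational multiplier replaced by 1 and T as the coefficient described above; the author's exact expression may differ:

$$t' \;=\; t\,\sqrt{1-\frac{T}{h}}.$$

Under this reading, relative time t′ slows to a standstill as h approaches T, consistent with the asymptotic behavior described in the text.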
Their impact across distances would be faint and not observable. A single time point could only be observed when another time point were to come within a distance that causes enough of an impact for the observer to witness. These time points would remain unobservable while the distances between them are substantial.
As two time points, A and B, move in close proximity towards one another, relative time for each would change. As B approaches A, time would slow. Any given movement in real time would translate into a slower movement in relative time. B would never be able to occupy the same location as A, because time would slow to infinity before they could move to overlap: as the distance h decreases, dh/dt' decreases even faster. Mathematically, this can be represented as follows. As time point B moves towards A, it would initially be travelling along the vector V_B, which, treating it as an array, is (dx/dt, dy/dt, dz/dt) in real time, while in relative time the corresponding vector is V'_B = (dx/dt', dy/dt', dz/dt'), with the relative time t' depending on the distance h(x, y, z). Comparing the vector V'_B, the movement in relative time, with the vector V_B, we see that the components that make them differ in direction are ∂h/∂x, ∂h/∂y, and ∂h/∂z. In the case where θ_z = 0 and z = 0, the time-dilated vector takes the form ( · , · , 0)^T, demonstrating that movement in relative time is not the same as in real time. One could translate this relative-time vector back into real time, resulting in a new real-time vector with a new dx and, subsequently, a changed trajectory. An alternative interpretation is that as time point B approaches A, as in Figure 2, the movement towards time point A slows while the movement perpendicular to time point A does not, allowing time point B to move slightly more along its tangent.
The motion of time point B towards time point A is relative: as B approaches A, A approaches B. In this way, the vector V_B has a reciprocal vector V_A with an equal but opposite motion. Because of the increased movement in relative time along the opposite tangents, time points A and B would rotate around one another in relative time when compared to other time points at a distance.
Eventually, the two time points would pass one another in real time and V_B would begin to increase the distance between time points A and B. When this happens, the impact of time dilation would dissipate until it is indistinguishable from that of any other time point. A third time point could also come near and disrupt this interaction between A and B: time point C could dilate time more for time point B than for time point A and alter B's trajectory relative to A. This could break the relationship of A and B apart even before the movement of the original vectors could. For these reasons, the interactions between two time points would be temporary and not a stable construct.
A third time point, C, however, could encounter time points A and B in a specific way, resulting in a more interesting relationship. Assuming that time dilation is additive, we can represent the vector V'_C in terms of two contributions: α, the static time dilation based on a time point's position relative to the other time points being examined, and β, the tangential time dilation occurring due to the movements of the various time points being considered, where j indexes the x, y, and z axes. We can examine the individual β's and see that there are situations where the trajectory of V'_C would be altered in such a way as to move towards the center of time points A and B. When β_CAj and β_CBj are of opposite signs, the corresponding trajectory would continue to travel between time points A and B. Even in three-dimensional space, as a slower-moving time point C moves towards the circle produced by a rotating A and B, we will see β_CAj and β_CBj remaining of opposite signs. More important, though, is the situation shown in Figure 3, in which these tangential dilations balance one another. When this happens, V'_C would continue on the same slope as V_C. Eventually, the distances between all three time points will become indistinguishable from one another, reaching the distance δ at which, for the sake of any other interactions, time can be considered to stop. This results in a situation where all three time points are in equilibrium. This creates the first stable set of time points, one that could not be pulled apart by themselves or by other sets of time points merely coming into their vicinity.
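The expression for V'_C was lost in extraction; one plausible reading of the additive assumption, per component j ∈ {x, y, z}, is

$$V'_{C,j} \;=\; \bigl(\alpha + \beta_{CAj} + \beta_{CBj}\bigr)\,V_{C,j},$$

with α the static dilation from the positions of A and B and β_{CAj}, β_{CBj} the tangential dilations from their motion; this is an assumption, not the author's exact formula.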
As another time point, D, nears the set of time points ABC, it would be slowed based on the combined time dilations of ABC and prevented from moving towards them. However, the time dilation caused by D would not be enough to prevent the set ABC from moving towards it. External observers would see D being swept along with time points ABC until its vector starts to move away from ABC or it encounters other time point(s) that release it by altering time point D's vector relative to ABC. The set of time points ABC would eventually approach the distance δ from one another, essentially forming the points of an equilateral triangle.

Such a stable geometry would have another distinct feature: it is progressive. In other words, it can come together while A and B are already interacting with one another. Other stable geometries may be constructed, but with the exception of the next two, they require time points aligning themselves in a more simultaneous fashion.

The next progressive and stable geometry would require a fourth time point, D, to travel in three-dimensional space on a vector that takes it through the triangle formed by time points ABC. If one looks at the triangular set of time points in three dimensions, one will find that there is a location directly above and below its center that is equidistant from all three time points. A time point heading through the triangle would have β's that move it into this location, where it would find that it has similar time dilations to all three time points in the approaching triangle. At such a location, time point D, though still moving towards the center, would have its tangential movement equalized by the other time points. This is depicted in Figure 4. This equilateral triangular pyramid of time points would also be stable: other time points would not be able to unlock any of the four time points in such a structure. Other time points could approach this pyramid, but none would be immune to other sets of time points moving them away. Also, because it is formed by adding a single time point to an existing set, the triangle, it is progressive and easily formed.
A third stable and progressive geometry would arise if two stable triangles, as in Figure 5, were to align with their centers moving through the other's triangle. As this trajectory continues to be altered in relative time, the planes of the triangles would become parallel and the time points would become rotated 60 degrees from one another. The resulting relationship would have six points with eight equilateral triangles providing eight planes or sides. Once again, this geometry would be stable in that no other set of time points could influence one time point in the structure independently of the others, and although an external time point would be prevented from moving closer to this group, the group would not be stopped by it.

In such a universe of relative time, these time points and their resulting geometries would make up the foundations of matter and energy, even though moving time dilators are the only thing that exists. Examples of such stable geometries are given in Table 1.
Time points that are not in relationships with other time points would act like dark matter, whose presence in large numbers may be felt while remaining individually elusive. The unstable relationships of pairs or other unstable combinations, with their short existence, would present themselves similarly to quarks. The first stable construct, the equilateral triangle, would be the smallest of the stable constructs and represent the electron. The equilateral pyramid would be only slightly larger than the electron and act as a neutron. The largest, the proton, would be the eight-sided equilateral structure and, because of its increased number of time points, it would dominate the others.
As these subatomic constructs move about, their interactions would be driven by the same principles of time dilation that formed them. As neutrons move towards protons, and vice versa, time would dilate, joining them by making this movement take forever. Electrons would join with such a nucleus in a similar fashion. Multiple protons and neutrons, under the right circumstances, would obtain vectors that keep them associated with one another.
Forces Are Phenomena
As these basic structures making up a universe of time dilation move through three-dimensional space, their interactions with other groups of time points would cause specific movements that would appear as separate forces. The first obvious phenomenon that would appear as a force is time slowing as sets of time points approach one another: as they approach, the time dilation increases towards infinity. As the resulting movement of the set slows, it would give the appearance that one set is repelling the other. That which would be described as atomic repulsion would be merely time dilating to its extreme as objects approach one another, together with the resulting tangential movement that accompanies it.

Other phenomena would be caused by the varied distances of each of the time points in an equilibrium. The time points of an electron, neutron, or proton would be at different distances to other sources of time dilation and therefore have different relative time. Though tied to one another by their relative motions, the individual time points would move at different rates when confronting separate sources of time dilation. These unequal movements would be identified as separate forces but are actually phenomena caused by time dilation.
Relative to a separate group of time points, three distinct movements can be characterized for any two time points in an equilibrium. The first is the movement towards one another. As the slope of the time dilation function steepens towards the vertical with decreasing distance, the difference in time dilation between the closer time point and the further one can be significant. Even at the distance δ the time points are still moving towards one another. As the two time points move extremely close to another source of time dilation, the further time point would move faster than the closer time point, causing the pair to move closer to the separate source.

Another movement would be the pair of time points in equilibrium moving past the separate mass of time points. As discussed, the equilibria of time points in a time dilation universe would be at different distances, and any pair that is not equidistant to the separate mass would have the closer time point moving past the mass at a slower rate than the further time point.
If we make the simplifying assumption that the separate mass, M, can be described as a collection of N time points whose combined effect is equivalent to their average distance to each of the time points contained in the equilibrium set, we can state the time dilation in terms of h_{ji}, the distance of time point j, within the equilibrium, from time point i, within the mass; h_j, the distance of time point j, within the equilibrium, from the center of the mass; and N, the number of time points within the mass.
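Stated compactly (a reconstruction of the elided relation from the symbols defined above; the author's exact equation is not recoverable here):

$$\bar h_j \;=\; \frac{1}{N}\sum_{i=1}^{N} h_{ji} \;\approx\; h_j,$$

so the mass M is treated as N time points acting at the common average distance h_j from time point j.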
At great distances the hyperbolic time dilation function takes on linear characteristics.At these distances, the differences in time dilation between the individual time points become small, therefore any phenomenon caused by the distant source of time dilation also becomes insignificant.
As the distances become smaller, though, the hyperbolic nature of time dilation dictates that the differences in relative time between the individual time points increase. Eventually these differences in relative time will become dramatic enough to alter the behavior of the set. Absent any separate source of time dilation, two time points in equilibrium would mimic the movement of a stick: as depicted in Figure 6, the two ends would travel together, moving in the same direction. If one end moves faster than the other, the two ends would circle around a mid position.
For any two time points locked in an equilibrium, the farther will always move faster than the nearer time point when they are close to a source of time dilation, when viewed in relative time. This results in the third movement, a pair circling a mid position. We can isolate this circular motion by defining V_cf, the circular motion of the far time point, and V_cn, that of the near time point. Given that the two time points must remain in equilibrium, but are at different distances to the source of time dilation, an additional movement is required to maintain this equilibrium: V'_cm, the movement of the overall pair in relative time. Examining V'_cm, we can determine that it will consistently be in the direction that the far time point is circling and on the side of the source of time dilation. This is illustrated in Figure 6, where the dynamics are similar to a lever. The distance traveled by the far time point will be larger in relative time than that of the near time point, causing the fulcrum of the two points to shift towards the mass. The new movements of these time points can be translated back to real time, causing the shifted fulcrum in relative time to become the mid position in real time. This happens while the time point circles half a revolution around the mid position. Upon completion of half a revolution, when both time points become equidistant to the mass, the roles reverse: the time point that was near becomes the far time point and the far becomes the near. This continues to shift the fulcrum towards the source of time dilation.
Along with this, the pair travels further in the direction that the far time point is circling and shorter in the direction of the near time point.This moves the overall pair in that direction in relative time.When this is translated back to real time this adds extra distance to the original vector causing the pair to increase their speed in that direction.Also, the faster the circular motion, the bigger this lateral motion will be.
As the two time points get closer and closer to a separate source of time dilation, the differences in the vectors increase.This causes the movement towards the mass to increase and the movement in the direction of the far time point to increase.This may be observed as a slow moving set because the overall time dilation is so high while the actual speed in real time is extreme.
As demonstrated, these movements are tied together. Each of the subatomic constructs in a world of time dilation would have multiple pairs of time points moving about and, with the exception of the moments when time points are equidistant, they will all combine to move the construct towards the other source of time dilation. Movement within the equilibrium therefore impacts the circular motion and lateral movement; circular motion also impacts lateral movement and changes internal movement; and lateral movement impacts circular motion and internal movement. Only by considering each movement can the total phenomenon really be understood.
Each of the equilibrium structures, the electron, neutron, and proton, would have this distinctive attractive phenomena while each would also have a distinctive repulsion phenomena based on the exponential slowing of time.These would be minor when viewed individually at large distances from a mass.When combined, however, the phenomena can be much more dramatic.
As a neutron (or electron) is drawn to a proton, its forward movement will slow while its lateral movement will increase dramatically. This means the neutron and proton, relative to a distant mass, will be circling very fast around a focal position when they reach their own δ distance, at which, for all purposes, they cannot get meaningfully closer. This puts the neutron and proton in their own equilibrium, forming a nucleus.
As seen in Figure 7, connecting the time points within the neutron with the time points within the proton produces 24 pairs of time points in equilibrium with one another.Their average distance will be the distance between the mid position of the proton and the mid position of the neutron.This average distance will be considerably further apart than the distances within any of the subatomic equilibriums.An electron moving towards a combined neutron and proton, will also result in additional pairings, extremely fast circling, along with greater equilibrium distances.
Figure 7.A neutron and proton would circle extremely fast around one another with 24 pairs of time points in equilibrium with a much greater distance between them resulting in significant changes in their fulcrum position towards a distant source of time dilation.This would appear as the unseen force we observe as gravity.
As the three components, proton, neutron, and electron, are drawn together in a universe of time dilation, they form the basic atomic structure that we are familiar with. Because the extreme circling speed caused by the lateral motion within the atomic structure is combined with the larger distances between time points in the atom and the multitude of rotating pairs, the resulting movement towards the mass would be most significant. This would be observed as gravity, but instead of being an unseen force, it is merely the phenomenon of circling subatomic pieces.

This also implies that gravity is created by the combining of subatomic parts, while individually they would show a much smaller movement towards a source of time dilation. In other words, a neutron will appear to have very little mass until it is circling a proton, causing a much greater attraction. This also indicates that gravity is a function of the number of time points, the time dilation coefficient, and the distance at which forward movement is essentially stopped. As the various structures move about in such a universe, they are themselves governed by the same properties of time dilation that brought them together. As a proton moves towards other protons, their momentums will be blocked by time dilation, fusing them together. Neutrons and electrons could then move towards this increased source of time dilation, forming different elements. In turn, these atomic structures would move towards other atomic structures, resulting in similar interactions governed by the varied time dilations of each of the component time points. As time dilates towards infinity, these atoms would become locked in the form associated with molecules, until some other source of time dilation were to impact them and change the trajectories of an individual atom.
Two Dimensional Electrons: Movement and Rotation
Electrons in a universe of time dilation would have strange and "funky" behavior similar to that in our own universe. This can be attributed to their being a two-dimensional equilibrium of time points in three-dimensional space. The movements of an electron relative to a distant mass, as shown in Figure 8, can be broken into three separate movements. The basic movement is the overall movement of the mid position of the equilibrium, V_m. Another can be described as the spin, V_s, which is the movement around the plane generated by the three time points. A final movement can be identified as the rotation of the plane, V_R, around the mid position relative to the distant mass. These vectors can be represented in terms of angles such as θ_sx, the angle ∠Amx from time point A through the mid position m to the x axis, and θ_Rx, the angle of the plane to the x axis, and so on for the other axes. As an electron moves amongst other constructs, it would interact with them based upon the changes in time dilation as time points approach one another and move away from each other. This interaction would change the directions of the electrons and of the atoms they are interacting with, causing them in turn to change directions. This increased movement could also cause atoms that are not associated with one another to travel towards each other, potentially locking one another together. As atoms move on vectors that take them through one another, time would dilate to infinity, keeping them bound to one another. Likewise, atoms that are currently locked because they are moving towards one another could have their directions changed, resulting in them unlocking from one another. This movement would be perceived as heat is perceived.
As the mid position of an electron moves, the rotation around the mid position relative to a separate mass would exhibit the characteristics of a wave.As the electron rotates, it flips end over end like a coin being tossed.From a distant mass, the distance between the closest time point to the furthest varies as the electron rotates.As the distance varies, so does relative time, causing the speed that each time point travels to be different.
The difference in relative time on the rotating electron and, to a lesser extent, the time dilation that the electron exerts on other constructs would increase and decrease in accordance with the angle of the rotation with respect to the direction the electron is traveling. This effect would result in a typical sine wave as the electron rotates half a rotation around its mid position. The distance that the electron's mid position travels as it makes this half rotation would be equivalent to its wavelength, resulting in Equation (4).

In Equation (4), λ is the wavelength and π/V'_R is the amount of time the electron takes to make half a revolution. As the plane of the electron is aligned with a mass, the closest time point will have the slowest movement while the furthest time point will have the fastest, and because of the exponential property of time dilation, the differences would be negative when viewed from a distant mass.
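Equation (4) did not survive extraction; from the definitions above it plausibly expresses the distance the mid position covers during half a revolution, i.e.

$$\lambda \;=\; \left|V'_m\right|\,\frac{\pi}{V'_R},$$

where V'_m is the speed of the electron's mid position in relative time; this is a reconstruction, not necessarily the author's exact form.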
Again, the fulcrum position would change towards the distant mass. This would cause the overall path of the electron to change in relative time towards the mass. This is the behavior of a gravitational lens. This phenomenon would also be exaggerated as the distance to the mass becomes increasingly small. As a rotating electron comes extremely close to a group of atoms in real time, the differences between the movement of the far time point and the near time point would substantially increase and, in relative time, the electron would bend towards the atoms. This describes diffraction.
Furthermore, the amount of bending would be a result of the orientation of the rotating electron as it passes.As in Figure 9, an electron passing as its rotation is aligned with the mass will have its course changed more than one that is at a 45 degree angle as it passes.An electron whose plane is parallel to the mass as it passes, would pass straight by this mass.Given an appropriate structure, electrons traveling with different wavelengths will pass the atoms composing the structure with different orientations resulting in different trajectories.This would be observed as refraction.
As described above, this shows an electron taking on the role of a photon, with the exception that it is not necessarily traveling at the speed of light. An electron dislodged while traveling around a nucleus can be used to explain how an electron can travel at such an extremely high speed.
As an electron comes close to a source of time dilation such as a nucleus, the closer time points will move slower than the further ones, causing the electron to spin. Furthermore, as in Figure 10, the plane of the spin would align with the nucleus. As the time points within the electron spin along a plane (s: spin), the motions can be described based on the direction they are moving relative to the nucleus (u: up and away, or d: down and towards) and the position of each time point relative to the mid position of the electron (n: near, or f: far). These motions can in turn be split into the movement in line with the nucleus and the tangential movement relative to the nucleus (l: in-line, or t: tangential).

Figure 9. The effect a time dilation has on a rotating electron oscillates as a wave as it travels. As an electron passes close to a mass, it would change direction based on its orientation, resulting in a refraction effect.

Figure 10. An electron circling a nucleus will rotate to a perpendicular position while its speed increases as it nears the nucleus. As a rotating electron comes into the proximity of another mass, it can be expelled at high speeds from the nucleus.
The tangential movement of a time point in any position, as it spins around its mid position in the up direction, will have an equal but opposite tangential movement in the down direction, so that V_stnd + V_stnu = 0. This leaves the up and down motions, combined with the near and far positions, to dominate. As ∂t'/∂h slows time points moving down and towards the nucleus more in the near position than in the far, and vice versa as they move up and away, the in-line components V_slnd, V_slnu, V_slfd, and V_slfu become unequal, the focal position F' will be off the plane, and the plane will rotate in the direction of the mass. There will be an unstable saddle equilibrium when the plane of the spin is on the tangent to the nucleus, but for any plane not on the tangent, the result will be an equilibrium aligned with the nucleus.
As the electron moves closer to the nucleus, the lateral movement will increase its speed. As described earlier, such lateral motion is a function of how close the electron is to the nucleus, with its motion increasing substantially as it gets closer. As the electron circles the nucleus, it can easily be expelled from an atom at this very high speed. Another set of time dilations, say in the form of another atom, could move into a position that blocks the electron's path. As this occurs, two of the three time points could have time dilate to where it virtually stops while the third continues to travel at this extremely fast speed. This speed would be determined by the speed at which the electron travels around the nucleus, while the positions of the time points as it is expelled would determine the rotation. This is a process that can transform an electron into a photon. As the electron circles around the nucleus, we can represent the motion as V'_m. As time points A and B move into positions where the time dilations both go to extremes, the remaining motion is carried by time point C. The maximum speed that the expelled electron, V'_x, can reach will be when V'_s, the spin, is stopped and becomes 0. If we assume that this maximum is the speed of a photon, then the speed of light, C, is defined by this maximum expelled speed. As time points A and B are stopped, time point C will rotate around the electron's mid position, changing the angles at which A and B are traveling. When this new angle allows A and B to move, the electron would roll off, controlling the rotation which, as previously shown, dictates the wavelength. This indicates that in a world composed of time dilation, the speed of light is determined by the time dilation coefficient, and therefore the time dilation coefficient is not determined by the speed of light. This also connects the speed of light with gravity, since both are the result of the time dilation coefficient and the distance at which the dilated time virtually stops.
Two Dimensional Electrons: Spin
As described above, the movement of an electron would also consist of a spin.This is the movement along the plane established by the three time points.In a universe of time dilation, the dynamics of spin also produces peculiar phenomena that explains other behaviors seen in our universe.
Another interesting characteristic of two dimensional electrons is that they can synchronize their spin.If electrons were to have their planes become parallel, then the spin which one electron has would influence the other's spin.As depicted in Figure 11, time points in each electron would speed up or slow down until they are positioned 60 degrees apart and reach a rate of spin that is the same.A spinning time point that is moving slower than a corresponding time point in the other electron would have its time dilation decrease causing it to speed up.The time points that are spinning faster would approach the corresponding time point in the slower spinning electron causing it to slow.This would continue until they reach the same rate of spin.Subsequently, other electrons whose planes are parallel would also synchronize their spin and a continuum could result with a large group of electrons spinning in synchronization.This would also allow electrons to become packed among one another.Given the electrons are contained in an overall structure, adding other electrons with a parallel plane would result in more synchronized electrons being suspended by one another with the distances between them reduced.As an electron is added, the resulting β of the spin of the nearby electron would move that electron further away.This would bring it closer to the next spinning electron moving it until the distances between all the spinning electrons become the same.More and more electrons could be included within the structure further compacting the electrons.
If the containing medium allows, this process could expand out by taking control of the spin of adjacent electrons and equalizing their distances within the containing medium and the adjacent medium.The adjacent medium thereby becomes a conductor of the spin.
This is what would be observed as electricity in a world of time dilation.As the spin is transferred from the containing medium through a conductor, this spin would be decreased by each slower spinning electron that it encounters.As a static charge is discharged into a ground, the slower spinning electrons in the ground will spin faster while the spin in the static charge will decrease.This happens until there is no longer a definitive spin left in the static charge.
Electrons that are not part of an atom will move about a time dilation universe based on their interactions with other time dilators.This would occur all the time with some electrons moving out of a region and others moving into a region.This movement would not produce the phenomenon that we identify as electricity.The spin is therefore a vital characteristic of electricity in a universe of time dilation.
If electricity were indeed just the movement of electrons, a direct current battery with both terminals connected to a ground would produce a flow from the terminal with excess electrons to the ground and then from the ground to the terminal with an electron deficit. As in our own universe, this does not happen. A conductor that allows the electrons to synchronize their spin and return the spin to the battery, spinning with the same orientation as it had when it left, would complete the required circuit. A battery with terminals connected to a ground would not be able to complete such a circuit and return such a spin back to the battery. This is consistent with what we observe with a direct current.
Furthermore, this compacted spinning collection of electrons would create changes in time dilation which would get amplified by the number of electrons spinning in a linear direction.If the electrons were to spin within a conductor such as a straight wire, they would be arranged in a line.As this occurs they would cause time to dilate more along that line.
The contour of equal time dilation, K, would extend out further due to the exponential characteristic of time dilation. If we start at a position directly above the mid position of our line of electrons, the distances from each time point would be matched by a corresponding time point on the other side of the mid position, resulting in a given time dilation. Moving to a side brings you closer to the time points on that side and further away from the time points on the other side. Because the time dilation function is exponential, the additional time dilation caused by the closer side is larger than the reduced time dilation caused by the further side. A greater distance is therefore required to return to the same time dilation.
This additional time dilation along the direction of the line could be defined as the difference between the two ellipses and would depend on the number of electrons and how tightly they are packed. Whereas in gravity objects move towards the increased source of time dilation, here objects move towards this magnified line of time dilation.
This would be perceived as magnetism is perceived in our universe.If we increase the number of lines of electrons, a recognizable pattern emerges.As one moves past the tip of one line towards the second line, the time dilation decreases causing the line of equal time dilation to curve back towards the center of the two lines of electrons.This results in the pattern typically associated with magnetism.
If two separate lines of spinning electrons were placed with their tips near one another, the results would be dependent on the direction of the spin.One could label each tip based on the direction of the spin.As in Figure 13, matching spins would result in opposite tips being placed near one another.As the spin of one line adjusts to match the other, the tangential movement of each spinning time point would move each line towards one another.As in our universe, a universe of time dilations would have opposites attract.
If the spins are opposite one another, the tips coming near are of like labeling.As the lines of electrons spin opposite to one another, the tangential time dilation would combine causing each of the spinning electrons to have a β that moves away from the other line.This would cause the two objects containing the spinning electrons to move apart or be repelled.This would occur until the aligned electrons are no longer in a line, the one set of electrons stops spinning, or the two lines of electrons move far enough apart that the total time dilation is no longer big enough to offset the time dilations from other sources.
Altogether, this explains the phenomena which we see as electricity and magnetism. In a universe composed of time dilation, electricity is caused by the spinning and compaction of electrons, while magnetism is the additional time dilation that spinning electrons cause when packed into straight lines. Electricity may require electrons to move to complete a path, but the true phenomenon is based not solely on such a flow of electrons but on the compaction and spin of those electrons. The spin would indicate the speed and the polarity of the electricity, in other words the voltage, while the compaction would determine the magnitude of the electricity, or amperage.
Furthermore, in a time dilation universe, it is easy to conceive of scenarios that allow for the transition of one state of an electron to another.Light can interact with collections of time points transforming the electron's rotation to a spin resulting in light transforming into electricity.Likewise, electrons traveling as waves such as light, or spinning as electricity can transition to heat by agitating electrons and atoms causing them to move about at greater and greater rates.Heat and electricity can cause movements that will result in the release of electrons from atoms at high speed resulting in light.These transitions are the same as those transfers of energy that we observe in our universe.
Causality
The approach to causality taken in much contemporary research is deeply flawed. When two events are correlated, that is, they tend to happen together, researchers will often conclude that the one causes the other. When this is done and the other possible explanations of the correlation are ignored, the research is worthless. Not only are such conclusions of little value, but they can often be drawn to get a desired response and mask a less than truthful agenda.
There are several potential explanations when things happen together.The first is that they may be just a coincidence.This can be further exaggerated when the occurrence of the two events leads to their detection.The spilling of gasoline and the lighting of a match may be separate random events and may go unnoticed.However, when they happen together, they are quite noticeable.Using only this observed correlation, one could incorrectly conclude that spilling gasoline causes the match to light.When observing such a correlation we must consider the potential that there is no causal relationship between the two events.
The second flaw is that of the direction of the causality.When confronted with a strong correlation between event A and event B, it is a mistake to conclude that A causes B. We must examine the other possibilities including that B causes A. A university recently touted the correlation between good grades and taking a full 15 credits of classes and, with their obvious agenda, indicated that students should take 15 credits so they will also get good grades.They neglected to examine the possibility that students that are struggling and getting bad grades are more likely to cut back on the number of classes they are taking.In this case, it is more logical to expect poor grades to lead to a lighter course load.Here, they incorrectly conclude that A causes B, when the more logical conclusion is that B causes A.
A third possibility is that both A and B are caused by a third event C. When a game of billiards is started one may observe the eight ball moving at the same time as the fifteen ball.They are correlated.It is incorrect, however, to conclude that the eight ball causes the fifteen ball to move or that the fifteen causes the eight to move.Instead, the true causality is that the cue ball is causing both to move.Looking for such a third party is often overlooked in poor research and the research ends up making an incorrect conclusion.
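The third-cause scenario is easy to make concrete with a toy simulation (a hypothetical illustration, not part of the original argument): a hidden variable C drives both A and B, and A and B come out strongly correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden common cause (the "cue ball" in the billiards example).
C = rng.normal(size=n)

# A and B are each driven by C plus independent noise;
# neither has any direct effect on the other.
A = 2.0 * C + rng.normal(size=n)
B = -1.5 * C + rng.normal(size=n)

corr = np.corrcoef(A, B)[0, 1]
print(f"corr(A, B) = {corr:.2f}")  # strongly negative, despite no A->B or B->A link
```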
To complicate things further, the events may not be causal at all.A could be a catalyst that allows B to cause C. Leaving the corral gate open does not cause the cows to leave but merely allows it to happen.The cows could be startled by a separate event and with the open gate, they would be able to leave.
Contemporary thoughts in Physics have failed in this area, too.The focus has been that moving objects cause time to dilate.Time dilation has been associated with movement and it has been concluded that this movement causes the time dilation.Other possibilities have failed to be examined even though there is not a good explanation as to how movement causes this time dilation.In a universe made up of time dilating points, movement does not cause time to dilate relative to other objects, but time dilation must occur for objects to move.
In a time dilation world, two bodies that exist near one another will continue to exist near one another until the relative time of one becomes different from that of the other. To use the twin example [2], for the one twin to move away from the other, his relative time must change, either by being pulled as in gravity or compressed as in being pushed. If relative time were to remain the same for both twins, they would remain together. To make the one twin be propelled away at the speed of light, his relative time must be made to come close to stopping. Therefore propulsion in such a universe is fundamentally a method of changing relative time in order for objects to move apart. In such a universe, the twin that feels a G force upon takeoff would actually be feeling the time points in his body becoming closer, thus dilating time.
Time Dilation Coefficient
Until now, I have purposely neglected to further define the time dilation coefficient, T. Contemporary views place this coefficient at the speed of light squared. If this is the case, then there is width to a time point, in that the time dilation would approach infinity as h approaches 1/C². This distance would also present itself in the wave pattern of light; however, it may be too small to be measured in either situation. I have already shown that in a universe made up of time dilating points, the speed of light is determined by T and δ, the distance at which time essentially stops, but there is no obvious tie to the square root of T.
Other possibilities do exist though.T could be infinity resulting in the time point truly being a point.Here, the time dilation would approach infinity as h approaches 0. Under these circumstances the speed of light squared would just be a really good approximation for infinity.The differences between the speed of light squared and infinity may be hard to distinguish in the common occurrences in a universe.
Yet another intriguing possibility is that T is not a constant at all.It could be a value that has been expanding since time began.Under this possibility, the constant would be the beginning of a universe, "The Big Bang" [3], and T could be a function of the total amount of time that has accumulated.Under this scenario, T would be forever approaching infinity and perhaps the speed of light squared is an approximation of the amount of time that has passed since the beginning.
It is possible that there are no constants in the world and it is only human nature that tries to identify a constant when we are confronted with a result.Other values of T may also be valid but any large number would drive the described relationships within a universe of time dilation in a similar fashion.
Conclusion
In that a universe made up of only time dilation behaves identically to ours, one can conclude that our universe is actually made up of only time dilation. Movable points which dilate the time of other movable points, time points, make up the basis for our universe. These time points come together to form the various subatomic and atomic components of our world. These structures form equilibriums that in turn move about, with each structure's individual time points moving together but under different relative time. Relationships that are thought to be forces are actually phenomena associated with the movement and interactions of these time points. Gravity, atomic repulsion, and magnetism are specific behaviors of moving time points. Light and electricity are specific behaviors of electrons and their two-dimensional structure. In all, it appears that we are in a universe made up of relative time. Time is not only relative, but relative time is everything.
Figure 1. The dilation of time is asymptotic with its reciprocal approaching infinity as distance decreases. A constant movement in real time results in slower movement in relative time.
Figure 2. As time points approach one another, a set trajectory in real time would be seen as a changing trajectory in relative time and would present itself as a rotation to external elements.
Figure 3. As time point C, equidistant to A and B, moves into an equilateral position, an equilibrium is reached, locking the three time points together.
Figure 4. A second progressive geometry is obtained when a fourth time point moves into an equilateral position, forming an equilibrium among the four time points.
Figure 5. As two triangles as in Figure 3 come together, a new equilibrium is obtained, creating the third geometric set of time points.
Figure 6. A pair of time points in equilibrium would travel together in real time, but in relative time they will circle, shifting their fulcrum in the direction of the circle of the far time point and towards the slower spinning time point or the distant mass.
Figure 8. A two-dimensional electron would have three major motions: movement of the mid position m, spin around its plane, s, and rotation of the plane, R.
Figure 11. As two electrons spin in the same direction on parallel planes, the slower spinner would speed up while the faster spinner would slow until the two were synchronized. A large group of spinning electrons could become packed.
Figure 12. As electrons align and become packed in a straight line, time is dilated more in the direction of the line, leading to an attraction that is dependent on the number of electrons. Multiple lines would result in the pattern typically associated with magnetism.
Figure 13. The direction of the electrons' spin would define the polarity of the magnetic field and therefore the positive/negative aspects of the electricity. | 10,946.6 | 2015-05-06T00:00:00.000 | [
"Physics",
"Geology"
] |
Series Solution for Steady Heat Transfer in a Heat-Generating Fin with Convection and Radiation
The steady heat transfer in a heat-generating fin with simultaneous surface convection and radiation is studied analytically using the optimal homotopy asymptotic method (OHAM). The steady response of the fin depends on the convection-conduction parameter, radiation-conduction parameter, heat generation parameter, and dimensionless sink temperature. The heat transfer problem is modeled as a two-point boundary value problem. The results for the dimensionless temperature profile for different values of the convection-conduction, radiation-conduction, heat generation, and sink temperature parameters are presented graphically and in tabular form. A comparison of the OHAM solution with the homotopy analysis method (HAM) and the Runge-Kutta-Fehlberg fourth-fifth-order numerical method (NM) for various values of the controlling parameters is presented. The comparison shows that the OHAM results are in excellent agreement with the NM.
Introduction
Fins (extended surfaces) are widely used to enhance the heat transfer rate between a hot surface and its surrounding fluid. Fin applications have included the cooling of computer processors, air conditioning units, refrigerators, air-cooled engines, and oil-carrying pipelines. In the past three decades, fins have gained wide recognition as heat sinks for cooling electronic devices. The subject of extended surface heat transfer is now a fully developed technology, but with continuing contributions from numerous researchers. Background information on heat transfer in extended surfaces may be found in the books [1,2], where the authors have presented wide-ranging coverage of the various facets of this technology.
Numerous mathematical models related to heat transfer in fins of various shapes with different boundary conditions are well documented in the research literature.For instance, the mathematical analysis of convective fins was first provided by Gardener [3] based on the assumption of constant conductivity and a uniform coefficient of convective heat transfer along the fin surface.Khani et al. [4] presented some exact solutions for 1D fin problem with uniform thermal conductivity and heat transfer coefficient.Khani et al. [5] also provided a series solution for 1D fin problem with constant heat transfer coefficient and temperature dependent thermal conductivity.
A variety of approximate analytical methods have been used to study the transient response of fins.Aziz and Na [6] presented a coordinate perturbation expansion for the response of an infinitely long fin due to a step change in the base temperature.Chang et al. [7] used the methods of optimal linearization and variational embedding, and Campo [8] utilized variational techniques to analyze radiative-convective fins under unsteady operating conditions.Solutions for transient heat transfer were constructed for fins by Onur [9].Aziz and Torabi [10] have presented the numerical analysis of transient heat transfer in fin with temperature dependent heat transfer coefficient.
In this paper, we use a recent approximate method, namely the optimal homotopy asymptotic method [21][22][23][24][25][26][27], for steady-state heat transfer in a fin with internal heat generation, and investigate numerically the effects of the different governing parameters on the dimensionless temperature profile in a nonlinear fin-type problem. For comparison purposes, the governing highly nonlinear problem is also solved using the Runge-Kutta-Fehlberg fourth-fifth-order method and the homotopy analysis method (HAM) developed by Liao [28].
The paper is planned as follows: in Section 2 we formulate our nonlinear problem, basic principles of OHAM are discussed in Section 3, solution of the problem via OHAM is presented in Section 4, and Section 5 is reserved for results and discussion.Conclusions are drawn in Section 6.
Mathematical Formulation
Consider a straight fin of constant cross-sectional area (rectangular, cylindrical, elliptic, etc.), perimeter of the cross-section, and length, as shown in Figure 1. The fin has a thermal conductivity and a thermal diffusivity. The surface of the fin behaves as a gray diffuse surface with an emissivity. The fin is assumed to be initially in thermal equilibrium with the surroundings. Its tip is insulated. A volumetric internal heat generation rate q occurs in the fin. The fin loses heat by simultaneous convection and radiation to its surroundings at the sink temperature. The same sink temperature is used for both convection and radiation to avoid the introduction of an additional parameter in the problem.
For one-dimensional steady conduction in the fin, the energy equation may be written as The initial and boundary conditions are where is measured from the tip of the fin with the introduction of the following definitions: Equations ( 2) and ( 3) can be written in dimensionless form as follows: The instantaneous base heat flow is given by: which may be expressed in dimensionless form as follows.
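The governing equations referred to here were lost in extraction. As a hedged reconstruction, a standard steady energy balance for a straight fin with internal heat generation, surface convection, and radiation to a common sink, together with its dimensionless form, would read as follows; the symbols (N_c, N_r, G, θ_s, etc.) are conventional choices matching the parameters named in the abstract, not necessarily the authors' own.

```latex
% Assumed standard forms; notation is illustrative, not the authors' own.
\[
  kA_c\,\frac{d^{2}T}{dx^{2}}
  \;-\; hP\,(T - T_s)
  \;-\; \varepsilon\sigma P\,\bigl(T^{4} - T_s^{4}\bigr)
  \;+\; qA_c \;=\; 0 ,
  \qquad
  \left.\frac{dT}{dx}\right|_{x=0}=0,\quad T(L)=T_b ,
\]
\[
  \frac{d^{2}\theta}{dX^{2}}
  \;-\; N_c\,(\theta-\theta_s)
  \;-\; N_r\,\bigl(\theta^{4}-\theta_s^{4}\bigr)
  \;+\; G \;=\; 0 ,
  \qquad
  \theta'(0)=0,\quad \theta(1)=1 ,
\]
% theta = T/T_b, X = x/L; N_c and N_r are the convection-conduction and
% radiation-conduction parameters, G the heat-generation parameter and
% theta_s the dimensionless sink temperature.
```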
The instantaneous convective heat loss from the fin is given by or in dimensionless form as Similarly, the instantaneous radiative heat loss from the fin can be obtained as or in dimensionless form as The instantaneous total surface heat loss in dimensionless form is the sum of convective and radiative losses given by ( 11) and (13); that is, The instantaneous rate of energy storage in the fin can be calculated from the energy balance as follows: or in dimensionless form as where
(i) Let us consider the following differential equation, where Ω is the problem domain, the linear and nonlinear operators act on an unknown function, and the remaining term is a known function. (ii) Construct an optimal homotopy equation, in which an embedding parameter between 0 and 1 appears together with an auxiliary function on which the convergence of the solution greatly depends; the auxiliary function also adjusts the convergence domain and controls the convergence region. (iii) Expand the homotopy solution in a Taylor series about the embedding parameter to obtain an approximate solution. Many researchers have observed that the convergence of the series equation (19) depends upon the auxiliary constants; if it is convergent, then we obtain the series solution. (iv) Substituting (20) in (17), we obtain the residual (21). If the residual vanishes identically, then the approximate solution will be the exact solution.
For nonlinear problems, generally this will not be the case. For determining the auxiliary constants, Galerkin's method, the Ritz method, or the method of least squares can be used. (v) Finally, substitute these constants in (21), and one can get the approximate solution.
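The displayed equations of this outline did not survive extraction. As a hedged restatement in the standard OHAM notation (which need not match the authors' symbols), steps (i)-(v) amount to the following.

```latex
% Standard OHAM construction (illustrative notation).
\[
  L\bigl(v(x)\bigr) + g(x) + N\bigl(v(x)\bigr) = 0, \qquad x \in \Omega ,
\]
\[
  (1-p)\Bigl[L\bigl(\phi(x;p)\bigr)+g(x)\Bigr]
  \;=\;
  H(p)\Bigl[L\bigl(\phi(x;p)\bigr)+g(x)+N\bigl(\phi(x;p)\bigr)\Bigr],
  \qquad H(p)=\sum_{k\ge 1} C_k\,p^{k},
\]
\[
  \phi(x;p,C_k)=v_0(x)+\sum_{k\ge 1} v_k(x;C_1,\dots,C_k)\,p^{k},
  \qquad
  \tilde v(x;C_k)=v_0(x)+\sum_{k=1}^{m} v_k(x;C_1,\dots,C_k),
\]
\[
  R(x;C_k)=L(\tilde v)+g+N(\tilde v), \qquad
  J(C_1,\dots,C_m)=\int_\Omega R^{2}\,dx \;\to\; \min .
\]
```

Minimizing J over the constants C_k is what distinguishes OHAM from HAM, where a single auxiliary parameter is instead tuned by inspecting h-curves.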
OHAM Solution for Heat-Generating Fin
According to the OHAM, (1) can be written as where prime denotes differentiation with respect to .
We consider and () as follows: Using ( 23) in (22) and after some simplifying and rearranging the terms based on the powers of , we obtain the zeroth-, first-, and second-order problems as follows.
The zeroth-order problem is with boundary conditions Its solution is The first-order problem is with boundary conditions having solution The second-order problem is with boundary conditions It is given by The second-order approximate solution by OHAM for = 1 is We use the method of least squares to obtain 1 , 2 the unknown convergent constant in θ.
By considering the values of 1 , 2 in (33) and after simplifying, the second-order approximate analytical OHAM solution can be obtained (34)
Results and Discussion
Equation (4) shows that fin temperature is based on four parameters: , , , and which govern this highly nonlinear second-order differential equation.The effect of each parameter on fin temperature is tabulated and graphically presented for different values of the controlling parameters.
In order to validate the accuracy of our approximate solution via OHAM, we have presented a comparative study of the OHAM solution with the homotopy analysis method (HAM) and the numerical solution (Runge-Kutta-Fehlberg fourth-fifth-order method). Table 1 has been prepared to exhibit the comparison of the dimensionless temperature obtained by OHAM, HAM, and the numerical method (NM) for several values of the heat-generating parameter, when the other parameters are fixed. It is observed that, with increasing values of the internal heat-generating parameter, the temperature profile gradually increases. Clearly the OHAM solutions are closer to the numerical solution than those of HAM. This can be seen from the percentage error in the dimensionless temperature obtained by OHAM, HAM, and NM. The increase in dimensionless temperature is also evident in Table 2, in which we have used different values of the sink temperature parameter, with the other parameter values predetermined. From Tables 1 and 2, it is observed that our OHAM solutions are more accurate than those of HAM, which confirms that OHAM is the more consistent of the two approximate analytical methods here. A major factor in HAM is the computational time spent finding the h curve, whereas in OHAM the convergence of the solution is ensured by auxiliary constants that are optimally determined; as a result, HAM is more time consuming than OHAM.
In Table 3, we show the comparison of the dimensionless temperature obtained by OHAM and the numerical method (NM) for several values of the convection parameter, while the other parameters are kept unchanged. It is observed that, with increasing convection parameter, the temperature profile decreases, and the same decrease in dimensionless temperature can be observed in Table 4 for different values of the radiation parameter, when the other parameter values are fixed. In Figures 2, 3, 4, and 5 we depict the dimensionless temperature profile and its variation for different values of the parameters. It is important to note that the dimensionless temperature increases with the heat generation and sink temperature parameters and decreases with the convection and radiation parameters.
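As a reproducibility aid, the sketch below solves the dimensionless two-point boundary value problem assumed above with SciPy's collocation BVP solver. This is an independent numerical check rather than the paper's Runge-Kutta-Fehlberg implementation, and the parameter values are illustrative, not those of Tables 1-4.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameter values (not taken from the paper's tables).
Nc, Nr, G, theta_s = 1.0, 1.0, 0.2, 0.3

def rhs(x, y):
    # y[0] = theta, y[1] = d(theta)/dX
    theta, dtheta = y
    return np.vstack([dtheta,
                      Nc * (theta - theta_s)
                      + Nr * (theta**4 - theta_s**4)
                      - G])

def bc(ya, yb):
    # Insulated tip: theta'(0) = 0; prescribed base temperature: theta(1) = 1.
    return np.array([ya[1], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 41)
y0 = np.vstack([np.ones_like(x), np.zeros_like(x)])  # simple initial guess

sol = solve_bvp(rhs, bc, x, y0, tol=1e-8)
print("converged:", sol.status == 0)
print("tip temperature theta(0) =", float(sol.sol(0.0)[0]))
```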
Conclusion
We have successfully applied the optimal homotopy asymptotic method for the approximate solution of steady state of heat-generating fin with simultaneous surfaces convection and radiation.The effects of radiation parameter , convection parameter , internal heat-generating parameter , and the sink temperature parameter on temperature profile in the fin are investigated analytically.It is observed that dimensionless fin temperature profile is dependent on the four parameters , , , and .Comparison for the dimensionless temperature has been made between the
Figure 1: A straight fin with constant cross-sectional area.
Figure 2: Effect of internal heat generation on fin dimensionless temperature for fixed values of the remaining parameters.
Figure 4: Effect of radiation parameter on fin dimensionless temperature for fixed values of the remaining parameters. | 2,304.4 | 2013-09-19T00:00:00.000 | [
"Engineering",
"Physics"
] |
Near-Field Engineering in RIS-Aided Links: Beamfocusing Analytical Performance Assessment
Reconfigurable intelligent surfaces (RISs) are typically utilized in the far-field as an effective means of creating virtual line-of-sight (LOS) links to mediate non-LOS propagation in wireless communications via beamforming. Owing to their large surface and the multitude of scatterers, the use of RISs can be extended in the near-field, to transform the incident beam into a focused beam that is able to address the challenges of high frequencies more efficiently than conventional beamforming. In this paper we explain from a physics’ standpoint how the RIS can engineer wavefronts to transform the incident beam into a focused beam targeted at the user, and we employ the angular spectrum representation approach to describe analytically the dynamics of beamfocusing. We derive analytical expressions that provide the necessary insight into the dependencies and trade-offs between crucial parameters, such as the incident beam’s footprint on the RIS, the intended focal distance of the reflected beam, and the link topology. To assess the beamfocusing efficiency we provide metrics that are crucial for future applications, including energy efficient communications, wireless power transfer, tracking and localization.
I. INTRODUCTION
One of the most common functionalities of reconfigurable intelligent surfaces (RISs) regards redirecting an incident beam towards any desired direction, beyond specular reflection, essentially creating virtual line-of-sight (LOS) links to mediate non-LOS propagation.Owing to this unique feature, the RIS has been proposed as a means to bypass blockage, especially in high frequency communications, such as the terahertz (THz) band [1], [2], [3], [4], [5], [6], [7], [8], [9], [10].As wireless communications are nowadays shifting to higher frequencies with the aim to meet the perpetual demand for increased bandwidth, it gradually becomes clear that, to achieve and maintain high quality of service, future networks are envisioned to be equipped with functionalities beyond conventional beamforming.
A direct consequence of frequency upscaling is that, for a given aperture, a radiating element becomes electrically large, and the transition from the near- to the far-field moves to larger distances. Objects and users that at GHz frequencies were located in the far-field of the antennas are now found within the near-field of large-scale antennas and RISs operating in the THz band, for example. The availability of electrically large surfaces opens up new opportunities for manipulating the wavefront of the radiated wave, to acquire curvature beyond the typical far-field planar form [11], [12], [13]. For example, by shaping the curvature of a beam into spherical wavefronts, beams that would typically diffract can now counteract spreading, and focus. While the RIS has so far been extensively studied within the context of beamforming, the possibility for beamfocusing has only recently been addressed [14], [15], [16], [17], [18], [19], [20], [21], [22], [23]. With beamfocusing, the incident power can be concentrated at controllable distances from the RIS, towards any desired direction. Therefore, beamfocusing is ideal for dramatically increasing the received power in small areas, a key element for energy efficient communications [17], [24], [25], [26], [27] and localization applications [28], [29], [30], [31].
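To make the point about the growing near-field concrete, one can evaluate the conventional Fraunhofer criterion d_F = 2D²/λ for a fixed aperture; the 0.5 m aperture below is an arbitrary illustrative value, not one taken from the paper.

```python
# Far-field (Fraunhofer) distance d_F = 2*D^2/lambda for a fixed aperture D.
c = 299_792_458.0          # speed of light in m/s
D = 0.5                    # aperture size in m (illustrative)

for f_GHz in (3, 30, 150, 300):
    lam = c / (f_GHz * 1e9)
    d_F = 2 * D**2 / lam
    print(f"{f_GHz:>4} GHz: far-field starts at ~{d_F:,.0f} m")
```

At 3 GHz the far-field of this aperture begins a few metres away, while at 150 GHz it begins roughly 250 m away, so typical indoor users sit firmly in the near-field.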
To exploit the full potential of beamfocusing, it is fundamental to understand the requirements for efficient and controllable formation of focal areas.The beamfocusing efficiency will depend on crucial parameters, such as the positioning of the RIS with respect to the transmitter and receiver, the properties of the beam footprint on the RIS, the reflection angle and the desired focal distance [32].Therefore, there is a need for analytical models that can clarify the bounds imposed by the system design parameters on the link performance.
In this work, the RIS beamfocusing capabilities are studied in terms of the received power.The main contributions are summarized as follows.
• An exact analytical model for beamfocusing at oblique angles is derived.The model provides the spatial distribution of the focused beam and the power delivered to the user at any location.
• The derivation of the analytical model is based on explicit electromagnetic modeling, by means of the angular spectrum representation approach, which captures the full-wave propagation characteristics of the RIS-reflected beam, as predicted by Maxwell's equations.
• The analytical model provides insight into the interplay between crucial parameters, such as the positioning of the RIS with respect to the transmitter and receiver, and the properties of the beam footprint on the RIS.
• Metrics to assess the beamfocusing efficiency are introduced.
• The approach followed in this work provides simple guidelines for algorithm design and performance optimization.
II. SYSTEM MODEL
Let us consider a RIS located at the origin of the coordinate system shown in Fig. 1.A beam illuminates the RIS, and is subsequently reflected by the RIS towards a desired direction.
The RIS reshapes the incident wavefront so that the reflected beam is focused at a desired focal distance. The directions of incidence and reflection are defined by the wavevectors k i and k r , respectively, which are expressed with respect to the elevation (θ) and azimuth (ϕ) angles of incidence (subscript i) and reflection (subscript r), as shown in Fig. 1(a). In this work we consider beam steering on the xz-plane and, therefore, we may neglect the angle ϕ. The incident beam is generated by an access point (AP), the location of which is characterized by the distance d AP from the RIS center and the angle θ AP ≡ θ i , as shown in Fig. 1(b). The location of the user equipment (UE) is characterized by the distance d UE from the RIS center and the angle θ UE . As shown in Fig. 1(c), the reflected beam may point towards any direction and, hence, to maximize the power delivery to the UE, it is necessary to bring the focal point to the UE location. This requires that the reflected beam is directed towards the UE, i.e. θ r = θ UE , and that the focal distance is equal to the RIS-UE distance, i.e. f 0 = d UE .
Given a certain AP-RIS-UE topology, we would like to know how the incident beam is redistributed by the RIS upon reflection, and what is the power delivered to the UE from the reflected beam, which is focused at a desired location.To this end, we need to calculate the field reflected from the RIS at any desired observation point (x, y, z).Here, we will follow the angular spectrum representation approach adopted in [18], according to which the reflected field can be calculated anywhere in the semi-infinite space z > 0, using only (a) the footprint of the incident field on the RIS and (b) the phase introduced by the RIS.
A. FOOTPRINT OF INCIDENT BEAM ON THE RIS
The footprint of the incident beam on the RIS depends on the properties of the AP antenna. For most practical cases the main lobe of the incident beam can be modeled by a Gaussian distribution [9], [18], [33], [34]. Without loss of generality we consider a y-polarized beam propagating on the xz-plane, the E-field of which at z = 0 is written as
E i (x, y) = E 0 exp[−(x 2 cos 2 θ i + y 2 )/w 2 RIS ] exp(−jk sin θ i x) ŷ, (1)
where E 0 is a complex constant, w RIS the radius of the footprint at normal incidence (θ i = 0), k = 2π/λ is the free-space wavenumber and λ the wavelength. Note that this model takes into account the fact that under oblique incidence (θ i ≠ 0) the footprint acquires an elliptical shape, with the major axis of the ellipse residing on the x-axis [33].
B. RIS REFLECTION COEFFICIENT
To perform beamfocusing it is necessary to transform the incident beam into spherical wavefronts that converge at the desired location, characterized by the focal distance f 0 and the angle θ r , as schematically shown in Fig. 1(c).The role of the RIS is to introduce the necessary phase φ(x, y), in order to reshape the incident wavefront into the desired form, i.e. to add φ(x, y) to the phase of the incident wavefront.For polarization-preserving RIS, the footprint of the reflected beam on the RIS can be written as where (x, y) = |R| exp(−jφ(x, y)) is the RIS reflection coefficient and E i (x, y) the incident field at the RIS plane, i.e. the incident beam footprint.R is a complex constant that accounts for possible loss upon reflection (|R| ≤ 1) and φ(x, y) is the phase introduced by the RIS.
For conventional beam steering, the RIS corrects the phase tilt of the incident wave, which has the form exp(−jk sin θ i x) (see (1)), by adding a phase with the opposite slope, and introduces a linear phase exp(−jk sin θ r x), to form a wavefront that directs the wave towards the desired direction. Hence, for beam steering on the xz-plane, the RIS introduces the phase (3) (for steering out of plane, see e.g. [9]). For beamfocusing, after correcting the incident phase, the RIS must provide a phase of the form exp[−jk √((f 0 cos θ r ) 2 + (x − f 0 sin θ r ) 2 + y 2 )], and the phase introduced by the RIS is expressed as (4). Note that, in the limit f 0 → ∞, (4) approaches (3) (we may omit the global phase), i.e. it corresponds to the phase for conventional beam steering.
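A minimal numerical sketch of the phase profile described in this subsection is given below. The explicit expression and its sign conventions are our reading of (4), chosen to be consistent with the θ_r = 0 reduction quoted in Section III; the wavelength and angles are illustrative values.

```python
import numpy as np

# Assumed explicit form of the focusing phase; the signs follow the
# theta_r = 0 reduction quoted in the text
# (phi = -k sin(theta_i) x + k f0 + k (x^2 + y^2)/(2 f0)),
# so this is our reading of (4), not a verbatim copy.
def ris_phase(x, y, k, theta_i, theta_r, f0):
    return (-k * np.sin(theta_i) * x
            + k * np.sqrt((f0 * np.cos(theta_r))**2
                          + (x - f0 * np.sin(theta_r))**2
                          + y**2))

lam = 0.002                       # 2 mm wavelength (roughly 150 GHz)
k = 2 * np.pi / lam
x = np.linspace(-0.3, 0.3, 601)   # 60 cm cut across the RIS, y = 0
theta_i, theta_r = np.deg2rad(30), np.deg2rad(60)

phi_focus = ris_phase(x, 0.0, k, theta_i, theta_r, f0=4.0)   # focused at 4 m
phi_steer = ris_phase(x, 0.0, k, theta_i, theta_r, f0=1e6)   # f0 -> infinity limit

# In the large-f0 limit the profile becomes linear in x (pure beam steering).
slope = np.polyfit(x, phi_steer, 1)[0]
print(f"steering-limit slope = {slope:.1f} rad/m, "
      f"expected ~ {-k * (np.sin(theta_i) + np.sin(theta_r)):.1f} rad/m")
```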
III. ANGULAR SPECTRUM REPRESENTATION APPROACH
Let us consider the field reflected from the RIS at any observation point (x, y, z), E r (r) = E r (x, y, z).In this notation, the reflected field right before departing from the RIS, i.e. on the xy-plane where the RIS surface resides, is E r (x, y, 0) ≡ E RIS (x, y).Its two-dimensional Fourier transform is y)e −j(k x x+k y y) dxdy, (5) where x, y are the Cartesian transverse coordinates and k x , k y the corresponding spatial frequencies.Similarly, the inverse Fourier transform reads Note that the field E RIS and its Fourier transform ÊRIS represent vectors and, hence, the Fourier integrals hold separately for each vector component.The field has to satisfy Maxwell's equations, which for free-space propagation reduce to the vector Helmholtz equation (∇ 2 + k 2 )E(r) = 0. Expressing similarly the reflected field E r (x, y, z) via its Fourier transform and inserting it into the Helmholtz equation, we find that the Fourier spectrum Êr of the reflected field evolves as where After performing the inverse Fourier transform of ( 7) we find for arbitrary z which is known as the angular spectrum representation [35], [36].The result of ( 9) states that the field at any z > 0 is determined entirely by its Fourier spectrum at z = 0. Hence, to calculate the reflected field, only knowledge of E RIS (x, y) is required, which is given by (2), using (1) and ( 4).Then, the reflected field at any observation point (x, y, z) is calculated using (5), the Fourier transform of E RIS (x, y), and inserting the result into (9).In (9), while the integration is performed in the entire R 2 domain, components with imaginary k z correspond to evanescent waves, which do not propagate.Therefore, we may reduce the domain of integration within the range k 2 x + k 2 y < k 2 .This technique provides the reflected power density distribution for any illumination conditions, from full to partial RIS illumination [33].Full illumination refers to the case where the incident beam footprint extends beyond the size of the RIS.In this case the integration domain in ( 5) is limited within the area of the RIS.Partial illumination refers to the case where the incident beam footprint is smaller than the RIS [9], [18], [34].Hence, we may neglect the RIS boundary and perform the integration of (5) in the entire R 2 domain; the RIS size becomes effectively infinite.Importantly, in this case, we may simplify the integration of (9), to express the reflected power density in compact analytical form.For example, we can take advantage of the fact that the integrand in (9) contributes to the integration essentially at a narrow region around k r = k sin θ r , where the k−content of the reflected beam is distributed.Therefore, for plane waves or beams of finite extent that typically have a narrow k−content, we may expand k z at the direction of propagation of the reflected wave, i.e. around k x = k r , k y = 0.This leads to the approximation of (8) (see Appendix A for details) For θ r = 0 • , this approximation reduces to the well-known parabolic form [18].We may further simplify the analytical calculation of ( 9) by taking advantage of the relatively large curvature of φ in (4), required to focus the beam at distances larger than the RIS size.In this case we may expand (4) around the center of the RIS to obtain (see Appendix A for details) For θ r = 0 • , i.e. 
beam focusing along the z-axis, (11) reduces to the simple parabolic form φ(x, y) = −k sin θ i x + kf 0 + k(x 2 + y 2 )/2f 0 [18].Using (1), ( 2) and ( 11) in ( 5) we calculate the k-spectrum of the reflected beam at the RIS plane, ÊRIS (k x , k y ).The reflected field at any observation point on the xz−plane is calculated by inserting this result and ( 10) into ( 9), and the resulting power density S r = |E r | 2 /2Z 0 is given by where Z 0 is the free-space wave impedance and z R = πw 2 RIS /λ is the Rayleigh length.Note that, for f 0 → ∞, (12) yields the received power for conventional beam forming [9], [18].The choice , where P t is the total power (Watts) of the incident beam.
Along the propagation direction of the focused beam, i.e. for x = r sin θ r , z = r cos θ r , the argument of the exponential term in (12) becomes zero, leading to a maximum power density expressed by the remaining prefactor (13). By differentiating (13) with respect to r, the slope for r → 0 acquires the positive value 4P t |R| 2 cos θ i /(π f 0 w 2 RIS ), while it vanishes for r → ∞. Therefore, there is always a maximum, which is expected at r = f 0 . To understand how beamfocusing depends on the involved parameters, next we examine the derived expressions with examples of realistic scenarios.
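Because the displayed equations (5)-(13) did not survive extraction, the following sketch shows one way to evaluate the angular spectrum recipe of this section numerically: form the reflected footprint on the RIS plane, Fourier transform it, multiply by the plane-wave propagator while discarding evanescent components, and transform back. The grid, the Gaussian footprint, the focusing phase and its sign convention are all our assumptions, intended only to illustrate the procedure; the on-axis intensity should peak near the intended focal distance.

```python
import numpy as np

lam = 0.002                         # ~150 GHz
k = 2 * np.pi / lam
N, L = 1024, 1.5                    # N x N samples over an L x L metre window
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")

w_ris, f0 = 0.2, 4.0                # footprint radius and intended focal distance
theta_i, theta_r = np.deg2rad(30), np.deg2rad(0)

# Assumed Gaussian footprint of the incident beam (elliptical under oblique incidence).
E_i = np.exp(-((X * np.cos(theta_i))**2 + Y**2) / w_ris**2
             - 1j * k * np.sin(theta_i) * X)

# Assumed focusing phase applied by the RIS (same form as the earlier sketch).
phi = (-k * np.sin(theta_i) * X
       + k * np.sqrt((f0 * np.cos(theta_r))**2
                     + (X - f0 * np.sin(theta_r))**2 + Y**2))
E_ris = E_i * np.exp(-1j * phi)

# Angular spectrum propagation: FFT, multiply by exp(j*kz*z), inverse FFT.
kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
kz2 = k**2 - KX**2 - KY**2
KZ = np.sqrt(np.maximum(kz2, 0.0))

spectrum = np.fft.fft2(E_ris)
spectrum[kz2 <= 0] = 0.0            # drop evanescent components

def field_at(z):
    return np.fft.ifft2(spectrum * np.exp(1j * KZ * z))

zs = np.linspace(1.0, 8.0, 29)
on_axis = [abs(field_at(z)[N // 2, N // 2])**2 for z in zs]
print("on-axis intensity peaks near z =", float(zs[int(np.argmax(on_axis))]), "m")
```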
IV. IMPACT OF RIS FOOTPRINT AND FOCAL DISTANCE ON RECEIVED POWER
Let us consider a D-band indoor scenario with operation frequency of 150 GHz.The AP is equipped with a directional antenna of tunable gain that transmits a beam of constant power P t = 1 W towards the RIS.Depending on the position and orientation of the AP relative to the RIS, footprints of different size and ellipticity are captured by the RIS, all with the same total power.The RIS is lossless (R = 1) and focuses the incident beam towards θ r = 0 • .In Fig. 2 we use (12) to examine beamfocusing as a function of the beam footprint, for intended focusing at f 0 = 4 m focal distance.For relatively small footprint (w RIS = 0.1 m) we observe inefficient focusing, i.e. the power density is maximized sooner than f 0 , with maximum that is relatively weak.However, with increasing footprint (w RIS = 0.2 m, 0.4 m), the maximum power density improves dramatically and the focal point f 0 is quickly approached.
In Fig. 3 we examine the same scenario as a function of the focal distance, for constant footprint w RIS = 0.2 m.The relatively large footprint enables efficient beamfocusing at the desired distance and the maximum power density reduces with increasing f 0 .
To understand the observations in these two examples, let us examine the power density at the focal distance.By setting r = f 0 in (13) we obtain The form of ( 14) implies that, for constant incident power P t , the power density at a certain observation distance r is expected to increase with larger footprints (and/or angles of incidence), while it is expected to decrease with larger focal distances (and/or angles of reflection), in accord with our observations.The two general examples of Fig. 2 and Fig. 3 aim to provide insight into the underlying mechanisms of beamfocusing, beyond a specific AP-RIS-UE topology.
Next, we examine explicitly the impact of the topology on beamfocusing.
A. IMPACT OF AP POSITION ON RECEIVED POWER
To understand how beamfocusing depends on the distance and orientation of the AP relative to the RIS, let us now consider a mobile AP with an antenna of 30 dB constant gain. In Fig. 4(a) the AP moves along the direction θ AP = 30 • and the RIS focuses the incident beam at f 0 = 4 m focal distance towards the direction θ r = 60 • . Due to oblique incidence (θ AP ≠ 0), the incident footprint has a shape that is elliptical (see (1)), and a size that increases with increasing distance d AP . For d AP = 1, 2, 3 m, the radius of the footprint becomes w RIS = 9, 18, 27 cm, respectively (see Appendix B for details). Therefore, although the incident power is constant (P t = 1 W), the maximum power density at the focal point increases with increasing AP-RIS distance. In Fig. 4(b) the AP moves along a circular trajectory at constant distance d AP = 2 m from the RIS, creating a footprint with w RIS = 18 cm. As the AP departs from the RIS normal, the incident footprint becomes more elliptical, in turn increasing in size, and improving the maximum power density at the focal area, as shown in the examples for θ AP = 0 • , 30 • , 60 • .
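The footprint radii quoted here (9, 18, 27 cm for d_AP = 1, 2, 3 m at 30 dB gain) are consistent with equating the on-axis power density of a Gaussian beam of waist w_RIS, 2P_t/(π w_RIS²), with the antenna's far-field value P_t G_t/(4π d_AP²), which gives roughly w_RIS ≈ d_AP √(8/G_t). This closed form is our reconstruction of the Appendix B result, not a quotation of it.

```python
import numpy as np

# Hypothetical reconstruction of the Appendix B relation:
# equate 2*Pt/(pi*w_RIS^2) with Pt*Gt/(4*pi*d^2)  ->  w_RIS ~ d * sqrt(8 / Gt).
G_t = 10**(30 / 10)            # 30 dB gain
for d_ap in (1.0, 2.0, 3.0):
    w_ris = d_ap * np.sqrt(8.0 / G_t)
    print(f"d_AP = {d_ap:.0f} m -> w_RIS ~ {100 * w_ris:.1f} cm")
```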
B. IMPACT OF FOCAL DISTANCE ON RECEIVED POWER
The power density at the UE can be calculated using (12) at the UE location, i.e. for x = d UE sin θ UE , z = d UE cos θ UE . To maximize the power at the UE we need to ensure that the focal point is located at the UE, i.e. that θ r = θ UE and f 0 = d UE . In this case, the focal point essentially follows the UE as it moves, guaranteeing that the maximum power is delivered. Solving in terms of d UE we can retrieve the RIS-UE distance at which the power density acquires a desired threshold, S th , as expressed by (15). The form of (15) implies that the trajectory of constant power forms a circle with radius d UE (0)/2, centered at (x = 0, z = d UE (0)/2). As an example, in Fig. 5(a) we seek the UE positions at which the power density is constant and equal to S th = 0.4 W/cm 2 , when the beam is focused at focal distance f 0 = 4 m. The UE positions that satisfy the imposed criterion are denoted by the white circle, which is expressed analytically by (15). The circle essentially marks the locus of constant power density delivered by several different focused beams. The examples marked with the letters A, B and C depict three such beams, and their power density evolution along each individual direction is shown in Fig. 5(b). Requiring a different threshold leads to a circle of different radius, and the collection of all such possibilities is shown in Fig. 5(c) as a function of S th .
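Although (15) itself was lost in extraction, the geometric statement above (a circle of radius d_UE(0)/2 centred at (0, d_UE(0)/2)) already fixes the locus: in polar form, d_UE(θ_UE) = d_UE(0)·cos θ_UE. A small sketch with an illustrative on-axis distance follows.

```python
import numpy as np

d_ue_0 = 4.0                       # illustrative on-axis distance (m) at which S = S_th
theta = np.deg2rad(np.arange(0, 90, 15))

# Locus of constant received power: a circle through the RIS of diameter d_UE(0),
# i.e. d_UE(theta) = d_UE(0) * cos(theta) in polar form.
for t, d in zip(theta, d_ue_0 * np.cos(theta)):
    print(f"theta_UE = {np.rad2deg(t):4.0f} deg -> d_UE = {d:.2f} m")
```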
V. BEAMFOCUSING EFFICIENCY ASSESSMENT A. POWER ENHANCEMENT OF BEAMFOCUSING VS. BEAMFORMING
From the examples demonstrated so far, it is evident that beamfocusing offers the advantage of high power concentration at confined regions in space.This is particularly practical for increasing the power at the receiver, without the need for higher gain at the transmitter, e.g. for energy-efficient communications and power transfer applications [11].To quantify the achievable enhancement in the received power we calculate the ratio of (13) over the received power in the absence of focusing (f 0 → ∞), accounting for conventional steering.The enhancement factor, e f ≡ P r /P r (f 0 → ∞), at r = f 0 yields In Fig. 6 we present ( 16), as a function of z R /f 0 and cos θ i / cos θ r , which serves as a universal plot, capturing any combination of all the involved parameters.In our examples, where z R /f 0 > 12 and cos θ i / cos θ r > 0.5, the minimum achievable enhancement is in the order of e f ∼ 10 2 .
B. FWHM OF FOCAL AREA
The ability to control the size of the focal spot is crucial for partitioning the radial distance along the focusing direction, e.g. for localization applications [11], [20], [21].Under the constraint of constant incident beam power, the size of the focal spot changes inversely with the beam maximum, which in turn depends on both the focusing direction and the focal distance, as captured by (14) and the examples studied so far.The extent of the focal spot can be expressed by its Full Width Half Maximum (FWHM) along the propagation direction, w FWHM , which for θ i = θ r = 0 • is given by (see Appendix C for derivation) The functional form of (17) dictates that w FWHM increases with f 0 and reduces with w RIS .As an example, in Fig. 7(a) we plot the trace of beam maximum (dashed black lines) and FWHM of focal spot (shaded areas) as a function of the steering angle θ r , for w RIS = 0.25 m and variable f 0 .Indeed, as f 0 increases, the FWHM increases as well.Note, however, that according to (17) we can achieve a focal spot with constant FWHM, regardless of the focal distance, if we increase w RIS with increasing f 0 .For example, as demonstrated in Fig. 7(b), using (17) with f 0 = 4 m and w RIS = 0.25 m, we find that upon changing f 0 to 8 m and 2 m, we can achieve a constant FWHM if we choose w RIS to be 0.5 m and 0.125 m, respectively.The constant ratio w RIS /f 0 in this example is not accidental; note that, in the limit z R ≫ f 0 , (17) takes the simple form indicating that a proportional change in f 0 and w RIS leads to the same FWHM.In our examples z R /f 0 > 12, i.e. we are well within the asymptotic limit of ( 17), as can be verified in Fig. 7(c).The simple form of ( 18) provides a straightforward way to partition the radial distance in segments of equal size, essentially forming zones of constant width.The formation of such zones and, in particular, the ability to control their extent, is crucial for tracking and localization applications, where a sensing-based communication scheme is able to detect the UE distance within controllable resolution, and transmit data with high quality of service.
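The asymptotic form of (17) was also lost, but its stated consequence, that scaling f_0 and w_RIS proportionally leaves the FWHM unchanged, can be checked against the (f_0, w_RIS) pairs quoted above. The snippet below also verifies the z_R/f_0 > 12 condition mentioned for the examples, assuming λ ≈ 2 mm at 150 GHz.

```python
import numpy as np

lam = 0.002  # ~150 GHz

# (f0, w_RIS) pairs quoted in the text as giving the same focal-spot FWHM.
for f0, w_ris in [(2.0, 0.125), (4.0, 0.25), (8.0, 0.5)]:
    z_r = np.pi * w_ris**2 / lam          # Rayleigh length of the footprint
    print(f"f0 = {f0:>3.0f} m, w_RIS = {w_ris:5.3f} m:"
          f"  w_RIS/f0 = {w_ris / f0:.4f},  z_R/f0 = {z_r / f0:.1f}")
```

The ratio w_RIS/f_0 is the same for all three pairs, and z_R/f_0 stays well above 12 in each case, consistent with the asymptotic regime discussed in the text.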
VI. MULTI-BEAM OPERATION
For multiple users that reside in the RIS near field, the RIS can split the incident beam into a multitude of focused beams, to simultaneously serve all users. In this case, each reflected beam is associated with an individual reflection coefficient |R| exp(−jφ n (x, y)), accounting for the n th user, with φ n (x, y) given by (4) (or its approximation, (11)). The angle θ r and the focal distance f 0 have distinct values for each n, θ r,n and f 0,n , respectively, thus accommodating reflected beams directed and focused towards each user. The footprint at the RIS is then written as the linear superposition (19), which is a generalization of (2) to N beams. The 1/√N prefactor is chosen to ensure that the incident power is preserved and equally distributed between the reflected beams, which are assumed to be spatially separated so that they interfere weakly (see Appendix D for details). Using (19) to integrate (9) leads to the analytical expression for multiple focused beams. Note that, because the total footprint is a linear superposition of N individual contributions, the total reflected field can be calculated in a straightforward manner, by solving for the n th E-field, E r,n , and expressing the total field as the sum E r,total = Σ n E r,n . The total power density, which is ∼ |E r,total | 2 , will eventually be cast in a rather long and unpractical analytical form; yet, we can take advantage of the fact that, for spatially separated beams that do not interfere significantly with each other, |E r,total | 2 ≈ Σ n |E r,n | 2 . In this case, we can apply the expressions derived in our work, which describe the dynamics of a single beam, to each of the simultaneously generated focused beams, individually.
As an example, in Fig. 8 we examine N = 3 simultaneously generated focused beams, and we compare the full numerical propagation of the beams using (9) with the analytical expression (12), applied to each of the three beams individually. The incident beam has total power P t = 1 W and footprint radius w RIS = 20 cm, and is split into three beams of equal power upon reflection. The beams are reflected towards the directions characterized by the angles θ r,1 = −3 • , θ r,2 = 0 • and θ r,3 = 6 • , and the focal lengths f 0,1 = 4.5 m, f 0,2 = 3.5 m and f 0,3 = 5.5 m, respectively. In Fig. 8(a) we present the numerical solution of (9), as a function of the propagation distance z, using (19). The dashed lines denote cross-sections at distance z = 3 m, 4 m and 5 m, which are shown in panels (b), (c) and (d), respectively. Together with the numerical cross-sections (green solid lines) is also shown the analytically calculated power density (black dashed lines), using (12) with the reflection coefficient of the n th beam, separately applied for n = 1, 2, 3. Note how, despite the beam interference that dominates close to the RIS, the analytical expression successfully reproduces the numerically calculated beam profile, especially at distances where the beams evolve into distinct, spatially separated beams. These observations enable us to apply our analytical results to any number of simultaneously generated focused
beams that are spatially separated, thus keeping the simplicity of our analytical expressions and the associated metrics, and generalizing the analysis and conclusions to multiple focused beams.Importantly, we can go beyond conventional beamfocusing, where an incident wave is focused into a single focal area, to design beams directed towards multiple directions with multiple focal points, for broadcast to selected multiple users.
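A compact way to build the multi-beam footprint of (19) numerically is sketched below: each user gets its own focusing phase, the corresponding reflection coefficients are summed with 1/√N weights, and the sum multiplies the incident footprint. The phase expression is the same assumed form as in the earlier single-beam sketch, and the three directions and focal distances are those of the Fig. 8 example.

```python
import numpy as np

def focusing_phase(X, Y, k, theta_i, theta_r, f0):
    # Same assumed form as the single-beam sketch above.
    return (-k * np.sin(theta_i) * X
            + k * np.sqrt((f0 * np.cos(theta_r))**2
                          + (X - f0 * np.sin(theta_r))**2 + Y**2))

lam, w_ris, theta_i = 0.002, 0.20, 0.0
k = 2 * np.pi / lam
N, L = 1024, 1.5
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")
E_i = np.exp(-((X * np.cos(theta_i))**2 + Y**2) / w_ris**2
             - 1j * k * np.sin(theta_i) * X)

# Three users, as in the Fig. 8 example: (reflection angle, focal distance) per beam.
users = [(np.deg2rad(-3), 4.5), (np.deg2rad(0), 3.5), (np.deg2rad(6), 5.5)]

# Equal-power superposition of per-user reflection coefficients (1/sqrt(N) weights).
gamma = sum(np.exp(-1j * focusing_phase(X, Y, k, theta_i, th, f0)) for th, f0 in users)
gamma /= np.sqrt(len(users))
E_ris = gamma * E_i

# Power check: the reflected footprint carries (approximately) the incident power.
ratio = np.sum(np.abs(E_ris)**2) / np.sum(np.abs(E_i)**2)
print(f"reflected/incident power on the RIS plane: {ratio:.3f}")
```

The resulting E_ris can be fed directly into the angular spectrum sketch given earlier to reproduce the three spatially separated focal spots.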
Our framework can be applied to general multi-user scenarios, in conjunction with a multiple access scheme. For example, in a Time Division Multiple Access (TDMA) scheme, the RIS focuses the incident beam towards different directions and distances at different time slots, to successively serve each user. Similarly, in a Space Division Multiple Access (SDMA) scheme, different areas of the RIS are devoted to simultaneously focusing multiple beams towards multiple users. In environments with severe multipath, where the instantaneous focused power may change due to fading, our framework can be directly extended to model beams with statistical properties (see e.g. [34]).
VII. CONCLUSION
As wireless communications are nowadays shifting to higher operation frequencies, taking advantage of the near-field offered by electrically large RISs opens up new opportunities for manipulating the wavefront of beams, to enrich communications with functionalities beyond conventional beamforming.In this work, we studied near-field engineering in RIS-aided links, in which the RIS transforms the incident beam into a focused beam, and we assessed the RIS beamfocusing capabilities analytically, using explicit electromagnetic modeling.To describe beamfocusing with a physically consistent model, the derivations were based on the angular spectrum representation approach, which captures the dynamics of free-space wave propagation in compliance with Maxwell's equations.With our model, we demonstrated the dependencies and trade-offs between crucial parameters, such as the incident beam's footprint on the RIS, the intended focal distance of the reflected beam, and the link topology.To assess the beamfocusing efficiency we provided metrics that are crucial for future applications, such as energy efficient communications, tracking and localization, and we demonstrated the theoretically expected performance with examples of typical D-band indoor scenarios.
APPENDIX A VALIDITY OF APPROXIMATIONS
To calculate (10) we expand (8) around the direction of propagation, which is characterized by k x = k r , k y = 0, where k r = k sin θ r .First, expansion around k y = 0 leads to where we have retained all terms up to 2 nd order.Next, expansion of (20) around k x = k r leads to where we have retained all terms up to 2 nd order.Substitution of k r = k sin θ r in (21) leads to the result of (10).
To calculate (11) we expand (4) around the center of the RIS, i.e. at x = 0, y = 0. Keeping all terms up to 2 nd order leads to the result of (11).
To verify the validity of the approximations ( 10) and ( 11) we have compared the analytically derived (12) with exhaustive full-wave numerical propagation examples under extreme conditions, e.g. at large angles of incidence and reflection.In this section we demonstrate the validity of the approximations ( 10) and ( 11) through comparison with their respective full form.In the following examples the AP is located in front of the RIS (θ AP = 0 • ), creating a footprint of w RIS = 0.15 m on the RIS.The RIS focuses the incident beam along three different directions characterized by θ r = 0 • , 30 • , 60 • .
In Fig. 9(a) we plot (10) for θ r = 0 • , which approximates the parabolic form of (8) in a relatively large region around k x = k y = 0, where the k-content of the reflected beam is located.This is illustrated in Fig. 9(b), where we show a cross-section of (10) along k y /k 0 = 0 (red dashed line) as well as the full form of k z for comparison (solid black line).The gray region marks the cross-section of the beam's k-content, verifying that within its extent, where integration takes place, ( 8) and (10) coincide.In the remaining panels we present the respective plots for θ r = 30 • (Fig. 9(c),(d)) and θ r = 60 • (Fig. 9(e),(f)).Note that the footprint used in these examples is relatively narrow and leads to a relatively wide kcontent.For larger footprint, the beam k-content is narrower, and the error in ( 10) is further suppressed.
In Fig. 10(a) we plot (11), the approximation of (4), which for θ r = 0 • has a parabolic form centered at x = y = 0. Using the full form of φ, in Fig. 10(b) we plot the relative phase error, which we define as (φ − φ approx.)/φ, where φ accounts for (4) and φ approx.for (11).The solid circle marks the FWHM of the incident beam.In the remaining panels we present the respective plots for θ r = 30 • (Fig. 10(c),(d)) and θ r = 60 • (Fig. 10(e),(f)).Note how the center of the parabolic phase moves along the x-axis with increasing θ r .In all cases the relative error is practically below 0.1%.
APPENDIX B DERIVATION OF FOOTPRINT RADIUS IN TERMS OF THE ANTENNA GAIN
The footprint of the AP beam on the RIS can be expressed using (1) with E 0 = √(4Z 0 P t cos θ i /(π w 2 RIS )), as in (22), together with the expression for the AP radiation pattern at distance d AP , where the RIS is located. The power density of (22) can alternatively be expressed in terms of G t , the AP gain, as in (24), where d is the radius of the sphere centered at the AP. For pencil beams d ∼ d AP and, upon simple inspection of (22) and (24), we reach the final result.
APPENDIX C DERIVATION OF FWHM OF FOCAL AREA
To determine the FWHM of the focal spot we need to find the distance at which the power density is maximized.For θ i = θ r = 0 • , differentiation of (13) with respect to the propagation distance r yields the focal distance at which the power density is maximized and becomes equal to Using (27) to solve (13) for S r = S r (r = r focal )/2 in terms of w RIS , yields two solutions w RIS± corresponding to the two edges of the focal spot along r, where the threshold criterion is fulfilled.The FWHM of the focal spot along r is then w FWHM = w RIS+ − w RIS− .
APPENDIX D RIS REFLECTION COEFFICIENT FOR MULTI-BEAM OPERATION
The total reflected field can be expressed as the linear superposition of the fields of the individual beams, E r,total = Σ n E r,n . At z = 0 the total field E r,total becomes simply the footprint E RIS , and the individual fields are expressed in terms of the incident field as the product of the reflection coefficient of the n th beam and E i , with a weight w n ∈ R accounting for the fraction with which the n th beam is reflected. Hence, the footprint takes the corresponding weighted form. For a lossless RIS, the power of the reflected beam must be equal to that of the incident. For beams that are spatially separated we may simplify |E r,total | 2 ≈ Σ n |E r,n | 2 , which leads to |E RIS | 2 ≈ Σ n w 2 n |R| 2 |E i | 2 . Note that Σ n w 2 n |R| 2 = Σ n w 2 n (|R| = 1 for a lossless RIS) and, hence, (30) leads to Σ n w 2 n = 1. For equally distributed power among the reflected beams, w 1 = w 2 = . . . = w N , which leads to w n = 1/√N .
FIGURE 1. System model of a RIS-aided beamfocusing link.(a) The incident field, impinging along the direction characterized by the angles θ i , ϕ i , is focused along the direction characterized by the angles θ r , ϕ r (b) Cross-section of the focused beam on the xz−plane, illustrating the AP and UE locations relative to the RIS.(c) Schematic illustration of the spherical wavefronts of the focused beam that are directed at angle θ r and converge at distance f 0 .The power at the UE is maximized when θ r = θ UE and f 0 = d UE .
FIGURE 2. Impact of incident beam's footprint on the RIS, w RIS , on beamfocusing.Cross-section of power density distribution on the xz−plane of a beam focused along the direction θ r = 0 • with f 0 = 4 m, and (a) w RIS = 0.1 m, (b) w RIS = 0.2 m, and (c) w RIS = 0.4 m.(d) Power density along the beam center for the cases shown in panels (a)-(c).
FIGURE 3. Impact of focal distance f 0 on beamfocusing.Cross-section of power density distribution on the xz−plane of a beam focused along the direction θ r = 0 • with w RIS = 0.2 m, and (a) f 0 = 2 m, (b) f 0 = 4 m, and (c) f 0 = 8 m.(d) Power density along the beam center for the cases shown in panels (a)-(c).
FIGURE 4. Impact of AP distance and orientation with respect to the RIS, on beamfocusing.Power density along the beam center, for AP locations characterized by (a) θ AP = 30 • and d AP = 1, 2, 3 m (b) d AP = 2 m and θ AP = 0 • , 30 • , 60 • .In all examples the AP beam is focused by the RIS towards the focal point characterized by θ r = 60 • and f 0 = 4 m.
FIGURE 5. Beamfocusing at the UE position.(a) Spatial distribution of power density for the three example beams A, B, C, focusing along θ r = 0 • , 30 • , 60 • , respectively.The white circle marks the locus of UE positions, where the received power is constant.(b) Cross-section along the propagation direction of beams A, B, C shown in (a).(c) Power density of beamfocusing at the UE position, as a function of the power density threshold S th .The contour lines show examples of S th , and represent the locus of UE positions, where the same received power can be achieved for each individual case.
FIGURE 6. Enhancement factor, e f , expressing the increase in the achievable power delivery to the UE due to beamfocusing, over the respective with beamforming.
FIGURE 7. Properties of focal spot.(a) Trace of beam maximum (dashed black lines) and FWHM of focal spot (shaded areas) as a function of the steering θ r for w RIS = 0.25 m and variable f 0 .(b) Same as in (a) with tunable w RIS , in order to achieve the same FWHM for the different f 0 's.(c) Plot of (17) and asymptotic limit for z R ≫ f 0 .
FIGURE 8. Multiple simultaneously focused beams, for broadcast to selected multiple users.(a) Power density of numerically propagated reflected field.The incident beam has footprint w RIS = 20 cm and carries total power P t = 1 W, which is split equally by the RIS into three beams, characterized by θ r = −3 • , 0 • , 6 • , and f 0 = 4.5 m, 3.5 m, 5.5 m, respectively.The white dashed lines mark cross-sections of the beams at (b) z = 3 m, (c) z = 4 m, and (d) z = 5 m, where comparison with the analytical model of (12) for single beam is also shown.
| 8,989.4 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Effects of obesity on the healing of bone fracture in mice
Background Obesity affects bone health to varying degrees, depending on the skeletal site (weight-bearing or non-weight-bearing) and compartment (cortical or trabecular), and is a risk factor for orthopedic disorders, including bone fractures. However, the effects and mechanisms of obesity on the healing of bone fractures are little understood. Methods The healing of tibial fractures in genetically obese mice was evaluated relative to normal mice at weekly intervals for 28 days using X-ray scans, hematoxylin and eosin (H&E) staining, and alcian blue (AB) staining. Plasma concentrations of relevant proteins were also compared via enzyme-linked immunosorbent assay (ELISA). These included calcitonin gene-related peptide (CGRP), fibroblast growth factor (FGF), transforming growth factor beta 1 (TGF-β1), and tumor necrosis factor-α (TNF-α). Results Bone fracture healing was delayed in the obese mice compared with the control group of normal mice, based on X-ray, H&E staining, and AB staining analyses. This was accompanied by significantly lower plasma CGRP, FGF, and TGF-β1 levels (by ELISA). In contrast, TNF-α was significantly higher in obese mice than in the controls. Conclusion Bone fracture healing was significantly slower in the obese mice relative to that of normal mice. The lower levels of CGRP, FGF, and TGF-β1, and the higher level of TNF-α, observed in obese mice may contribute to this observed delay in fracture healing.
Background
Obesity is a complex disorder in which excess body fat has accumulated to a body mass index ≥ 30 kg/m² [1], due to an energy imbalance between calories consumed and calories burned. According to the World Health Organization, more than 600 million people worldwide were obese in 2014 [2], with the prevalence having doubled between 1980 and 2014 [2]. In developed countries the prevalence of obesity is higher; for example, in the USA in 2007, 33% of men and 35% of women were obese.
Obesity is a significant contributor to many chronic disorders such as hypertension, dyslipidemia, type 2 diabetes mellitus, coronary heart disease, and certain cancers [3]. It has also been suggested that obesity may be beneficial to bone health, because of the well-established positive effect of mechanical loading (here, body weight) on bone formation. Conversely, however, fat accumulation due to obesity is detrimental to bone mass. Research has also indicated that obesity may affect bone metabolism through any of the following effects: increasing adipocyte differentiation and fat accumulation; decreasing osteoblast differentiation and bone formation; increasing circulating and tissue proinflammatory cytokines (promoting osteoclast activity and bone resorption); upregulating proinflammatory cytokine production; and interfering with intestinal calcium absorption, thereby decreasing calcium availability for bone formation [1, 3-12]. Less well understood are the effects and mechanisms of obesity on bone fracture healing.
The present study investigated the effects of obesity (alone) on the healing of bone fracture, using specific-pathogen-free (SPF) B6.Cg-Lep ob/J ob/ob male mice (obese) as a murine model. We also included SPF normal-weight healthy male mice as the control group for comparison. X-ray scans, hematoxylin and eosin (H&E) staining, and alcian blue (AB) staining were used to study the progress of bone healing at the following post-fracture timepoints: days 0, 7, 14, 21, and 28. The plasma concentrations of relevant proteins were also evaluated at the same timepoints. The relevant proteins included calcitonin gene-related peptide (CGRP), fibroblast growth factor (FGF), transforming growth factor beta 1 (TGF-β1), and tumor necrosis factor-α (TNF-α).
Our results suggest that bone fracture healing was significantly slower in the obese mice, relative to that of normal mice. The lower levels of CGRP, FGF, and TGF-β, and higher level of TNF-α, observed in obese mice may contribute to this observed delay in fracture healing.
Imposing tibia fractures
The Institutional Animal Care and Use Committee approved the use of mice in this study, which also complied with the guidance for animal use set forth by the National Institutes of Health. Twenty SPF male C57BL/ 6J mice (normal healthy control group; 6-8 weeks old) and 20 SPF male B6.Cg-Lep ob /J ob/ob mice (obese group; 6-8 weeks old) were purchased from the Jackson Laboratory (Bar Harbor, ME, USA). These mice were checked after arrival to ensure that they were not infected with any diseases. They were housed at 18-26°C and 40-70% humidity with freely accessible water and food in the animal center. The body weights of the mice in both groups were monitored throughout this study.
Both groups gained weight during the study, but the obese mice gained much more weight than the control mice (Table 1).
To create the tibial fractures, each mouse was completely sedated with 1% pentobarbital sodium and placed supine on a surgical table (Taizhou Xintai Medical Equipment Manufacturing, Jiangsu, China). The hair on the right caudal limb was shaved, and the limb was disinfected. A longitudinal incision (0.5 cm) was made below the right knee joint. The muscle and fascia were separated from the tibia, and the tibial shaft was cut at the caudal one-third using a bone saw. The tibial bone surfaces were immediately irrigated with sterilized 0.9% saline. A 1.0 mm stainless steel intramedullary rod (Jiangzhou Medical Devices, Jiangsu, China) was inserted to reconnect the broken tibial bone. The incisions were sutured layer by layer.
All mice were allowed free activity and had free access to food. A blood sample was collected from each mouse before the surgery (baseline). After the surgery, the mice in each group (obese and normal control) were randomly assigned to four subgroups (n = 5 per subgroup) to be examined at 7, 14, 21, and 28 days, respectively.
X-ray
X-ray radiographic analysis has been widely used to confirm the fracture pattern and the position of the fixation needle, and to qualitatively examine fracture healing [13]. At the pre-set post-operative timepoints (days 7, 14, 21, and 28), one subgroup (5 mice) from each group (i.e., obese and control) was selected and imaged using an LX-24HA X-ray unit (Konica Minolta, Japan) at 30 kV and 8 mA, before the mice were sampled for blood and then killed.
ELISA immunohistochemistry
At each postoperative timepoint (days 0, 7, 14, 21, and 28), blood samples were withdrawn from the designated subgroup (5 mice) selected for X-ray imaging from the obese and control mice and analyzed using ELISA [14, 15]. Briefly, the selected mice were anesthetized via intraperitoneal injection of 10% chloral hydrate. After an eyeball was removed, 0.2-0.6 mL of blood was collected from the eye socket of each mouse into a 1.5 mL centrifuge (EP) tube. Each whole-blood sample was centrifuged at 4000 rpm for 10 min to obtain the plasma. The resulting plasma samples were stored at −80 °C before ELISA. The plasma concentrations of CGRP, TGF-β1, FGF, and TNF-α were quantified using the corresponding ELISA kits, in accordance with the manufacturers' instructions.
H&E staining
At each postoperative timepoint (days 0, 7, 14, 21, and 28), after the X-ray imaging and blood collection (described above), the selected mice were killed. Some fractured tibial bone was harvested from each mouse for H&E staining. Briefly, two specimens (0.5 cm long, each) were cut from each freshly collected bone sample, one from each side of the fractured bone and starting from the fracture site. The fresh specimens were fixed in 10% neutral formalin solution for 48 h. The fixed specimens were soaked in 18% EDTA solution, decalcified using a microwave until the specimens were soft enough for a needle to penetrate, then washed under running water for 12 h. The resulting specimens were dehydrated, cleared, dipped in wax, and then processed with an embedding machine. The specimens were embedded using liquid paraffin in the presence of a base film and plastic-covered box. The embedded specimens were sliced using a microtome, to 5-μm slices. The slices were de-waxed, rehydrated, and stained using an H&E staining kit in accordance with the manufacturer's instructions. The stained slices were dehydrated, sealed using neutral balsam, and examined under an Olympus IX71 microscope (Tokyo, Japan).
AB staining
The protocol for specimen collection and preparation and staining with AB, and viewing slides, was identical to the protocol for H&E staining, except that AB was used for staining.
Statistical analysis
Each experiment was repeated ≥ 3 times. All numerical experimental data are presented as mean ± standard deviation. Statistical analyses were performed using one-way analysis of variance and the t test, with SPSS 19.0 software. Differences between the obese and control groups were considered statistically significant when the P value was < 0.05.
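As a minimal illustration of the group comparison described above (the actual analyses were performed in SPSS 19.0), a two-sample t test at a single timepoint could be sketched in Python as follows; the function name and data layout are assumptions, not part of the original study:

```python
from scipy import stats

def compare_groups(obese_values, control_values, alpha=0.05):
    """Two-sample t test between obese and control measurements at one
    timepoint (illustrative sketch; SPSS was used in the actual study)."""
    t, p = stats.ttest_ind(obese_values, control_values)
    return t, p, p < alpha
```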
Results
Delayed bone fracture healing in obese mice
X-ray images of the fracture sites of the obese and normal control mice were obtained at post-fracture days 7, 14, 21, and 28 (Fig. 1). No dislocation of the intramedullary rods was observed in either group at any timepoint. Both groups of mice had clear fracture lines at the fracture sites at postoperative day 7 (Fig. 1a, e).
At postoperative day 14, the control group showed a large number of blurred shadows, which indicated callus formation (Fig. 1b). The obese mice showed no obvious blurred shadows around the fracture sites, suggesting little or no formation of callus (Fig. 1f ). At postoperative day 21, the control group showed a large number of continuous blurred images (Fig. 1c), and on day 28, continuous cortical bone at the fracture sites (Fig. 1d). In the obese group, on the corresponding postoperative days (21 and 28) there was less indication of callus and there was no continuous cortical bone at the fracture sites (Fig. 1g, h).
Cell activities at bone fracture sites
Optical images of H&E stained bone specimens were obtained at each post-fracture timepoint for each group of mice ( Fig. 2). At postoperative day 7, the H&E stained bone specimens of all the mice in each group had a large number of undifferentiated mesenchymal cells at the fracture sites (Fig. 2b, g). In addition, the control mice had a large number of chondrocytes that were in the resting period or proliferative phase at the fracture sites (Fig. 2b), while the obese mice had relatively few chondrocytes at the fracture sites (Fig. 2g).
At postoperative day 14, the normal mice had a large number of chondrocytes and much collagen tissue at the fracture sites, and a small amount of new cancellous bone trabeculae was also observed (Fig. 2c). At the same timepoint, the obese mice had a large number of chondrocytes (mainly hypertrophic chondrocytes) and much collagen tissue at the fracture sites, and a small amount of new cancellous bone trabeculae (Fig. 2h).
At postoperative day 21, in the control mice, only a few chondrocytes and no collagen tissue were observed at the fracture sites, but trabecular bones were visible (Fig. 2d). In the obese mice, a few chondrocytes and collagen tissue were observed at the bone fracture sites, and trabecular bones could be seen (Fig. 2i).
At postoperative day 28, in the normal mice, the fracture ends were connected by bone scabs, which were filled with well-arranged trabecular bone and osteoblasts (Fig. 2e). In the obese mice, a large number of osteoblasts and bone cells in the bone mass were observed at the bone fracture ends, with a reduction in cartilage callus and increase in bone callus as compared with the obese mice on day 21 (Fig. 2j).
Optical images of AB stained bone specimens were obtained at each post-fracture timepoint for each group of mice (Fig. 3). The AB staining clearly revealed that new cartilage (blue in Fig. 3b-e) was gradually formed from post-fracture day 7 for the control group of mice, indicating that bone healing was developing well. Unlike the control group, the obese mice had no new cartilage on post-fracture day 7 (Fig. 3g). New cartilage gradually formed from post-fracture day 14 (Fig. 3h), but the progress was slow (Fig. 3i, j).
Plasma protein levels
For both groups of mice, the plasma concentrations of CGRP decreased over time (Fig. 4a). However, at every postoperative timepoint, the plasma concentrations of CGRP in the control group were significantly higher than those of the obese mice.
The plasma concentrations of FGF in the control mice significantly increased from day 0 to day 7, and then consistently decreased at each subsequent timepoint (Fig. 4b). For the obese mice, in contrast, plasma FGF levels decreased from day 0 to day 14, and then increased slightly without reaching the day 0 level. At each timepoint from day 7 to day 28, the plasma concentrations of FGF in the control group were significantly higher than those of the obese mice.
In the control group, the plasma concentrations of TGF-β increased slightly from day 0 to day 14, and then consistently decreased at each timepoint thereafter (Fig. 4c). In the obese mice, the plasma concentrations of TGF-β decreased significantly from day 0 to day 7, and remained relatively unchanged from day 14 to day 28. At day 28, the plasma levels of TGF-β of the normal control mice were higher than that of the obese mice.
In the control group, the plasma concentrations of TNF-α slightly increased from day 0 to day 7, and then consistently decreased thereafter (Fig. 4d). In the obese mice, plasma concentrations of TNF-α rose from day 0 to day 7, gradually decreased from day 7 to day 21, and then decreased at a greater rate from day 21 to day 28. At each timepoint, the plasma concentrations of TNF-α of the obese mice were higher than those of the control group.
Discussion
In this study, we created tibial bone fractures in genetically obese and normal healthy mice at their right hind limb, and compared the progress in healing of both groups of mice after fixing the fractures using stainless steel intramedullary rods. In general, the fracture fixation technique used in this study results in endochondral-based bone healing [16], which consists of a series of molecular and cellular processes that are temporospatially coordinated in four stages: initial inflammatory response; soft (cartilaginous) callus formation; hard (woven bone) callus formation; and initial bony union and bone remodeling [17,18]. Based on reports in the literature [16], we selected four timepoints to study the progress of fracture healing in the mice, specifically at 7, 14, 21, and 28 days after fracture. We set the last timepoint at 28 days, because it has been reported that at 28 to 35 days, osteoclasts populate the tissue and remodel the callus at the fracture sites of rats, converting it to a lamellar bone structure [16]. This timepoint in rats roughly corresponds to 6 to 7 weeks in humans [16]. We observed that the healing of the fractures in the obese mice was significantly slower compared with the normal mice. This was evidenced by the following observations via X-ray scans.
At 14 days post-fracture, the normal control mice had a large number of blurred shadows, indicating a large amount of callus formation (Fig. 1b), while the obese group showed no obvious blurred shadows around the fracture sites, indicating less callus formation (Fig. 1f). At 21 days, the control mice had a large number of continuous blurred areas on the images (Fig. 1c), and at 28 days, continuous cortical bone at the fracture site (Fig. 1d). However, at both 21 and 28 days, the obese mice showed a smaller volume of callus and no continuous cortical bone at the fracture sites (Fig. 1g, h). This delayed healing of fractures in obese mice is consistent with previous reports [19].
The results from the H&E staining experiments further confirmed the slow or delayed healing of bone fractures in the obese mice. The formation of stabilizing callus is the key step for fracture healing, in which cartilage is formed, then resorbed, and finally replaced with new bone [20]. In the present study, at 21 days after fracture, the control mice already showed visible trabecular bone at the fracture site (Fig. 2d), while for obese mice, visible trabecular bone was absent (Fig. 2i). At post-fracture day 28, the fracture ends in the control mice were connected by bone scabs which were filled with well-arranged trabecular bone and osteoblasts (Fig. 2e). For obese mice, there was less cartilage callus, and bone callus had increased at the fracture sites (Fig. 2g). These observations are in accord with reports from other research groups [21].
It has been reported that, for bone healing, it is essential that mesenchymal cells and chondro-osteoprogenitor populations are recruited to the fracture site [17,22,23]. In the present study, during the initial 7 days after fracture, the normal mice apparently had more chondrocytes and undifferentiated mesenchymal cells than did the obese mice. At 28 days, osteoblasts had filled the bone scabs (Fig. 2e) in this group.
The slow or delayed healing of bone fractures in the obese mice was also confirmed by the AB staining, which showed that the time to form new cartilage in obese mice was 7 days later than for normal mice, and new cartilage formation progressed much more slowly than for normal mice.
The recruitment of mesenchymal cells and their subsequent differentiation into osteoblasts are critically important for bone healing. However, these two steps require growth factors for angiogenesis, neovascularization, and the promotion of new bone formation. In the present study, we therefore quantified the concentrations of CGRP, FGF, and TGF-β1 in the mouse blood. These proteins are well recognized for their roles in bone healing. At each timepoint post-fracture, the normal mice had a higher plasma concentration of CGRP than did the obese mice. High levels of CGRP released into serum from brain tissue after traumatic brain injury have been shown to enhance fracture healing [24]. Another recent study showed that local neuronal production of CGRP induced by magnesium implants improved the healing of bone fractures in rats [25]. Therefore, our observations of both low plasma CGRP concentrations and delayed fracture healing in obese mice are consistent with these earlier reports.
It has been reported that FGF-2 released in a chemically controlled manner from tissue-engineering constructs significantly increased bone formation in a mouse critical-sized calvarial defect model [26]. In addition, overexpression of FGF-2 in transgenic mice also accelerated bone formation via faster progression through the stages of cartilage formation, bone union, and callus remodeling, with higher numbers of osteoblasts around the fracture area [27].
These previous reports suggest that the lower plasma FGF concentration in the obese mice of the present study, observed at all timepoints, likely contributed to the slower fracture healing in the obese mice compared with the normal group. Indeed, the plasma FGF concentration in the control group increased from baseline after the surgery up to day 7. This short-term increase in plasma FGF concentration may have benefited fracture healing in the normal mice.
For TGF-β1, the changes over time in its plasma concentration in the control group were similar to that reported for normal healing in humans, i.e., initially increasing and then decreasing thereafter [28]. However, in the obese group of the present study, plasma TGF-β1 levels decreased during the initial period, with a subsequent slight increase at 14 days, and then continuously decreased. The plasma TGF-β1 level in the obese mice never reached the corresponding level in the normal mice at any timepoint-7, 14, 21, or 28 days post-surgery. An earlier study reported that fracture healing in adult male rats could be accelerated by increased expression of TGF-β1 in both plasma and at the fracture site, when due to traumatic brain injury [29]. Another study revealed that the administration of naproxen sodium into bone fracture model rats decreased their TGF-β1 serum levels and resulted in a slow fracture healing for these rats, while the use of granulocyte colony stimulating factor (G-CSF) increased the TGF-β1 serum levels and led to a better fracture healing [30]. In addition, TGF-β1 released from TGF-β1-loaded microgranules promoted bone regeneration in rabbit calvarial defects after 4 weeks [31]. Therefore, it is reasonable to believe that, in the present study, the lower plasma concentration of TGF-β1 in the obese mice observed at all timepoints (7, 14, 21, and 28 days post-surgery) was a contributing factor to the slower fracture healing in the obese mice compared with the normal group.
Regarding the mechanism of slower healing in the obese mice of the present study, the lower plasma concentrations of TGF-β1 and FGF in the obese group were associated with slower formation of blood vessels at the fracture sites. This in turn contributed to less recruitment of mesenchymal cells and chondrocytes, also observed in these mice. Moreover, lower plasma concentrations of CGRP, FGF, and TGF-β1 apparently inhibited further the proliferation of mesenchymal cells and differentiation of mesenchymal cells into bone-forming cells such as osteoblasts, as few or no osteoblasts were observed in the obese group at 28 days. These results are consistent with earlier reports concerning the functions of CGRP [25,32], FGF [27,33], and TGF-β1 [28,34,35].
In this study, the obese group showed higher levels of plasma TNF-α compared with the normal control mice at all timepoints. This higher level of TNF-α in the obese group is consistent with earlier reports that TNF-α is highly expressed in obese children [36][37][38][39], and increased levels of TNF-α were detected in the serum of obese mice induced by a high-fat diet [40]. In addition, bone fractures immediately initiate inflammatory responses, thus further stimulating the secretion of various inflammatory factors, including TNF-α. In the present study, plasma TNF-α levels increased in both groups during the initial 7 days post-fracture, but the obese mice maintained their higher plasma TNF-α levels longer than did the normal control mice. This longer duration may have promoted the formation and differentiation of osteoclasts from mesenchymal cells [41,42]. This might further have contributed to the delayed bone healing in the obese group. Thus, the delayed bone healing in the obese group is in accordance with the observation from other research groups that a TNF-α blockade improved tendon-bone healing in rats at early timepoints [43].
This study is limited by its relatively small subgroup size and few observed timepoints.
Conclusions
Our results suggest that bone fracture healing was significantly slower in the obese mice relative to that of the normal mice. The lower levels of CGRP, FGF, and TGF-β, and higher levels of TNF-α, in the obese mice may have contributed to this delay in fracture healing.
Abbreviations
AB: Alcian blue; CGRP: Calcitonin gene-related peptide; ELISA: Enzyme-linked immunosorbent assay; FGF: Fibroblast growth factor; H&E: Hematoxylin and eosin stain; TGF-β1: Transforming growth factor beta 1; TNF-α: Tumor necrosis factor-α
Availability of data and materials
All data generated or analyzed during this study are included within the article.
Authors' contributions
XDQ designed the study. GF collected and analyzed the X-ray scan data. TRL advised on histological staining and analysis. JCZ contributed to sample collection and the ELISA assays. GF drafted and wrote the manuscript. GF, TRL, JCZ, and XDQ revised the manuscript critically for intellectual content. All authors gave intellectual input to the study and approved the final version of the manuscript.
Ethics approval
The study was approved by the Ethics Committee of First Affiliated Hospital, Nanjing Medical University. All procedures involving animals were performed in accordance with the ethical standards of First Affiliated Hospital, Nanjing Medical University. | 5,388.6 | 2018-06-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
Asymmetries in the ENSO phase space
El Niño Southern Oscillation (ENSO) dynamics are best described by the recharge oscillator model, in which the eastern tropical Pacific sea surface temperature (T) and the subsurface heat content (thermocline depth; h) have an out-of-phase relationship. This defines a 2-dimensional phase space diagram between T and h. In an idealized, stochastically forced damped oscillator, the mean phase space diagram should be a perfectly symmetrical circle with a clockwise propagation over time. However, the observed phase space shows strong asymmetries. In this study we illustrate how the ENSO phase space can be used to discuss the phase-dependency of ENSO dynamics. A normalized spherical coordinate system allows the definition of phase-dependent ENSO growth rates and phase transition speeds. Based on these we discuss the implications of the observed asymmetries for the dynamics and predictability of ENSO, with a particular focus on the variations in the growth rate and coupling of ENSO along the oscillation cycle. Using linear and non-linear recharge oscillator models we show how dynamics and noise drive ENSO at different phases of the ENSO cycle. The results illustrate that the ENSO cycle, with its positive (clockwise) phase transition, is present in all phases but varies strongly in strength. Much of this variation results from presenting the ENSO phase space with estimates of h based on the isothermal depth, which is not ideal as it is not exactly out of phase with T. Future work should address how h can be estimated better, including aspects such as vertical temperature gradients and the meridional or zonal averaging range. We further illustrate that a non-linear growth rate of T can explain most of the observed non-linear phase space characteristics.
Introduction
The most widely used theoretical, conceptual model of the El Nino Southern Oscillation (ENSO) mode is the linear recharge oscillator (ReOsc) model [Burgers et al., 2005; Jin, 1997; Timmermann et al., 2018]. In this model ENSO is described by a cycle between the subsurface upper-ocean heat content and the sea surface temperature (SST); see the sketch in Fig. 1. Here, increased (recharged) upper-ocean heat content, which is measured by a deepening of the thermocline depth (h), leads to the development of El Nino SST anomalies in the eastern equatorial Pacific (T). Prior work has provided an analytical discussion of how non-linear dynamics affect the probabilities in the ENSO phase space. Takahashi et al.
[2019] used the ENSO phase space to illustrate differences between a linear and non-linear model of ENSO.
Non-linear aspects of ENSO have been documented in the past in many different studies; including non-linearities in the amplitude, time evolution and patterns [Burgers and Stephenson 1999;Dewitte et al. 2013;Su et al. 2009;Ohba et al. 2010;Okumura and Deser 2010;Takahashi et al. 2011;Dommenget et al. 2013]. The ENSO phase space should be able to reflect the non-linearities in the amplitude and time evolution of ENSO and could potentially help to better understand the underlying dynamics of these two characteristics.
Several studies have tried to model ENSO non-linearities with the help of a non-linear variation of the ReOsc or other models [e.g., Choi et al. 2013;Levine et al. 2016;Frauen and Dommenget 2010]. They have been able to explain a number of different non-linear aspects of ENSO, but it is unclear how these approaches capture the asymmetries observed in the ENSO phase space.
Previous studies suggest that the predictability of ENSO is likely to be phase-dependent [e.g., Dommenget et al. 2013; Timmermann et al. 2018]. Dommenget et al. [2013] found that strong La Nina events are likely to be more predictable than strong El Nino events at lead times of 7-11 months, due to the non-linear wind-SST relation. In contrast, Timmermann et al. [2018] argue that the transition from a recharged state to an El Nino state is more predictable and that La Nina conditions are generally less predictable.
The aim of this study is to take a closer look at the ENSO phase space and present a detailed analysis of its observed characteristics. We aim to combine this analysis with a comparison of the observed phase space and the observed ReOsc model fits. By doing so, we would like to illustrate the extent to which a linear and a non-linear ReOsc model can describe and explain the observed phase space characteristics. The ultimate aim of this study is to introduce the ENSO phase space characteristics as an effective way to present and analyse key ENSO dynamics.

Fig. 1 Sketch of the ENSO recharge oscillator model dynamics. The ENSO cycle is clockwise with the heat content (h) in the vertical direction and sea surface temperature anomalies (T) in the horizontal direction. The three blue arrows in the horizontal plane mark wind anomalies resulting from T.
The study is organised as follows: The following section introduces the data set used, the ReOsc model equations and the methods of estimating important parameters and statistics. Section 3 presents the results of the observed ENSO phase space, which is followed by a section on the linear ReOsc model and a section on a non-linear ReOsc model. In the final analysis section, we focus on the predictability of ENSO in the context of the ENSO phase space. Then the study is concluded with a summary and discussion.
Data, models and methods
Observed SST data are taken from the HadISST 1.1 data set for the period 1980 to 2019 [Rayner et al., 2003]. The monthly mean SST anomaly index used for T is the NINO3 region average (150°W-90°W, 5°S-5°N). The thermocline depth anomaly, h, is estimated on the basis of the 20 °C isotherm depth (Z20) averaged over the equatorial Pacific (130°E-80°W, 5°S-5°N). Given the limitations of subsurface temperature observations, we use a combination of datasets to estimate monthly mean h: the 1980-2019 20 °C isotherm depths from the temperature analyses of the Bureau National Operations Centre (BNOC) at the Australian Bureau of Meteorology [Meinen and McPhaden 2000], the SODA3 ocean reanalysis 1980-2017 [Carton and Giese 2008], and the CHOR_AS and CHOR_RL ocean reanalyses 1980-2010 [Yang et al., 2017]. The four datasets are combined into one long time series of T and h, thus repeating each year four times to better capture the variability. We also considered the GECCO2 reanalysis data [Köhl, 2015], but excluded it from this analysis because it produced significantly different statistics compared to the other four datasets.
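The construction of the index regions described above can be illustrated with a short sketch; the file name, variable name, and coordinate conventions (0-360° longitudes, ascending latitudes) are assumptions for illustration, not the processing actually used in this study:

```python
import xarray as xr

def nino3_anomaly(sst_file="sst_monthly.nc"):
    """Compute a monthly NINO3 SST anomaly index (150W-90W, 5S-5N) from a
    gridded SST file (illustrative sketch; names and conventions assumed)."""
    sst = xr.open_dataset(sst_file)["sst"]
    # 150W-90W corresponds to 210E-270E on a 0-360 longitude grid.
    nino3 = sst.sel(lat=slice(-5, 5), lon=slice(210, 270)).mean(["lat", "lon"])
    # Remove the monthly climatology to obtain anomalies.
    clim = nino3.groupby("time.month").mean("time")
    return nino3.groupby("time.month") - clim
```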
The ReOsc model is based on two tendency equations [Burgers et al., 2005]:

dT/dt = a_11 T + a_12 h + ζ_T   (1)
dh/dt = a_21 T + a_22 h + ζ_h   (2)

with the growth rates of T (a_11) and h (a_22), the coupling parameters (a_12 and a_21) and the noise forcing terms (ζ_T and ζ_h). The parameters of Eqs. [1-2] are estimated from the combined observations by multivariate linear regression of the monthly mean tendencies of T and h against the monthly mean T and h, respectively [Burgers et al. 2005; Jansen et al. 2009; Vijayeta and Dommenget 2018]. The residual of the linear regression fit can be interpreted as the random noise forcing, with the standard deviation (stdv) of the residuals being the stdv of the noise forcing for the T and h equations (ζ_T and ζ_h). The values are shown in Table 1.
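A minimal sketch of this parameter estimation is given below; the forward-difference tendency estimate and the function interface are illustrative assumptions rather than the authors' actual code:

```python
import numpy as np

def fit_reosc_parameters(T, h):
    """Fit the linear recharge-oscillator parameters of Eqs. (1)-(2) by
    multivariate linear regression of the monthly tendencies of T and h
    onto the monthly mean T and h (illustrative sketch)."""
    T = np.asarray(T, dtype=float)
    h = np.asarray(h, dtype=float)
    # Monthly tendencies (simple forward differences; units: per month).
    dT = T[1:] - T[:-1]
    dh = h[1:] - h[:-1]
    # Predictors: the monthly mean state at the start of each step.
    X = np.column_stack([T[:-1], h[:-1]])
    # Least-squares fit for each tendency equation.
    (a11, a12), *_ = np.linalg.lstsq(X, dT, rcond=None)
    (a21, a22), *_ = np.linalg.lstsq(X, dh, rcond=None)
    # Residuals are interpreted as the stochastic noise forcing.
    zeta_T = dT - X @ np.array([a11, a12])
    zeta_h = dh - X @ np.array([a21, a22])
    return dict(a11=a11, a12=a12, a21=a21, a22=a22,
                noise_T=zeta_T.std(), noise_h=zeta_h.std())
```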
The ENSO phase space is presented by plotting T on the x-axis versus h on the y-axis, see Fig. 2. This Cartesian coordinate system can be transformed into a spherical (polar) coordinate system with the phase angle φ = 0° along the h axis (y-direction) and φ = 90° along the T axis (x-direction). φ follows a clockwise rotation (Fig. 2). For this presentation it is useful to normalise T and h by their respective standard deviations (Table 2) to obtain a non-dimensional presentation of the variables (T_n and h_n). This normalization can also be applied to the fitted ReOsc model parameters (Table 2).
In this normalized presentation we can define an ENSO system anomaly, S, as a function of the two components T_n and h_n, with magnitude |S| = sqrt(T_n^2 + h_n^2). The magnitude of S is constant for a constant radius and is not a function of the phase φ. Thus, the ENSO system is now described by the magnitude of S and φ. The tendencies of the ENSO system, as a function of the ENSO phase, are best described by their radial and tangential components. The radial component describes the tendency to move away from the origin (positive values) or towards it (negative values). The tangential component describes the tendency of the system to circle around the origin, with positive values indicating clockwise motion and negative values indicating anti-clockwise motion.
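The coordinate transformation and the radial/tangential decomposition described above can be sketched as follows; this is an illustrative implementation under the stated sign conventions, not the authors' code:

```python
import numpy as np

def to_phase_space(T, h):
    """Normalize T and h and express the ENSO state in the phase space used
    here: phi = 0 deg along +h_n, 90 deg along +T_n, increasing clockwise."""
    T_n = T / T.std()
    h_n = h / h.std()
    S = np.hypot(T_n, h_n)                           # magnitude of the system anomaly
    phi = np.degrees(np.arctan2(T_n, h_n)) % 360.0   # clockwise phase angle
    return T_n, h_n, S, phi

def radial_tangential(T_n, h_n, dT_n, dh_n):
    """Decompose tendencies into a radial part (growth/decay) and a
    tangential part (positive values = clockwise phase propagation)."""
    S = np.hypot(T_n, h_n)
    radial = (T_n * dT_n + h_n * dh_n) / S
    tangential = (h_n * dT_n - T_n * dh_n) / S       # clockwise: +h_n towards +T_n
    return radial, tangential
```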
The analysis of observed or simulated statistics is based on monthly mean values of T and h, from which the monthly tendencies of T and h are estimated. It should be noted that much of the analysis could potentially also be done by analytically analysing Eqs. 1 and 2. However, this is not done here, in order to provide a basis for applying the same kind of analysis to any simulated or observed data.

Observed Phase Space

Figure 2 shows that the observed monthly mean ENSO phase space values are mostly a chaotic clustering around the origin, but for larger values the transition from one month to the next appears to circle around the origin, indicating a transition in the phase space. To better illustrate how the system develops in this phase space, we compute the mean tendencies of the ENSO anomalies at different sections of the phase space, see vectors in Fig. 2.

The mean tendencies of the ENSO anomalies highlight a clear clockwise rotation in the ENSO system, as expected from the ReOsc model. This clockwise rotation is present for all phase angles, or more generally, in all four quarters of the diagram. Thus, positive heat content anomalies (h_n) lead to positive SST anomalies (T_n), which subsequently lead to negative h_n, which lead to negative T_n, and then back to positive h_n to complete the cycle. Therefore, the observed ENSO anomalies and their mean tendencies do fit the ReOsc model idea. However, there are some clear asymmetries in the observed ENSO phase space diagram that are not expected from a linear ReOsc model. First of all, the ENSO system scatters much more towards positive T_n values than towards negative ones, and more towards negative h_n values than towards positive ones. Both asymmetries are expected from the well-known positive skewness in T_n and negative skewness in h_n [Trenberth 1997; Burgers and Stephenson 1999; Su et al. 2009].
The mean tendencies, as a function of the phase, are best described by the radial and tangential components, see Fig. 3b. For a stationary system, as ENSO is, the mean radial part over all phases must be zero, as the system is in average around the origin. The radial component is related to the growth rate of the system as it describes the tendency of the system to grow or decay.
The observed mean radial tendency is positive around 0° and negative around 100° and 220°. This can also be seen from the mean tendency vectors in Fig. 2. The smaller values indicate that the transition in the ENSO cycle is slowed down on average.
As mentioned above, the radial component of the tendencies is related to the growth rate of S. However, unlike in the ReOsc model (Eqs. [1-2]), where a_11 and a_22 are constant growth rates of T and h that do not depend on T and h, the mean radial component as presented in Fig. 3b is a function of the mean S for each phase (e.g., the vectors in Fig. 2 depend on the mean S; they increase with distance to the origin).
Analogous to the ReOsc model growth rates, we can estimate a growth rate of S as a function of the phase by dividing the radial component (Fig. 3b) by the mean S (red line in Fig. 2); see Fig. 5a. The structure of the growth rate of S is very similar to that of the radial component of the tendencies, but it can now be interpreted in the same way as the growth rates in the ReOsc model. We should note that this statistical definition of the growth rate is by definition zero when averaged over all phases, and that it represents the combined effect of the dynamics (T and h) and the noise forcing.
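A sketch of this phase-binned estimate is given below; the 10° bin width is an assumption made for illustration:

```python
import numpy as np

def phase_binned_growth_rate(S, phi, radial, bins=36):
    """Estimate the growth rate of S as a function of the phase by dividing
    the mean radial tendency in each phase bin by the mean S of that bin
    (illustrative sketch)."""
    edges = np.linspace(0.0, 360.0, bins + 1)
    idx = np.digitize(phi, edges) - 1
    growth = np.full(bins, np.nan)
    for b in range(bins):
        sel = idx == b
        if sel.any():
            growth[b] = radial[sel].mean() / S[sel].mean()
    return 0.5 * (edges[:-1] + edges[1:]), growth
```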
Similarly, the tangential component of the tendencies is also a function of the mean S for each phase. We can define a phase transition (angular speed) by dividing the tangential component (Fig. 3b) by the mean S (red line in Fig. 2), see Fig. 5b. The phase transition is fastest between the El Nino state and the discharged state (~140°), and slowest between the discharged state and the La Nina state (~220°). The differences in the angular speeds are consistent with the differences in the likelihoods of being at different phases (Fig. 3a). ENSO phases with large angular speeds are less likely to occur because ENSO transitions through these phases relatively fast. In turn, phases with small angular speeds are more likely to occur because the ENSO system spends more time in these phases.

In the phase space we can note larger scatter from about 30° to 240° and smaller scatter clockwise from about 240° to 30°. We can quantify these phase-dependent probability distributions by estimating the probability statistics as a function of φ. The mean of S as a function of φ is shown in Fig. 2 (red line). The mean is largest around 60° to 90° and smallest around 270° to 360°. Figure 3a shows the 2-dimensional probability density function. It shows the highest probabilities near the origin in quarter Q4 and larger probabilities for large S values in quarters Q1 to Q3, consistent with the scatter plot in Fig. 2. We can further estimate the probabilities of S values being at different phase angles φ (black line in Fig. 3a). This shows higher probabilities of being in Q1 or Q3, and lower probabilities of being in Q2 or Q4. The probabilities are somewhat similar for Q2 and Q4, but show somewhat enhanced likelihoods of being in Q1 compared to Q3. This shows that ENSO states between a recharge and an El Nino state (40°) have the highest probabilities. The lowest probabilities are for the state at 90° and the states before and after it. The probability distribution shifts a bit towards quarter Q2 if we only consider ENSO states with |S| > 1.0 (red line in Fig. 3a). It illustrates that large ENSO anomalies occur primarily in phases from about 60° to 240° and less so from about 270° to 360°. Thus, large ENSO anomalies do not occur from the La Nina to the recharge state phase.

The scatter in Fig. 2, or the probability distribution between T_n and h_n in Fig. 3, shows an enhanced likelihood along the diagonal from lower left (225°) to upper right (45°), which is reminiscent of a positive correlation between T_n and h_n. The observed correlation between T_n and h_n at a time lag of zero is 0.4, see Fig. 4a. Thus, the idealized concept of the ReOsc model, which has an out-of-phase relation between T_n and h_n with a lag-zero correlation of zero, is not quite what is observed.
The time to complete a full cycle (the mean period of ENSO) can be estimated by integrating the time spent per unit phase angle (the inverse of the angular speed) over all angles. This gives a period for one cycle of about 42 months (3.5 yrs), which is consistent with the observed peak period in the T power spectrum (Fig. 4c). The phase transition speed is, however, strongly variable within the cycle, with the slowest transition of about 0.1 rad per month, corresponding to a full cycle in about 5 yrs, and the fastest transition of about 0.3 rad per month, corresponding to a full cycle in about 1.7 yrs. These variations in the phase transition are likely to contribute to the broadening of the power spectrum of ENSO.
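The period estimate amounts to integrating the time spent per unit phase angle over the full cycle; a minimal sketch is given below (for a constant angular speed of 0.15 rad per month it returns roughly 42 months, consistent with the value quoted above):

```python
import numpy as np

def mean_period_from_angular_speed(phi_centers_deg, omega_rad_per_month):
    """Integrate dt = dphi / omega over the full cycle to obtain the mean
    period in months (illustrative sketch; assumes omega > 0 in every bin)."""
    dphi = np.deg2rad(np.diff(phi_centers_deg, append=phi_centers_deg[0] + 360.0))
    return float(np.sum(dphi / omega_rad_per_month))
```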
The observed data record for the above analysis is only about 30 to 40 yrs long, which raises the question of to what extent the observed characteristics are statistically significant. To address this question, we can use the linear ReOsc model, which is discussed next.
Linear recharge Oscillator
The ReOsc model can help us understand what underlying dynamics are causing the asymmetries in the ENSO phase space. We start the discussion with an idealised linear ReOsc model to illustrate which observed asymmetries are significant. We then discuss the linear ReOsc model with parameters fitted to the observed data to illustrate what kind of structures in the phase space can be explained by the observed linear dynamics. An idealised damped oscillator can be represented by the ReOsc model with all model parameters being symmetrical for T and h. To illustrate the characteristics of an idealised damped oscillator, we create a ReOsc model that is identical for both tendency equations. That is, the growth rates, coupling and strength of the noise forcing have the same magnitudes for both T and h, chosen on the basis of the normalised parameters in Table 2. We refer to this model as the idealised linear ReOsc model. The resulting phase space statistics of T_n and h_n are shown in Fig. 6a-c. Here we can note that all statistics are phase independent. The growth rate is zero at all phases, indicating that the mean tendencies in all phases only have a tangential part. Consequently, the system in statistical average moves around the origin in a perfect circle. In contrast, a ReOsc model without coupling (a_12 = a_21 = 0), which reduces to two unrelated red noise processes, has no mean tendencies at all and therefore shows no mean propagation in the phase space cycle (Fig. 6d-f).
We now focus on the linear ReOsc model with parameters as they result from a linear regression to the observed monthly mean T and h data, see Table 2. Using these parameters, we integrate the linear ReOsc model (Eqs. [1-2]) for 10^4 yrs and analyse the resulting normalized anomalies of monthly mean T_n and h_n. We refer to this model as the observed linear ReOsc model. Figure 8 shows statistics of the ENSO phase space for the observed linear ReOsc model. There are several interesting aspects to point out in these statistics.
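A minimal stochastic (Euler-Maruyama) integration of the linear ReOsc model, of the kind used for such long integrations and for repeated 30-yr segments in the significance test, could be sketched as follows; the one-month time step and Gaussian white-noise forcing are assumptions:

```python
import numpy as np

def integrate_reosc(a11, a12, a21, a22, noise_T, noise_h,
                    n_months=120_000, dt=1.0, seed=0):
    """Stochastic integration of the linear ReOsc model, Eqs. (1)-(2),
    with monthly white-noise forcing (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    T = np.zeros(n_months)
    h = np.zeros(n_months)
    for t in range(n_months - 1):
        zT = noise_T * rng.standard_normal()
        zh = noise_h * rng.standard_normal()
        T[t + 1] = T[t] + dt * (a11 * T[t] + a12 * h[t] + zT)
        h[t + 1] = h[t] + dt * (a21 * T[t] + a22 * h[t] + zh)
    return T, h
```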
We can use the idealised linear ReOsc model to evaluate the statistical significance of the phase variations we noted for the observed ENSO phase space. For this, we integrate a 30 yr period with the idealized linear ReOsc model and repeat this 10^4 times to estimate distributions of important statistics for a 30 year observational period. In Fig. 7a, we show the distribution of the mean radial component of the tendencies around φ = 0° ± 45° from the 10^4 idealized linear ReOsc model integrations in comparison to the observed value. The observed value is clearly outside the modelled distribution, indicating that such large positive radial components of the tendencies cannot happen by chance in an idealized linear ReOsc model. Similarly, the distribution of asymmetries in the tangential part for quarter Q2 minus Q4 values (Fig. 7b) is well separated from the observed value, indicating that the observed variations in the tangential part of the tendencies are a clear signal.
The phase-dependent characteristics of the observed linear ReOsc model result from asymmetries in the ReOsc model parameters. The normalized model parameters (Table 2) allow us to compare the dynamics of the two tendency equations irrespective of the physical units of T and h. Here we can note that the main asymmetries in the two dynamical equations are in the growth rates. The growth rate of T is strongly negative, and therefore T is damped. The growth rate of h is slightly positive and therefore unstable. In contrast, the coupling parameters and the strength of the noise forcing are nearly identical in magnitude for both equations. Thus, it is the asymmetry in the growth rates of T and h that causes the phase-dependent characteristics of the observed linear ReOsc model.
The asymmetries in the growth rates have consequences for the growth and decay of the ENSO system at different phases. This is best illustrated if we split the total tendencies of the system (Eqs. [1-2]) into a dynamical part (the first two terms on the right-hand side) and a noise-driven part (the last term on the right-hand side). The dynamical part can be calculated based on Eqs. [1-2] for any given T and h, and the noise part is estimated as the difference between the total tendencies and the dynamical part.

First, we can note that in this observed linear ReOsc model, all statistics presented are phase dependent (Fig. 8), indicating that the observed linear ReOsc model does create structure in the phase space.
This contrasts with what may be expected from an idealised damped oscillator or the idealised linear ReOsc model (compare with Fig. 6). Secondly, we also note that all statistics are symmetric for opposing phases (e.g., shifts by 180 o ). This is a result of the linear approach in the ReOsc model, which assumes that the sign of T and h are irrelevant, and all feedbacks are symmetrical.
The probability distribution and tendencies in the ENSO phase space of the observed linear ReOsc model have some similarities with the observed statistics (compare Fig. 8a and b with Figs. 2 and 3a). This is also quantified by the correlations of these phase-dependent statistics with those observed (see r-values in Fig. 8). The following similarities can be noted: (1) the likelihoods are much higher in the Q1 and Q3 quarters than in the Q2 and Q4 quarters; and (2) the phase transition speed is larger in the Q2 and Q4 quarters than in the Q1 and Q3 quarters.
In contrast, the observed linear ReOsc model also has clear deviations from the observed statistics. These are observed statistics that are asymmetric for opposing phases. The following important mismatches can be noted: There is a clear asymmetry in the observed probability of extreme ENSO anomalies, with much higher likelihoods around the Q2 quarter and lower likelihoods in the Q4 quarter, which is not present in the observed linear ReOsc model (compare Fig. 3a with Fig. 8b). The observed growth rate around phases of 0° is much larger than around 180°, which is not captured by the observed linear ReOsc model (compare Fig. 5a with Fig. 8c).
Figure 9 shows the total tendencies and their dynamical and noise-driven parts for the observed linear, idealized and uncoupled ReOsc models. Starting with the uncoupled ReOsc model (Fig. 9a) we can see that the mean tendencies are zero. For the radial part, which is related to the growth, we can see that the dynamical and noise parts of the tendency balance each other, with the dynamical tendencies damping and therefore pointing towards the origin. The noise part points away from the origin, indicating that the noise leads to the growth of the system. This may at first seem strange, since the noise part is by construction random and therefore should not have a preferred direction. Here we need to remember that in the phase space diagram we are considering conditional probabilities. For instance, if we are at S = 1 and φ = 30° (Fig. 9a), then the ENSO system must have arrived at this point due to its past tendencies. Since the system is overall stationary and damped by the dynamics, it is by statistical average that it would arrive at this point, that is away from the origin, due to the noise. Thus, the noise is overall creating the variability leading to growth in general. This balance between dynamical damping and growth by the noise forcing is also present in the observed linear and idealized ReOsc models (Fig. 9b and c).
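The split of the total tendencies into a dynamical part and a noise residual, as used for Fig. 9, can be sketched as follows (illustrative only):

```python
import numpy as np

def split_tendencies(T, h, dT, dh, a11, a12, a21, a22):
    """Split total tendencies into the dynamical part (linear terms of
    Eqs. (1)-(2) evaluated at the current state) and the noise part
    (residual), as described in the text (illustrative sketch)."""
    dyn_T = a11 * T + a12 * h
    dyn_h = a21 * T + a22 * h
    noise_T = dT - dyn_T
    noise_h = dh - dyn_h
    return (dyn_T, dyn_h), (noise_T, noise_h)
```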
The dynamical growth rate of the system is directly related to the ReOsc model growth rate of T(a 11n ) and h(a 22n ). The strongly negative a 11n leads to strongly negative growth rate of the system when T n is large (at phases 90 o and 270 o ). The weakly positive a 22n leads to near zero growth rates when h n is small and weakly positive growth rates of the system when h n is large (at phases 0 o and 180 o ). The entire phase dependency of the growth rate of the system is directly related to these two extreme cases.
The phase dependencies of the magnitudes of the dynamical tendencies and the phase transition speed have similar structures as those of the dynamical growth rate of the system. However, they have maxima and minima at different phases. The phase dependency of the magnitude of the dynamical tendencies can best be understood if we look at the equation for the magnitude of the dynamical tendencies: Considering that a 21n ≈ −a 12n and |a 11n | |a 22n | we find: present in the observed linear and idealized ReOsc models ( Fig. 9b and c). The uncoupled ReOsc model has also no mean tangential tendencies for transition to another phase (Fig. 9a). Here both the dynamical and the noise part are zero. A mean phase transition in the ReOsc model is caused by the dynamical coupling between T and h [Lu et al. 2018], which is by construction zero in the uncoupled ReOsc model.
For the idealized ReOsc model, the mean dynamical and noise terms add up to have perfectly circular motion with the mean tendencies only having a tangential part and zero radial part. The dynamical part has a negative radial component; which is compensated by a positive radial noise as mentioned above, and a larger tangential component; which leads to the clockwise phase transition of the whole ENSO system. In this idealized ReOsc model all dynamical and noise parts of the tendencies are the same for all phases (Fig. 9b). Hence, it is entirely symmetrical in all parts.
The observed linear ReOsc model is similar to the idealized ReOsc model, but all elements of the mean tendencies are phase dependent, this includes the radial and tangential parts of both the dynamical and noise part (Fig. 9c). Starting with the dynamical part of the tendencies, we can see that the radial part (growth) is pointing towards the origin (is negative) at phases 90 o and 270 o , but is close to zero at phases 0 o and 180 o . Further, we can see that the overall tendencies and the tangential parts are larger at phases 315 o and 135 o , and smaller at phases 45 o and 215 o .
We can best understand these different phase dependencies of the dynamical tendencies by examining the ReOsc model Eqs. [1-2] with the normalized model parameters. The asymmetry in the growth rates of T (a_11n) and h (a_22n) directly leads to a phase dependency of the dynamical magnitude of the tendencies, with maxima and minima at different phases than the growth rate. Similar computations (not shown) find that the dynamical transition speed has maxima and minima at phases similar to those of the magnitudes, but shifted closer to the maxima and minima of the growth rate. The squared magnitude of the dynamical tendencies can be approximated as

|d/dt (T_n, h_n)_dyn|^2 ≈ (a_11n T_n)^2 + (a_12n h_n)^2 + (a_12n T_n)^2 − |2 a_11n a_12n| T_n h_n   (5)

Here we can note that all terms add up if T and h have opposing signs (quarters Q2 and Q4), but if T and h have the same sign, the last term of the equation acts against the other terms, reducing the magnitude of the tendencies. In summary, we can say that the dynamical tendencies of the observed linear ReOsc model have phase dependencies resulting from the asymmetries in the dynamical growth rates of T and h. This makes the system anomalies decay when |T| is large and grow when |h| is large. The dynamical phase transition speed is largest at about 45° before we reach the largest dynamical growth rates, and smallest at about 45° after we reach the largest dynamical growth rates. This falls in phase with the minima and maxima of the mean ENSO system anomalies (red line in Fig. 8a or 9c). As a result, the observed linear ReOsc model transitions fast at phases with relatively small mean anomalies, and transitions slowly at phases with relatively large mean anomalies.
Interestingly, the noise part of the tendencies of the observed linear ReOsc model is also phase dependent (Fig. 9c), although we have assumed by construction that the noise is purely random and not state dependent. Nevertheless, the phase-dependent dynamical parts of the tendency also lead to noise tendencies that are effectively phase-dependent. The radial part of the noise tendencies is always positive, but smaller at phases 30 o and 210 o , and larger at phases 120 o and 300 o . The tangential part of the noise tendencies is weak, but not zero. It is acting against the clockwise phase transition and is most strongly negative at phases 150 o and 330 o .
Non-linear recharge Oscillator
The above discussion has shown that the observed linear ReOsc model can capture a few characteristics of the observed ENSO phase space but has also illustrated that there are some asymmetries in the phase space that cannot be captured by a linear ReOsc model (e.g., asymmetries for opposing phases).
It is therefore instructive to consider non-linear ReOsc models to study how they would represent the ENSO phase space. Previous studies have suggested several different approaches to incorporate non-linear aspects of ENSO into the ReOsc model [e.g., Frauen and Dommenget 2010;Kim and An 2020;Levine et al. 2016]. These studies focused mostly on non-linear growth rates of T, state dependent noise or considered other non-linear elements in the ReOsc model. It is beyond the scope of this study to explore which non-linear process may explain the observed ENSO nonlinearities. However, we do want to provide an example to illustrate what a non-linear model could do and what it may be missing in the ENSO phase space.
We chose to focus on a non-linear growth rate of T and follow the approach of Frauen and Dommenget [2010] by assuming a quadratic function. The fitted non-linear parameters of this model suggest a stronger negative feedback for large negative T values, and a weaker or positive feedback for large positive T values. This is qualitatively similar to models suggested in previous studies [e.g., Frauen and Dommenget 2010; Geng et al. 2019; Kim and An 2020]. We integrate this model with the same noise forcing as for the linear model. We refer to this model as the non-linear ReOsc model.
Several phase space statistics of this model are shown in Fig. 11. First, we can note that the non-linear ReOsc model has clear phase-dependent statistics. Unlike in the linear ReOsc model, the statistics are also different for opposing phases. For instance, the phase-dependent mean values (red line in Fig. 11a) differ between phases 90° and 270° (e.g., positive vs. negative T values).
The non-linear ReOsc model does capture the observed phase-depending characteristic that the observed linear therefore use Eq. [2] of the ReOsc model and change Eq. [1] to include a non-linear growth rate of T: We used a Nelder-Mead optimization scheme [Nelder and Mead 1965] to estimate the non-linear model parameters (a 11−2 , a 11−1 , a 11−0 ). The cost function for this optimization is based on integrating the model for 1000yrs and estimate the monthly mean distribution parameters. These are: the mean, stdv and skewness for T and h, and also the correlation between T and h. The root mean square of the differences in these statistics between the observed and the model values define the cost function of our optimization fit. The values of the non-linear model are shown in Table 1. The and 3a with Fig. 11a, b). The growth rate of the tendencies is larger for phases around 0 o than they are for phases around 180 o . The mean phase transition is slowest in Q3 and fastest in the Q2 quarter. While the non-linear ReOsc model is clearly closer to the observed phase space than the linear model there are also a number of significant mismatches between the non-linear ReOsc model captures, but also captures a few other characteristics. This is also quantified by larger correlation values in the phase-depending statistics (compare r values in Figs. 8 and 11). The following additional similarities to the observed can be noted: The mean and probabilities of the ENSO system are shifted away from the Q4 quarter and towards the other quarters Q1, Q2 and Q3 (compared Figs. 1 would correspond to a full ENSO cycle period of 35yrs to 1.7yrs, respectively. The extremely small phase transition values suggest that the ENSO cycle stalls and is potentially interrupted. This is also reflected in the total phase transition of the system (Fig. 11d). We can further note that small S anomalies transition faster relative to large S anomalies in quarters Q2 and Q3. The opposite is true in quarters Q1 and Q4.
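The Nelder-Mead fit of the non-linear model can be sketched as below. The quadratic form of the state-dependent growth rate of T is an assumption made for illustration (in the spirit of Frauen and Dommenget [2010]) and is not necessarily the exact functional form used in the study; the interface and names are likewise assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skew

def simulate_nonlinear(params, a12, a21, a22, noise_T, noise_h,
                       n_months=12_000, seed=1):
    # Assumed quadratic state-dependent growth rate of T:
    # dT/dt = (a11_0 + a11_1*T + a11_2*T**2)*T + a12*h + noise
    a11_2, a11_1, a11_0 = params
    rng = np.random.default_rng(seed)
    T = np.zeros(n_months)
    h = np.zeros(n_months)
    for t in range(n_months - 1):
        growth_T = a11_0 + a11_1 * T[t] + a11_2 * T[t] ** 2
        T[t + 1] = T[t] + growth_T * T[t] + a12 * h[t] + noise_T * rng.standard_normal()
        h[t + 1] = h[t] + a21 * T[t] + a22 * h[t] + noise_h * rng.standard_normal()
    return T, h

def cost(params, obs_stats, *model_args):
    """Root-mean-square mismatch of mean, stdv, skewness of T and h,
    and the T-h correlation, between simulation and observations."""
    T, h = simulate_nonlinear(params, *model_args)
    sim = np.array([T.mean(), T.std(), skew(T),
                    h.mean(), h.std(), skew(h),
                    np.corrcoef(T, h)[0, 1]])
    return float(np.sqrt(np.mean((sim - obs_stats) ** 2)))

# Example usage (obs_stats and the linear parameters must be supplied):
# result = minimize(cost, x0=[0.0, 0.0, -0.05],
#                   args=(obs_stats, a12, a21, a22, noise_T, noise_h),
#                   method="Nelder-Mead")
```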
These large variations in the phase transition do affect the power spectrum of T, by reducing the power at the peak oscillation period and increasing the power at all other frequencies (Fig. 4c). In particular, they enhance the decadal variations of T. Thus, the non-linearity in the growth rate of T broadens the power spectrum, making it more realistic.
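The broadening of the spectrum can be diagnosed directly from the model time series; below is a hedged sketch using Welch's method. The placeholder series and sampling assumptions are illustrative, not those of the study.

```python
# Sketch: compare power spectra of monthly T from a linear and a non-linear model run.
import numpy as np
from scipy.signal import welch

def spectrum(T_monthly, nperseg=512):
    # frequency axis in cycles per year for monthly data (12 samples per year)
    return welch(T_monthly, fs=12.0, nperseg=nperseg)

# T_linear and T_nonlinear would be long monthly time series from the two models
T_linear = np.random.default_rng(0).standard_normal(12_000)      # placeholder series
T_nonlinear = np.random.default_rng(1).standard_normal(12_000)   # placeholder series
f, p_lin = spectrum(T_linear)
_, p_non = spectrum(T_nonlinear)
print(f[np.argmax(p_lin)], f[np.argmax(p_non)])   # peak frequencies (cycles per year)
```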
Predictability
We would expect that the variations in the ENSO characteristics at different phases of the ENSO cycle lead to differences in the predictability of ENSO at different phases. We can get some indication of how predictable the observed ENSO may be at different phases of the ENSO cycle by studying the predictability of the linear and non-linear ReOsc models discussed above.
First, we use the non-linear ReOsc model to start 8 ensembles of 100 members at different phases of the ENSO cycle with an initial S = 2; see Fig. 12. For each ensemble member a different realisation of the noise forcing was used, and the integration of the model was done for 12 months. We can define an ensemble mean S and phase φ for each forecast lead month, defining a mean position in the phase space. This is equivalent to a mean T and h (see solid lines in Fig. 12). The spread can be estimated by the distance of each ensemble member to the mean T and h for each forecast month and is shown as dashed lines in Fig. 12. Note that in this representation we neglect the fact that the spread is not just in S, but also in the phase φ, as can be seen in the individual ensemble members in Fig. 12a.
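A minimal sketch of such an ensemble experiment is given below. The linear model form, its parameters, and the (S, φ) convention (with T = S·sin φ so that T = 0 at phase 0°) are assumptions made for illustration only.

```python
# Sketch: one 100-member ensemble started at a chosen ENSO phase with S = 2.
import numpy as np

def step(T, h, eT, eh, a11=-0.05, a12=0.1, a21=-0.1, a22=-0.02):
    # one monthly step of a linear recharge-oscillator-type model (assumed parameters)
    return T + a11 * T + a12 * h + eT, h + a21 * T + a22 * h + eh

rng = np.random.default_rng(1)
S0, phi0 = 2.0, np.deg2rad(0.0)
T = np.full(100, S0 * np.sin(phi0))   # assumed convention: T = S sin(phi), h = S cos(phi)
h = np.full(100, S0 * np.cos(phi0))
for month in range(12):
    eps = 0.1 * rng.standard_normal((2, 100))
    T, h = step(T, h, eps[0], eps[1])
    Tm, hm = T.mean(), h.mean()                       # ensemble-mean position in phase space
    S_mean = np.hypot(Tm, hm)                         # ensemble-mean amplitude
    phi_mean = np.degrees(np.arctan2(Tm, hm)) % 360   # ensemble-mean phase
    spread = np.hypot(T - Tm, h - hm).mean()          # mean distance to the ensemble mean
print(S_mean, phi_mean, spread)
```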
The first example, starting at phase φ = 0 o (Fig. 12a), illustrates how the ensemble spreads out in terms of amplitude (S) and phase (φ). Some ensemble members decay in amplitude, while others grow or stay at the same amplitude. There are fairly large variations in the phase propagation, with some ensemble members propagating much further in the ENSO phase than the ensemble mean while others almost do not transition at all in the ENSO phase, but stay close to the initial phase.
The growth of the forecast ensembles is strongly depending on the initial starting phase (see Fig. 12b). Forecast ensembles that start at phases where the growth rate is relatively large (see Fig. 11c) will have ensemble means that do not decay as fast, as seen for the ensembles starting at phases 0°, 135°, 180° and 315°. The opposite holds true for ensembles that start at phases where the growth rate is strongly negative (e.g., 90° or 270°).
The phase transition speed is also strongly depending on the initial starting phase, with the fastest phase transition for the ensembles starting at phase 135° and the slowest at 225° (see Fig. 12b). Here it should be noted that all forecast ensembles have the same length in time (6 mon), but appear to have different lengths in the phase space diagram due to their different phase transition speeds. The phase transition speed variations are strongly linked to the mean phase transition speeds (see Fig. 11d). The combination of the growth rate variations and phase transition variations splits the ENSO cycle into phases where the system clearly follows an ENSO cycle (around 315° to 30° and 135° to 210°) and phases where the system is more or less collapsing and not propagating much (around 210° to 300° and 30° to 90°).
We can evaluate the predictability of T and h in terms of the anomaly correlation skill as a function of phase within the ENSO cycle, based on the linear and non-linear ReOsc models. For this, we integrate a long control simulation, from which we start one additional simulation with different noise forcing every 60 months for a 9-month lead forecast. We do this 3.6 × 10⁴ times, which roughly gives us about 100 forecasts for every 1° of the ENSO cycle. We then estimate the anomaly correlation between each forecast run and the control run for all forecasts whose initial phase falls within ±15° of the reference phase of T and h at different lead times; see Fig. 13.
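The verification step described above can be written compactly as follows; the array shapes, function name and choice of reference phases are assumptions for illustration, not the study's code.

```python
# Sketch: anomaly correlation skill of T as a function of the initial ENSO phase.
import numpy as np

def acc_by_phase(forecasts, controls, phases0, ref_phases=range(0, 360, 45), half_width=15.0):
    """forecasts, controls: (n_forecasts, n_lead_months); phases0: initial phases in degrees."""
    skill = {}
    for ref in ref_phases:
        d = (phases0 - ref + 180.0) % 360.0 - 180.0     # signed angular distance to the reference
        sel = np.abs(d) <= half_width                   # forecasts starting within +/- 15 degrees
        f, c = forecasts[sel], controls[sel]
        skill[ref] = np.array([np.corrcoef(f[:, m], c[:, m])[0, 1] for m in range(f.shape[1])])
    return skill

# Example with synthetic data: 36,000 forecasts with 9 lead months
rng = np.random.default_rng(2)
ctrl = rng.standard_normal((36_000, 9))
fcst = ctrl + 0.5 * rng.standard_normal((36_000, 9))     # imperfect "forecasts"
phases0 = rng.uniform(0, 360, 36_000)
print(acc_by_phase(fcst, ctrl, phases0)[0])
```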
Starting with the forecasts of the idealised linear ReOsc model, we can note a clear structure in the phase space for both the T and h anomaly correlation skill (Fig. 13a and b). First, we have to recall that the idealised linear ReOsc model has no phase-depending ENSO characteristics, as discussed above. Subsequently, the structure that we see in the anomaly correlation skill scores is a characteristic of the phase space presentation, not a reflection of the characteristics of the idealised linear ReOsc model itself. For instance, at phases 0° and 180°, T is zero, and an anomaly correlation skill score for T at these phases must be zero too. Another instance is a 3-month lead forecast starting at 340°. This will on average end up at phase ~0° and will therefore have small anomaly correlation skill scores for T. Accordingly, the minima and maxima will shift for different lead times, and the anomaly correlation skill scores for h will be shifted by 90°.
The anomaly correlation skill scores for the observed linear ReOsc model are very similar to those of the idealised linear ReOsc model (Fig. 13c and d). However, there are some small differences. Due to the asymmetries in ENSO amplitudes and phase transition speeds for this particular model, the correlation skill score for h is not 90° but only about 60° out-of-phase with those of T.
The non-linear ReOsc model shows some clear asymmetries in anomaly correlation skill scores that are different from those of the linear ReOsc models. Anomaly correlation skill scores for T are in general larger in quarters Q1 and Q2 and lower in Q3 and Q4. Asymmetries are even more pronounced for the correlation skill scores of h, with much larger skill scores in Q2 (values at around 0.6) than in the Q4 quarter (values around 0.2). This suggests that the discharge state of ENSO (at around 180°) is much more predictable than the recharge state (at around 0°-30°). It is remarkable that we observe stronger non-linearities in the forecast skill of h than in T, considering that the non-linear ReOsc model discussed here is only non-linear in the tendencies of T, but is linear in the tendencies of h.
Summary and discussion
In this study we introduced the ENSO phase space for a detailed analysis of the ENSO dynamics. The observed ENSO phase space showed several interesting asymmetries that reflect important aspects of ENSO dynamics. In agreement with Kessler [2002], we find that the probability distribution of ENSO phases has some clear asymmetries for large ENSO amplitudes, with lower probabilities to be within the Q4 quarter (La Nina to recharge state).
An important aspect of the ENSO phase diagram is that it allows the analysis of ENSO tendencies as a function of the ENSO phase. The spherical coordinate system of the ENSO phase space diagram allows us to define tendencies in the radial and tangential direction. A normalization of the radial tendencies defines the ENSO system growth rate as a function of the phase. While, by construction, the mean growth rate in this definition must be zero, the growth rate at different phases shows clear deviations from zero, with positive growth rates around and after the recharge state (330° to 45°) and negative growth rates around the El Nino (70° to 120°) and La Nina states (210° to 270°).
A normalization of the tangential tendencies defines an ENSO system phase transition speed, which, if integrated over the whole cycle, gives an estimate of the ENSO period. The mean observed phase transition speed varies substantially as a function of the ENSO phase, with fast transitions in quarter Q2 (after the El Nino state) and the slowest around the La Nina state (220° to 260°). This is somewhat consistent with the argument put forward in Kessler [2002] that ENSO is more event-like rather than a cycle, where it remains in weak La Nina-like states for longer periods of time. However, the phase transition speed is significantly positive for all phases of the ENSO cycle, also supporting the idea that ENSO is indeed cyclic, though the speed or "clearness" of the phase transitions varies substantially over the ENSO cycle.
The underlying dynamical cause for the observed structures in the ENSO phase space is best analysed with a linear or non-linear ReOsc model. We illustrated that a linear model can explain some of the observed structures in the ENSO phase space and a non-linear ReOsc model can explain most of the remaining asymmetries.
A fit of a linear ReOsc model to the observed data and a normalization of the units reveal an asymmetry in the growth rates of T and h, with a negative growth rate for T and a weakly positive growth rate of h. The coupling parameters and the strength of the noise forcing show no asymmetries. The asymmetry in the growth rates reflects a positive in-phase (lag zero) correlation between T and h. This positive correlation explains the observed characteristics in the ENSO phase space that are symmetric for opposing phases (shifted by 180°), including enhanced growth rates at phases around 0° and 180°, and reduced growth rates at phases around 90° and 270°. This explains the enhanced phase transition speeds in the quarters Q2 and Q4 and reduced phase transition speeds in the quarters Q1 and Q3.
The positive in-phase (lag zero) correlation between T and h is not ideal in the context of the ReOsc model, suggesting that this is not an accurate presentation of the ENSO phase space, as it assumes that T and h should be out-of-phase (zero correlation at lag zero). Other studies assume that the western equatorial thermocline is a good presentation of the ReOsc model [e.g., Jin 1997b; Chen et al. 2021], but the western equatorial thermocline has a significant negative correlation with T at lag zero [e.g., Chen et al. 2021].
It is likely that the Z20 estimate of the thermocline depth (h) is causing a problem. Vijayeta [2020] analysed how differences in the estimation of h affect the ReOsc model presentation. The study found that a more accurate estimation of h, by a maximum temperature gradient approach, yields a nearly perfect out-of-phase correlation between T and h. This suggests that a better estimate of h would improve the ENSO phase space presentation.
The ReOsc model with a non-linear growth rate for T can explain most of the asymmetries in the observed phase space that cannot otherwise be explained by the linear ReOsc model. A non-linear growth rate for T reproduces the observed shift in the likelihoods for large ENSO anomalies away from quarter Q4 and towards the quarters Q1-Q3. It further reproduces the strongly reduced phase transition speed in quarter Q3 (discharge to La Nina state) and the enhanced phase transition speed in quarter Q2 (El Nino to discharge state).
The variations in ENSO phase transition speed, as captured by the non-linear ReOsc model, lead to a more realistic power spectrum, with a broader interannual peak and enhanced decadal variability. The latter is consistent with earlier studies suggesting that ENSO non-linearity causes decadal ENSO variability [Rodgers et al. 2004; Wittenberg et al. 2014]. Here it is important to note that the mean ENSO period is primarily controlled by the coupling parameters [Lu et al. 2018], which have been kept linear in this model.
However, the ReOsc model with a non-linear growth rate for T cannot explain all aspects of the observed ENSO phase space. In particular, the observed lag-lead cross-correlation between T and h, with enhanced cross-correlation when T leads h, is not well captured by the model.
The phase-depending ENSO characteristics should affect the predictability of ENSO. The non-linear ReOsc model suggests that ENSO predictability changes along the phases depending on the lead-time of the prediction. It affects the amplitude and phase transition differently, whilst also being different for T and h, respectively. In particular, h is most predictable in quarter Q2.
The ENSO phase space presentation introduced here provides many opportunities for further studies. A key aspect that needs to be addressed in future studies is the in-phase correlation between T and h, which dominates the ENSO phase space characteristics and therefore potentially hides more interesting aspects of phase-depending ENSO dynamics. This is most likely related to how h is estimated by Z20 rather than by true vertical profile gradient methods. More generally, other aspects of the estimation of h, such as the meridional or zonal range, may affect the ENSO phase space representation.
A further aspect that has not been discussed here is the seasonal changes in ENSO dynamics [e.g., Li 1997;Tziperman et al. 1998;McGregor et al. 2013;Zhu et al. 2015;Dommenget and Yu 2016]. It needs to be considered that each quarter of the ENSO phase space should be transiting through all four seasons of the year. We therefore would expect seasonal variations in the ENSO phase space in all four quarters.
The discussion presented here for the dynamical phase of ENSO can also be applied for other climate modes. The Madden-Julian oscillation (MJO), for instance, has a welldefined dynamical phase space [e.g., Wheeler and Hendon 2004;Oliver and Thompson 2016]. The discussion, presented here for ENSO, could be applied for the MJO or other climate modes in a similar way.
Acknowledgements This study was motivated by the Honours Bachelor of Science project of Maryam Al Ansari in 2021. We would like to thank Tobias Bayr, Shayne McGregor, Peter van Rensch and the two anonymous reviewers for helpful comments and discussions. This study was supported by the Australian Research Council (ARC), discovery project "Improving projections of regional sea level rise and their credibility" (DP200102329) and the Centre of Excellence for Climate Extremes (Grant Number: CE170100023).
Funding Open Access funding enabled and organized by CAUL and its Member Institutions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/. | 12,170.2 | 2022-08-11T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
LYAPUNOV EXPONENTS FOR HIGHER DIMENSIONAL RANDOM MAPS
A random map is a discrete time dynamical system in which one of a number of transformations is selected randomly and implemented. Random maps have been used recently to model interference effects in quantum physics. The main results of this paper deal with the Lyapunov exponents for higher dimensional random maps, where the individual maps are Jabłoński maps on the n-dimensional cube.
Introduction
Ergodic theory of dynamical systems deals with the qualitative analysis of iterations of a single transformation. Ulam and von Neumann [12] suggested the study of more general systems where one applies at each iteration a different transformation chosen at random from a set of transformations. In this setting one could consider a single transformation, where parameters defining the transformation are allowed to vary discretely or even continuously.
The importance of studying higher dimensional random maps is, in part, inspired by fractals that are fixed points of iterated function systems [1]. Iterated function systems can be viewed as random maps, where the individual transformations are contractions. Recently, random maps were used in modeling interference effects such as those that occur in the two-slit experiment of quantum physics [2]. For a general study of ergodic theory of random maps, the reader is referred to the text by Kifer [8]. Additional ergodic properties of random maps can be found in [4, 5, 10] and [11].
One of the most important ways of quantifying the complexity of a dynamical system is by means of the Lyapunov exponent. This quantifier of chaos can be defined for random maps. In this paper, we develop formulas for the individual Lyapunov exponents for higher dimensional maps, where the basic maps are Jabłoński maps on the n-dimensional cube [7].
Research supported by NSERC and FCAR grants.
Lyapunov Exponents
Our considerations are based on Oseledec's Multiplicative Ergodic Theorem [9]. Let $(X, \mathfrak{B}, m)$ be a probability space and let $\tau$ be a measurable transformation $\tau: X \to X$ preserving an invariant measure $\mu$, absolutely continuous with respect to $m$. Let $A: X \to GL(n, \mathbb{R})$ be a measurable map with $\int \log^{+} \|A(\cdot)\| \, d\mu < +\infty$. Then, in particular, the limit
$$\lambda_{v} = \lim_{k \to \infty} \frac{1}{k} \log \| A(\tau^{k-1}x) \, A(\tau^{k-2}x) \cdots A(\tau x) \, A(x) \, v \|$$
exists for any $v \in \mathbb{R}^{n}$ and $\mu$-almost any $x \in X$. The number $\lambda_{v}$ can take one of at most $n$ values $\lambda_{1}, \ldots, \lambda_{n}$.
In this note, $X = I^{n} = [0,1]^{n}$ and $m$ is the Lebesgue measure on $I^{n}$. $\tau$ is a piecewise expanding $C^{2}$ transformation and $A(x)$ is the derivative matrix of $\tau$, where it is well defined (it is not defined on a set of measure 0). In this case, the numbers $\lambda_{1}, \ldots, \lambda_{n}$ are called Lyapunov exponents.
For the Jabłoński transformation $\tau$, the derivative matrix is diagonal wherever it is defined, and for an expanding Jabłoński transformation there exists a measure $\mu$ invariant under $\tau$ (with density $f$) with respect to Lebesgue measure.
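As a concrete illustration of these definitions (not an example from the paper): for a product map of the square whose derivative matrix is diagonal, the Lyapunov exponents are Birkhoff averages of the logarithms of the componentwise derivatives. The map sketched below has full linear branches, preserves Lebesgue measure, and its exact exponents are p·log(1/p) + (1−p)·log(1/(1−p)) and log 3.

```python
# Sketch: Lyapunov exponents of a two-dimensional Jablonski-type map
# tau(x, y) = (tau1(x), tau2(y)) on the unit square, estimated as Birkhoff
# averages of the logarithms of the (diagonal) derivative entries along an orbit.
import math, random

P = 0.3   # branch point of tau1; both branches are expanding and Lebesgue measure is invariant

def tau1(x):                       # piecewise linear, full branches, slopes 1/P and 1/(1-P)
    return x / P if x < P else (x - P) / (1.0 - P)

def dtau1(x):
    return 1.0 / P if x < P else 1.0 / (1.0 - P)

def tau2(y):                       # tripling map, slope 3 almost everywhere
    return (3.0 * y) % 1.0

def estimate(n=200_000, seed=1):
    random.seed(seed)
    x, y = random.random(), random.random()
    s1 = s2 = 0.0
    for _ in range(n):
        s1 += math.log(dtau1(x))
        s2 += math.log(3.0)
        x, y = tau1(x), tau2(y)
    return s1 / n, s2 / n

# Expected values: P*log(1/P) + (1-P)*log(1/(1-P)) ~ 0.611 and log 3 ~ 1.099
print(estimate())
```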
All measures considered in this paper are assumed to be probability measures.
The transformations $\tau$ we consider have a finite number of ergodic absolutely continuous invariant measures. To simplify our considerations, we assume that the absolutely continuous invariant measure $\mu$ is unique. Maps which are piecewise onto will satisfy this condition, as will maps which are Markov and for which the matrix A is irreducible. In the general case we would consider each ergodic absolutely continuous measure $\mu_{1}, \ldots, \mu_{k}$ separately, and our formulas would hold for each of them $\mu_{i}$-a.e.
We do not assume that the $\lambda_{i}$'s are numbered in increasing order, nor that they are all different.
For the quasi-Jabłoński transformation $\tau$, the derivative matrix is given by $A = A_{j}$ for $x \in D_{j}$. If there exists a constant $s > 1$ such that $\inf_{i,j} \inf_{[a_{ij}, b_{ij}]} |\tau_{ij}'| \geq s$, then $\tau^{2}$ is an expanding Jabłoński transformation and there exists a measure $\mu$ invariant under $\tau^{2}$ with density $f$ with respect to Lebesgue measure.
If we take $v_{2} = (0, 1)$ then, as above, we have
$$\lambda_{2} = \lim_{k \to \infty} \frac{1}{k} \log \| A(\tau^{k-1}x) \cdots A(\tau x) \, A(x) \, v_{2} \|,$$
which, as above, reduces to a sum over $j$ of integrals over $D_{j}$ of the logarithms of the componentwise derivatives (evaluated at $x_{1}$ and $x_{2}$) weighted by $f(x)\,dx$. It is easy to see that no other value is possible as a limit.
Random Maps
Let $\{\tau_{t}\}_{t=1}^{\infty}$ be a sequence of transformations from $X$ into $X$. A random map $\mathcal{T} = \{\{\tau_{t}\}_{t=1}^{\infty}, \{p_{t}\}_{t=1}^{\infty}\}$ is a discrete dynamical system where, at each iteration, $\tau_{t}$ is chosen with probability $p_{t}$, $p_{t} > 0$, $\sum_{t=1}^{\infty} p_{t} = 1$.
If $n = 1$, then (6.1) reduces to $\sum_{t=1}^{\infty} p_{t} \int_{0}^{1} \log|\tau_{t}'(x)| \, f(x) \, dx$. Proof: Let $f_{1}(x, \omega) = \log|\tau_{\omega_{1}}'(x)|$. Since the shift $(\sigma, P)$ is ergodic (even exact), and we assume uniqueness of the absolutely continuous $\tau_{t}$-invariant measures, there exists a unique $\mathcal{T}$-invariant absolutely continuous measure ([10]), and it gives a $\mathcal{T}$-ergodic measure $\mu \times P$. In the general case, we can have a finite number of such measures, not more than the minimal number of absolutely continuous invariant measures for the $\tau_{t}$, $t = 1, 2, \ldots$ In particular, if at least one of the $\tau_{t}$ has a unique absolutely continuous measure, then the $\mathcal{T}$-invariant measure $\mu$ is unique ([5]). In the case of more than one invariant measure, the above formula holds for $x$ in the support of any fixed $\mu$, $\mu$-a.e.
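To illustrate the n = 1 formula (this is an illustration, not a computation from the paper): for a random map built from the doubling and tripling maps, Lebesgue measure (f ≡ 1) is invariant for each map, and the formula gives λ = p·log 2 + (1−p)·log 3, which a time average along a randomly generated orbit reproduces.

```python
# Sketch: Lyapunov exponent of a random map on [0, 1] built from the doubling
# and tripling maps, chosen with probabilities p and 1 - p; Lebesgue measure is
# invariant, so the weighted-integral formula gives p*log 2 + (1-p)*log 3.
import math, random

def random_orbit_exponent(p=0.4, n=200_000, seed=2):
    random.seed(seed)
    x, s = random.random(), 0.0
    for _ in range(n):
        if random.random() < p:
            s += math.log(2.0); x = (2.0 * x) % 1.0   # doubling map, |derivative| = 2
        else:
            s += math.log(3.0); x = (3.0 * x) % 1.0   # tripling map, |derivative| = 3
    return s / n

p = 0.4
print(random_orbit_exponent(p), p * math.log(2) + (1 - p) * math.log(3))
```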
Proof: The first part of the theorem follows from Theorem 1 and the observation that $\lambda(\mathcal{T}) = \tfrac{1}{2} \lambda(\mathcal{T}^{2})$. The last equality follows from the definition of the Lyapunov exponent and the fact that there are two iterations of $\mathcal{T}$ for each iterate of $\mathcal{T}^{2}$.
If $\mu$ is $\mathcal{T}^{2}$-invariant, then the corresponding averaged measure is $\mathcal{T}$-invariant. Since any $\mathcal{T}$-invariant measure is $\mathcal{T}^{2}$-invariant, if $\mu$ is unique, then $\mu$ is also $\mathcal{T}$-invariant. This implies that for any integrable function $f \in L^{1}(X, \mathfrak{B}, m)$, $\sum_{s} p_{s} \int f \circ \tau_{s} \, d\mu = \int f \, d\mu$, which simplifies the formulas for $\lambda_{1}$ and $\lambda_{2}$.
Random Composition of Jabłoński and Quasi-Jabłoński Transformations
Let $\tau_{1}$ be a Jabłoński transformation and $\tau_{0}$ a quasi-Jabłoński transformation on $I^{2}$.
In this section we will find the Lyapunov exponents for a random map $\{\tau_{1}, \tau_{0}; p, q\}$, where $p, q \geq 0$, $p + q = 1$. Consider a sequence of Jabłoński transformations $T_{1} = \tau_{1}$ and $T_{t} = \tau_{0} \circ \tau_{1}^{t-2} \circ \tau_{0}$ for $t = 2, 3, \ldots$, and let $P_{1} = p$, $P_{t} = p^{t-2} q^{2}$, $t = 2, 3, \ldots$ Instead of $\mathcal{T}$, we will consider the random Jabłoński transformation $\mathcal{T}_{1} = \{\{T_{t}\}_{t=1}^{\infty}, \{P_{t}\}_{t=1}^{\infty}\}$.
Lemma 1: $\lambda_{i}(\mathcal{T}) = \tfrac{1}{2} \lambda_{i}(\mathcal{T}_{1})$, $i = 1, 2$.
Proof: $\lambda_{i}(\mathcal{T}) = \lim_{k \to \infty} \tfrac{1}{k} \log \| A^{(k)}(x) \, v_{i} \|$. For $\lambda_{i}(\mathcal{T}_{1})$ we have the same expression under the norm sign, but the averaging factor $k$ is different. It is enough to prove that, on average, there are two iterations of $\mathcal{T}$ for each iteration of $\mathcal{T}_{1}$. The average number of iterations of $\mathcal{T}$ in each iterate of $\mathcal{T}_{1}$ is $\sum_{t} P_{t} \cdot \{\text{number of iterates of } \tau \text{ in } T_{t}\} = p + \sum_{t=2}^{\infty} p^{t-2} q^{2} \, t = 2$.
Lemma 2: If $\mu$ is $\mathcal{T}$-invariant, then $\mu$ is $\mathcal{T}_{1}$-invariant. | 1,568 | 1997-01-01T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Sustainable City- Green Walls and Roofs as Ecological Solution
The impact of urban development on the natural environment creates unique challenges for architects and the need to seek a change in design strategies by building green and sustainable buildings. Designing and displaying green elements such as roofs and walls becomes an important element in this sense. Greenery plays a very crucial role in the city space. Green roofs and walls are the missing link between the built environment and the natural environment. They can complement urban greenery. This paper aims to show the possibilities of green roof and wall solutions in the city, their aspects and impact on the environment and people. The research method is based on the analysis of selected existing objects with greenery solutions and showing their role in creating a sustainable city. The analysis shows that green roofs and walls offer many environmental, social and economic benefits. They have the ability to improve the microclimate and increase air humidity. Thus, they affect the health and well-being of the city's inhabitants. This technology should be considered a valuable part of the design process to tackle climate change and the energy crisis. Green roofs and facades are passive techniques and provide benefits in reducing the energy requirements of buildings, among other things, but also play a role in shaping a better visual aspect of the city. In the 21st century, people are slowly beginning to realize the advantages of green architecture, which is considered a new perspective also for the urban heat island problem. Thus, living roofs and walls are of major importance as part of a sustainable strategy for the urban environment. Sustainable cities will exist when society makes an informed choice to move towards a more sustainable lifestyle. Green roofs and walls are the solution for the future, for a better quality of life.
Introduction
In recent years, designing and displaying green elements such as roofs and walls have become important in urban landscapes. These elements can increase the city's environmental value. Greenery in the built environment is an important element of sustainable development and, therefore, sustainable, and healthy city.
For many years, progressive urbanization has harmed the city environment. For this reason, architects and city planners have begun to introduce new concepts and projects that are more environmentally friendly and can improve the quality of life of city residents. Overcoming the existing environmental problems has become the key to achieving sustainability in cities. The definition of sustainable development assumes that the needs of the current sustainable city residents must be met without reducing the opportunities of the city's future residents. Consequently, many challenges need to be overcome for cities to continue to provide the right conditions for living and working. Another important element is also the energy efficiency and promotion of the development of energy-saving technologies enabling energy recovery, improving air quality, supporting the recovery of building materials, and limiting materials damaging the natural environment and enabling the protection of the ecological zone.
This paper aims to present examples of the implementation of green building solutions in cities and their environmental benefits in terms of sustainable development. When creating sustainable cities, we think about the future. At the same time, society is paying more and more attention to the problems of our planet. Green solutions in buildings become an important element and a rescue for regaining balance in the environment. Through the ultimate potential of green roofs and walls to promote sustainable development, they are very important.
A huge number of buildings are being built around the world, and the challenge is to build them intelligently with minimum consumption of non-renewable energy, minimum pollution production, and minimum energy costs. Other important issues are increasing the comfort, health, and safety of people living and working in these constructions.
The genesis of green roofs and walls
The tradition of creating green roofs derives from the distant times of antiquity, as it existed already five thousand years ago in Egypt, where the population planted the roofs of their houses. Due to the dense development of cities that could not grow beyond their existing walls, the construction of private roof gardens was popular [1]. Even then, attention was paid to creating a harmonious, natural surroundings of buildings. The first commonly known example of the use of greenery in architecture from the Mesopotamian area is the so-called Hanging Gardens of Babylon. This was the result of the Babylonians' excellent ability to irrigate land and construct engineering structures. It is known that from the 3rd century BC, the Romans grew grapevines that climbed walls and special trellises. They used supports that allowed the plants to form the shape of a wall, a system called "palmette".
The industrial development in the 20th century, and in particular the emergence of reinforced concrete, contributed to the popularization of the idea of flat roofs. In the 1920s, architect Le Corbusier recognized the roof garden as an important point in the program promoting new architecture. On the other hand, the first steps to promote structures in which plants could develop freely creating vertical surfaces took place in the first half of the 20th century. Thus, various methods of turning roofs and facades green, which are more and more popular all over the world (and recently also in Poland), are not a new idea. Later, until the mid-1970s, intensive green roofs, i.e., usable roofs, were built. To further popularize green roofs, more economically viable construction systems were developed. Thus, at the beginning of the 1980s, the first extensive (light, inexpensive, ecological) green roofs were created, where the focus was not so much on usability, but on ecology and economy.
More intensive implementation of green roofs started in German-speaking countries in the 1970s [2]. Around the same time, green roofs gained popularity also in France and Switzerland.
Vertical gardens began to appear in the 1970s. In turn, the trend for the first vertical green walls was spread by a French botanist Patrick Blanc in the 1990s.
Terminology and possible applications
All over the world, big cities like New York, Melbourne, London, and Paris are introducing more and more greenery into their urban landscapes. These are both green roofs and green walls of buildings. These cities usually lack space for parks, and this kind of greenery associated with buildings can arise instead. There are several terminologies for green roofs. Green roofs are also named "eco-roofs", "living roofs" or "roof gardens", and are roofs with plants in their upper layer [3,4].
Plant-covered roofs can be extensive, semi-intensive and intensive. They mainly differ in the type of vegetation and the way of use. Extensive green roofs have a low construction height below 15 cm [5] and they are planted with plants having lower vegetation requirements, i.e., plants that can cope without excessive care by humans. They are plants with a high regeneration capacity, adapted to extreme climatic conditions, e.g., mosses, sedum, etc.
Semi-intensive green roofs are constructed with a height between 16 and 20 cm. It is also possible (apart from grasses) to incorporate higher plants such as shrubs and perennials. Limited watering and maintenance are required. It is possible for people to access them.
Extensive roofs are the most popular type of green roofs used on terraces in residential buildings as well as on garage panels and industrial constructions. They are the so-called maintenance-free greenery. Extensive roofs are characterized by lower vegetation and smaller water demand. In the case of such roofs, a very important element is the appropriate selection of the vegetation and drainage layer, which allows, through its accumulation capacity, to provide the plants with adequate water supplies. Extensive green roofs weigh less and are appropriate for large-sized rooftops while the process of their construction is technically simple and allows for implementation on sloped roofs [6].
In turn, intensive green roof systems allow the planting of some shrubs and even trees. Of course, they require a lot of care, irrigation, and a sufficiently strong supporting structure. However, they provide recreation and rest areas comparable to city parks. In intensive green roofs, an important factor, apart from the effective drainage of surplus water, is also its accumulation in a properly selected part. This water resource is used for optimal plant vegetation during periodic rainfall shortages.
The term "green wall" covers all forms of wall surfaces with plants. Also known as vertical gardens, they refer to the facades of buildings, vertically covered with vegetation. Such gardens fall broadly into two groups: green facades and living walls. The former ones are elements of the structure covered with climbing vegetation, the roots of which are in the ground. Vertical gardens can be created using all kinds of nets, fences, and metal tube structures that support the vertical growth of plants. However, it will be a kind of garden where the plants are planted directly into the ground, not in hanging panels.
Living walls, on the other hand, are more technologically complex. They usually have modular panels and special irrigation structures. Although there are many systems in which they are maintained, all of them require a lot of resources. Ready-made panels and pocket planters are very popular. The advantage is the variety of products with different dimensions. They are easy to assemble by fixing the panels to the walls using screws. To protect the wall against moisture, a PVC board should be installed between it and the panel. Ready panels are equipped with plant irrigation systems, and seedlings should be placed directly in appropriate "pockets" or sleeves.
Patrick Blanc, mentioned above, is a well-known creator of vertical gardens. In his concepts, he used plants of various forms and colors of leaves and flowers, ideally suited to the climatic conditions of a given place. He started to use stainless steel ropes for the construction of facades, and in the early 1990s, he introduced innovative mesh and modular grating systems. Blanc has designed a unique alternative to traditional soil cultivation - a non-woven, practically soilless system that provides the plants with an optimal amount of water on an ongoing basis. The structure of his vertical gardens is made of a metal frame to which PVC panels are attached. A double layer of felt mat is attached to them. Felt mats are made of recycled clothes from artificial materials which makes them a durable substrate (imitating soil) for plants. In its outer layer, holes are cut into which the cuttings are inserted. Over time, their roots grow into the mat, creating a compact whole. The green wall has an irrigation system that brings water with a minimum dose of fertilizer to the plants. That small amount of nutrients allows the plants to develop and prevents them from overgrowing. Rainwater may be used to water the garden. This method is often used by architects in sustainable architecture projects.
Contemporary examples of good practices in cities
All over the world, many facilities conducive to the formation of sustainable environment have been constructed. New technologies for growing plants on the roofs and walls of houses popularized this phenomenon. Green roofs have been built in many major cities such as Singapore, Hong Kong, New York, and many more. Vertical gardens have also begun to appear more and more often.
In Europe, technologies for the construction of green roofs and walls have gained great recognition. They result directly from new laws introduced and governmental financial support offered for such projects. This is visible in countries such as Germany, France, Austria, and Switzerland [7].
In France, we can find many examples of buildings with green roofs and walls. Paris, often referred to as one of the most beautiful cities in the world, is also distinguished by a special approach to arranging green areas. "Paris Garden Cultures" (Les Paris-culteurs) is an action designed to turn the city green over the course of several years. An interesting example of a green wall is the building of the Musée du quai Branly in Paris. It was designed by the architect Jean Nouvel, but the author of the green wall is the aforementioned Patrick Blanc. An interesting example of a green wall is also the vertical garden on the northern wall of the Halles d'Avignon shopping arcade in Avignon. It was also designed by Patrick Blanc. An interesting fact is that the green wall present there covers an area of about 600 square meters, with 1 square meter of the structure weighing 30 kilograms. On each square meter, around 20 plants were planted. It is a large plain of greenery that brings huge environmental benefits.
In Germany, there is a conscious policy for turning cities green. The model cities in which the strategy to support the construction of green roofs is carried out are Hamburg, Bremen, Stuttgart, and Munich, and in the field of green walls -Hanover and Munich. Similar activities supporting the development of green roofs have been carried out for a long time in London, Basel, Chicago, and Portland. In Copenhagen, the Adaptive Climate Plan was created, assuming many green initiatives and projects counteracting the negative effects of climate change. A green roof program was created and from 2010 it has been decided that all newly built and modernized flat roof buildings should be turned into green roofs. Moreover, the obligation to create green roofs exists in most local development plans.
Another example of this initiative is also Berlin, where buildings with green roofs with a total area of 40,000 square meters were built next to each other on Potsdamer Platz [9]. In turn, in Hamburg, the BIQ building designed by Splitterwerk Architects and the Arup Group has a system of green facades filled with algae, which also produce some of the energy needed for the building.
In Spain, the most interesting example is the CaixaForum (figure 1), the art and culture center in Madrid run by the "la Caixa" Foundation. The building was enriched with a green wall located on the southern square. A huge number of plants consisting of 250 species are planted on it without soil and vegetating only based on water and nutrients provided. The structure was designed similarly to the previously mentioned projects by P. Blanc. According to Wojciech Kosiński [10], greenery integrated with houses in the sense of horizontal (roofs) and vertical (walls) plant tissue plays an invaluable role.
It is worth mentioning that Paris has planned "green" buildings of 100 hectares by 2020, while London hopes to make itself the capital of the world's first National City Park where more than half of the city's area is turned green by 2050 [11].
In London, we can also find impressive living walls of buildings. One example is the Athenaeum Hotel, a five-star hotel overlooking Green Park. In 2009, a living wall, i.e., a vertical garden, was installed on a part of the facade of the north-eastern corner of the hotel. It very quickly became the symbol of the building. This vertical garden is 30 meters high, has 260 species and 12,000 plants. The aluminum shell, properly attached to the wall of the building, is covered with plastic, i.e., synthetic felt, in which roots can develop. Given that the green wall has as many as ten stories, the vegetation varies on different levels. For years, the Singapore city-state has been paying attention to creating greenery, not only as urban parks and botanical gardens but also by constructing living vertical walls and green roofs. Singapore aims to become a "Garden City" using green spaces to connect communities, enrich biodiversity and improve the climate. An interesting fact is that green roofs have been installed even on public buses as plants can help to reduce the temperature inside them. In addition, a green roof was also installed on the top of a bus stop in Kuala Lumpur.
Canada also has an idea of "green cities". An extensive, long-term program has been adopted in Vancouver. The concept of the revitalization of Vancouver developed according to the principles of sustainable development is the evidence of the achievement of full success in this aspect. The novelty is the designation of green roofs for crops and allotments that residents can use for work and leisure. Community (vegetable and flower) gardens grown in designated areas among residential and service buildings are also becoming more and more popular. In this city, the Semiahmoo Library boasts a beautiful green wall made of not only perennials but also shrubs and small trees.
All activities aimed at improving the environment are appreciated, as evidenced by the awards granted to the projects. Last year, The Garden House in Beverly Hills, California, received the Award of Excellence 2020. The planted green facade consumes the entire length of the building, encasing the windows, balconies and curving at the intersection of two major cross streets. The system is hydroponic and recirculates from a large holding tank at the bottom of the basement floor, pumping water up to the top of the wall and wicked down throughout the felt layers [12].
There are several successful implementations of green architecture in Poland, too. One of the examples is the University of Warsaw Library, which has both a green roof and green portions of the walls ( figure 2, figure 3). The walls are made of patinated copper, and their green color harmonizes with the greenery of the plants. Facades are covered with copper nets that support vines.
The garden on the roof of the University of Warsaw Library is considered one of the most beautiful gardens in Europe. It was the first Polish project to use green in an intensive form on the roof. There are various types of vegetation, paths, sculptures, and even a stream cascading down into the groundlevel gardens. This place has recreational, cultural, and social functions. It is visited by students, residents, and tourists, and serves as a popular spot for photoshoots. The ecological and economic function of this place is also important. The green roof compensates for losses in the biologically active slopes used for the construction of the library. It is also important that it reduces heating and air conditioning expenses (by up to 30%), collects rainwater, and suppresses noise [13].
Another example in Warsaw is the green roof at the ARKADIA shopping center, which was built in an extensive system. Yet another example of a green roof can be found in the Copernicus Science Centre, which is also open to the public. In Wrocław, the green roof complex was built on multifamily housing located in the very center of the city. However, these roofs are not made available to residents, as their function is only to improve the aesthetic values of the buildings and improve the local microclimate. In Lodz, the author's hometown, the Hanging Gardens apartment complex located on Tuwima street in the very center of the city will have green roofs, too. This project aims to start turning the city center green.
Benefits of using green solutions in buildings and their importance for sustainable development
There are many sustainable advantages of green solutions. The most important environmental benefits of the use of green roofs and walls are related to improving the microclimate, reducing the urban heat island effect, improving thermal insulation properties, reducing the building's energy needs, reducing temperatures, improving the water balance, reducing the amount of rainwater discharged by rainwater drainage, improving the air quality (CO2 absorption, oxygen evolution, reduction of dust and pollutants released to the air). An additional advantage of vertical gardens (and vegetation in general) is their positive influence on the climate by regulating air humidity and lowering the temperature. In cities, it is an invaluable property. Strongly urbanized areas suffer from the effects of the phenomenon known as the "Urban Heat Island". In the case of building surfaces, the installation of green roofs or green facades can be used to reduce the temperature of the environment and the building [14]. Research conducted in New York showed that during a hot summer afternoon standard roof surface temperature can be up to 40°C higher than the surface temperature of a green roof [15]. On average (according to measurements conducted in July 2003), the surface temperature of a standard roof was higher by 19°C in the daytime and lower by 8°C at nighttime compared to a green roof. On the other hand, the temperature inside the building covered by a green roof was on average 2°C lower in the daytime, and about 0.3°C higher at nighttime. In warm climates, green roofs potentially reduce the indoor temperature by shading the rooftop layer and preventing it from the direct influence of solar radiation [16]. In the energy consumption of the buildings, one of the vital factors is thermal comfort because that shows the occupants' satisfaction.
Finally, green roofs are often seen as an opportunity for supporting the process of receiving sustainability labels, such as LEED or BREEAM. This indirect policy, which comes from the sustainable building assessment movement, promises to be fundamental for the diffusion of green roofs. The studies discussed above show the importance and advantages of living walls or roofs as part of a sustainable strategy for the urban environment. The undoubted advantage is also the improvement of the aesthetics of the space (the visual aspect), the possibility of hiding installation devices located on the roof and creating characteristic plant elements that distinguish individual buildings. In the case of green walls, it is possible to cover less interesting parts of the facade or hide its shortcomings. Walls with plants also provide additional, effective sound insulation. Green roofs, in addition to their function of retaining water and increasing biodiversity, are the missing link between the built environment and the natural environment, which is essential for sustainable human life in cities. Green roofs are often indicated as a valuable solution for resolving the issue of the lack of green space in urbanized areas. Besides, green roofs increase the fire resistance of the roof, reduce noise (from approx. 20 dB to as much as 50 dB), and gain new functions, e.g., as recreational spaces.
Nevertheless, each solution also has drawbacks and in the case of green roofs, these are mostly: high design and construction costs, condensation of water vapor in the insulation, water stagnation, risk of plant roots breaking the insulation layer. Apart from that, intensive green roofs require more maintenance and costly renovations. While green roofs have higher initial costs than traditional roofing, green roofs have a diverse array of potential benefits.
Vertical gardens, on the other hand, not only improve the quality of life of city residents but also reduce the harmful effects of their activities on the environment and support local biodiversity. As such, they satisfy the human need for communing with nature.
Conclusions
Green walls and roofs can bring significant environmental, social, and economic benefits. This technology should be recognized as a valuable part of the design process aimed at tackling climate change, solving the energy crisis, and building sustainable cities.
Green infrastructure can play many functions and provide numerous benefits in modern cities. Also, the environmental benefits of green roofs and walls are not limited to new buildings only.
Thus, living roofs and walls are of major importance as part of a sustainable strategy for the urban environment. Sustainable cities will exist when society makes an informed choice to move towards a more sustainable lifestyle. Green roofs and walls are the solution for the future, for a better quality of life. A broader implementation of this process could be supported, for example, by subsidies from local governments. | 5,506.2 | 2021-11-01T00:00:00.000 | [
"Engineering"
] |
A High-Functional-Density Integrated Inertial Switch for Super-Quick Initiation and Reliable Self-Destruction of a Small-Caliber Projectile Fuze
With the aim of achieving the combat technical requirements of super-quick (SQ) initiation and reliable self-destruction (SD) of a small-caliber projectile fuze, this paper describes a high-functional-density integrated (HFDI) inertial switch based on the “ON-OFF” state transition (i.e., almost no terminal ballistic motion). The reliable state switching of the HFDI inertial switch is studied via elastic–plastic mechanics and verified via both simulations and experiments. The theoretical and simulation results indicate that the designed switch can achieve the “OFF-ON” state transition in the internal ballistic system, and the switch can achieve the “ON-OFF” state transition in the simulated terminal ballistic system within 8 μs or complete the “ON-OFF” state transition as the rotary speed sharply decreases. The experimental results based on the anti-target method show the switch achieves the “ON-OFF” state transition on the μs scale, which is consistent with the simulation results. Compared with the switches currently used in small-caliber projectile fuzes, the HFDI inertial switch integrates more functions and reduces the height by about 44%.
Introduction
Small-caliber artillery ammunition is among the most consumed weaponry in practical training. The high initiation reliability of fuzes directly improves the damage effectiveness of ammunition and reduces the number of dangerous duds during training, which is a critical combat technical indicator [1]. In a small-caliber projectile fuzing system, the inertial components have always been the core means by which the fuze recognizes changes in the ballistic environment [2]. Especially in an electromechanical fuzing system, the inertial switch is the key to the reliable response of the fuze to the impact overload of the projectile hitting the target. The typical conditions of a small-caliber projectile fuze hitting a 2 mm thick aluminum alloy plate are listed in Table 1.
Research on improvement of inertial switches applied to fuzes has been reported at recent sessions of the American Fuze Annual Conference. This involves dividing the sensitive structures of inertial switches into spring-mass and cantilever beam types [3][4][5][6].
In recent years, the research focus on spring-mass inertial switches has evolved from traditional coil-spring-mass structures to planar spring-mass structures based on micro-electro-mechanical systems (MEMS) technology [7][8][9][10][11]. Inertial switches based on Cantilever beam structures, as typical inertial overload-sensitive structures, have been widely used in the design of high g-value accelerometers and have shown excellent performance in withstanding highly dynamic environments [12][13][14]. Ning et al. [15] designed an inertial switch based on a cantilever beam structure to respond to high axial overload in the direction of the projectile axis. When the fuze hit the target, the cantilever beam structure bent in response to forward impact overload and was electrically connected to the fixed electrode to achieve a switch state transition. Compared to spring-mass structures, cantilever beam structures are simple to manufacture. The inertial switch can also withstand highly dynamic environments while having a small overall volume, which gives it great potential for application in small-caliber projectile fuzes.
It must be noted that the small-caliber projectile fuze has the combat technical requirements of super-quick (SQ) initiation and reliable self-destruction (SD) [16]. However, existing inertial switches are based on the "OFF-ON" state transition rather than the "ON-OFF" state transition, which results in a longer response time (ms level). At present, no inertial switch has been reported that can quickly respond to axial overload on the basis of the "ON-OFF" state transition or respond to the rotational change of the projectile, which would help the fuze achieve reliable SD based on the attenuation in projectile rotary speed.
With the aim of achieving the combat technical requirements of SQ initiation and reliable SD for a small-caliber projectile fuze, a high-functional-density (HFDI) inertial switch based on the "ON-OFF" state transition is described in this paper. First, the model and working logic of the switch are designed on the basis of the axial/radial overload variation of the whole small-caliber projectile fuze. Second, a computer-aided design (CAD) model of the switch is established, and LS-DYNA software is used to perform explicit dynamics simulations based on the whole ballistic inertia overload. The simulation results show the correctness of the working principle of the switch. The switch is then processed and assembled. Finally, the switch is tested using a home-built highly dynamic testing system based on the anti-target method.
Structural Design
The designed HFDI inertial switch model is shown in Figure 1. The switch is placed on the fuze control circuit board (11) in the vertical direction of the rotary axis and at a certain distance from the rotary axis.
Figure 1. Designed HFDI inertial switch model. 1-1: upper cover plate; 1-2: bottom cover plate; 2-1: axially sensitive layer #1; 2-2: axially sensitive layer #2; 3-1: insulating layer #1; 3-2: insulating layer #2; 4: copper foil; 5: flexible substrate; 6-1: GND layer #1; 6-2: GND layer #2; 7: radially sensitive layer; 8: insulating bolt; 9: pull-up resistance; 10: fuze microcontroller; 11: PCB.
In the initial state, the axially sensitive layers (2-1, 2-2) and GND layers (6-1, 6-2), which consist of conductive material, are not interconnected with each other. After the projectile is launched, the axially sensitive layers respond to axial setback overload. The structure of axially sensitive layer #1 is a cantilever beam mass with a lower natural frequency. Its stronger response under setback overload helps the cantilever beam with the higher natural frequency on axially sensitive layer #2 to generate elastic-plastic deformation. As the setback overload increases, the cantilever beam on axially sensitive layer #2 contacts the copper foil (4) below insulating layer #1 (3-1). The copper foil is initially connected to the GND layers. Because the substrate (5) of the copper foil consists of flexible material (i.e., PDMS), the cantilever beam on axially sensitive layer #2 continues to generate greater elastic-plastic deformation after contact with the copper foil. As the setback overload gradually decreases and disappears, axially sensitive layer #1 releases axially sensitive layer #2. The cantilever beam of axially sensitive layer #2 generates elastic recovery due to the strain characteristics of the elastic-plastic material, but the recovery deformation of the flexible substrate can ensure that the copper foil is always stably connected to the cantilever beam on axially sensitive layer #2 (i.e., the ON state). Furthermore, owing to the high natural frequency of the cantilever beam on axially sensitive layer #2, this layer is insensitive to low-amplitude axial overload in the external ballistic system, including air damping and possible impact on raindrops. In the terminal ballistic system, the cantilever beam of axially sensitive layer #2 generates elastic-plastic deformation in response to large axial overload, resulting in disconnection from the copper foil (i.e., the "OFF" state).
In the initial state, the radially sensitive layer (7) and GND layers, which consist of conductive material, are not interconnected with each other. After the projectile is launched, the radially sensitive layer responds to radial centrifugal overload. As the rotary speed of the projectile increases, the deformation of the sensitive structure based on the Y-shaped cantilever beam gradually increases until it contacts the circular boss of GND layer #2. The stable centrifugal force provided by the projectile rotation is sufficient to support a reliable connection between the radially sensitive layer and GND layer before the fuze hits the target (i.e., the "ON" state). In the terminal ballistic system, when the radial overload of the Y-shaped cantilever beam response is greater than the centrifugal force in the opposite direction, or when the centrifugal force is insufficient to resist the elastic recovery force of the Y-shaped cantilever beam with the attenuation in projectile rotary speed, the Y-shaped cantilever beam of the radially sensitive layer and the circular boss of GND layer #2 are disconnected (i.e., the "OFF" state).
Working Logic
The working logic of the fuze based on the HFDI inertial switch is shown in Figure 2. When the fuze is launched with the projectile and enters the internal ballistic environment, the switch responds to both setback overload and centrifugal overload in this highly dynamic environment, and the cantilever beam structures of the axially/radially sensitive layers move. When the fuze is armed through software timing or a clock mechanism, the fuze microcontroller detects the levels of the two I/O ports, which are connected to the switch. According to the peripheral circuit in Figure 1, the levels of the two I/O ports under normal conditions should be low, indicating that the axially/radially sensitive layers of the switch have already moved into place. Then, the fuze microcontroller activates the rising-edge-interrupt-triggering function of the corresponding I/O ports and continuously detects the level changes of the I/O ports at a frequency of 1 MHz.
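The arming and detection sequence just described can be summarized as a small state routine. The following Python sketch is only an illustration of that logic, not firmware from the paper: the sampled-level representation and the function names are ours, while the 1 MHz polling rate and the roughly 50 µs confirmation window are taken from the text (the latter from the testing section).

```python
def fuze_logic(levels, confirm_samples=50):
    """Illustrative model of the fuze detection logic.

    levels: iterable of (axial_level, radial_level) samples, one per polling
    period (1 MHz polling assumed from the text); 0 = low, 1 = high.
    """
    samples = list(levels)
    # After arming, both ports must read low: the sensitive layers are in place ("ON").
    if any(samples[0]):
        return "not armed correctly"
    # A rising edge on either port signals the ON -> OFF transition of the switch.
    for i, (ax, rad) in enumerate(samples):
        if ax or rad:
            # Confirm the level stays high (~50 us) to reject interference spikes.
            window = samples[i:i + confirm_samples]
            if all(a or r for a, r in window):
                return "initiate"  # SQ detonation of the warhead or reliable SD
    return "no initiation"

# Example: the axial port goes high at sample 120 and stays high.
demo = [(0, 0)] * 120 + [(1, 0)] * 200
print(fuze_logic(demo))  # -> "initiate"
```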
When the fuze hits the target at different incident angles (i.e., angles between the axis of the projectile and the normal direction of the target surface), as shown in Table 1, the axial/radial component of the impact overload decreases/increases with the increase in incident angle, respectively. Under different terminal ballistic conditions, the three situations for the state transition of the switch are as follows: (i) When the fuze hits the target at a small incident angle, the axial component of the impact overload is large, and the switch achieves the "ON-OFF" state transition through the response of the axially sensitive structure. (ii) When the fuze hits the target at a large incident angle, the radial component of the impact overload is large, and the switch achieves the "ON-OFF" state transition through the response of the radially sensitive structure. However, it should be noted that, owing to the rotation of the projectile, the direction of the radial overload when the fuze hits the target is unpredictable. When the direction of this overload is similar to the direction of the centrifugal force on the radially sensitive structure, the switch cannot reliably complete the state transition. (iii) When the fuze misses the target and flies for a longer distance, there is no axial/radial impact overload. The centrifugal force on the radially sensitive structure decreases with the attenuation in projectile rotary speed and becomes insufficient to resist the restoring force of the material's elastic deformation. The switch achieves the "ON-OFF" state transition through elastic recovery of the radially sensitive structure.
When the fuze microcontroller detects the rising edge on the corresponding I/O ports, it outputs the initiation signal to achieve SQ detonation of the warhead or reliable SD.
Theoretical Analysis
The designs of the axially/radially sensitive structures, which are the core structures of the switch, are shown in Figure 3. The key to achieving the "ON-OFF" state transition of the switch in the terminal ballistic system is to switch from "OFF" to "ON" in the internal ballistic system and maintain stability in the external ballistic system.
Assuming that the cantilever beam structures of the axially/radially sensitive layers are elastic-plastic beam structures, the deflection of the beam structures in the internal ballistic system is analyzed using the beam deflection equation of material mechanics, where F is the setback/centrifugal inertial force, l is the distance between the centroid of the cantilever beam structure and the fixed end, E(ε) is the change in the Young modulus of the material with strain, and I is the moment of inertia of the section. The natural frequency of the cantilever beam-mass structure in axially sensitive layer #1 is much lower than that of the three cantilever beams in axially sensitive layer #2 (the two in the middle are buffer cantilever beams, and the one on the right is a connecting cantilever beam). Assuming that the cantilever beam-mass structure of axially sensitive layer #1 is the free mass, the inertial force generated under setback overload is uniformly distributed on the three cantilever beams of axially sensitive layer #2. The deflection of the free end of the connecting cantilever beam over time is then expressed in terms of F_S(t), the setback inertial force of the cantilever beam-mass structure of axially sensitive layer #1, F_b(t), the setback inertial force of the connecting cantilever beam of axially sensitive layer #2, and I_a, the moment of inertia of the section of the connecting cantilever beam. The cantilever beam structure of the radially sensitive layer responds to radial centrifugal overload. The deflection of the free end of this cantilever beam over time is expressed in terms of l_mass, the centroid of the cantilever beam of the radially sensitive layer, F_c(t), the centrifugal inertial force of the cantilever beam of the radially sensitive layer, and I_r, the moment of inertia of the section of the cantilever beam. According to the structural design in Section 2.1, before the setback overload reaches its peak, the connecting cantilever beam of axially sensitive layer #2 contacts the copper foil (i.e., w_a > H_3−1, where H_3−1 is the thickness of insulating layer #1). Before the end of the internal ballistic system, the cantilever beam of the radially sensitive layer contacts the circular boss of GND layer #2 (i.e., w_r > W2_r/2 − W3_r − R_f). The conductive material is brass, with a density of 8912.9 kg/m³ and a Young modulus of 117.2 GPa. The main structural parameters of the switch are listed in Table 2. According to modal analysis, the natural frequencies of the cantilever beam-mass structure of axially sensitive layer #1 and the cantilever beam of axially sensitive layer #2 are 12.656 Hz and 112.66 Hz, respectively.
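The role of the two very different natural frequencies can be seen from a quasi-static estimate: for a slowly varying overload, the deflection of a structure scales as 1/ω_n², so only the ratio of the two frequencies matters when comparing their responses. The snippet below evaluates that ratio for the modal values quoted above; it is an order-of-magnitude illustration, not the elastic-plastic beam model of the paper, and the absolute deflections are limited in practice by contact with neighboring layers and by plasticity.

```python
import math

f1 = 12.656    # Hz, beam-mass structure on axially sensitive layer #1
f2 = 112.66    # Hz, connecting cantilever beam on axially sensitive layer #2

# Quasi-static deflection under the same overload scales as 1 / omega_n^2,
# so the relative response depends only on the ratio of natural frequencies.
ratio = (2 * math.pi * f2) ** 2 / (2 * math.pi * f1) ** 2
print(f"layer #1 responds ~{ratio:.0f}x more strongly than layer #2 to the same overload")
```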
Figure 4 shows the theoretical results for the axially/radially sensitive layers responding to the internal ballistic environment of the small-caliber projectile fuze. Before the setback overload reaches its peak, w_a = 110 µm > H_3−1 = 100 µm, and before the end of the internal ballistic system, w_r = 160 µm > W2_r/2 − W3_r − R_f = 150 µm. This means that the designed switch can achieve the "OFF-ON" state transition in the internal ballistic environment.
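The two criteria quoted above can be restated as a simple check; the snippet below only re-expresses the numerical comparison from the text, with variable names of our choosing.

```python
# Values quoted in the text (micrometres).
w_a = 110.0      # deflection of the connecting cantilever beam at peak setback overload
H_3_1 = 100.0    # thickness of insulating layer #1
w_r = 160.0      # deflection of the radially sensitive beam at the end of internal ballistics
gap_r = 150.0    # W2_r / 2 - W3_r - R_f, radial gap to the circular boss of GND layer #2

axial_on = w_a > H_3_1    # connecting beam reaches the copper foil
radial_on = w_r > gap_r   # Y-shaped beam reaches the circular boss

print(f"axial OFF->ON: {axial_on}, radial OFF->ON: {radial_on}")
```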
Dynamic Simulations of the Whole Ballistic System
In order to improve simulation efficiency, the axially/radially sensitive components of the switch were modeled and simulated based on the whole ballistic inertial overload.
Dynamic Simulations of Axial Overload
Figure 5 shows the finite element model of the axially sensitive components. *CONTACT_AUTOMATIC_NODES_TO_SURFACE is used to define the contact relationship between axially sensitive layer #2 and the other components, while *CONTACT_AUTOMATIC_SINGLE_SURFACE is used to define the contact relationships among the remaining components. *BOUNDARY_PRESCRIBED_MOTION_NODE is used to load the model with the axial overload. The compositions of the axially sensitive components and key material parameters are listed in Table 3. The distance between node1 on the connecting beam and node2 on the surface of the copper foil (D_S) is selected as the key parameter. Figure 6 shows the variational trend of D_S with and without cushioning beams based on the simulated whole-ballistic axial overload. The whole-ballistic axial overload is composed of the internal/external ballistic axial overload calculated using an empirical formula and a triangular pulse that simulates the terminal ballistic axial overload with a high peak and short duration. The cushioning beams not only slow down the compression of the connecting beam by the upper beam mass in the early stage of the internal ballistic process, which also prevents an accidental drop from deforming the connecting beam, but also prevent the connecting beam from being excessively compressed by the upper beam mass. Figure 7 shows the Von Mises stress cloud of the axially sensitive components. At 0.5 ms, under the action of the setback inertial force (F_S), the connecting beam touches the copper foil and begins to compress the PDMS. The elastic recovery force (F_T) of the PDMS helps the connecting beam resist deformation. At 1 ms, the values of F_S and F_T reach their maxima. As the setback overload decreases, F_S becomes smaller than F_T, and the PDMS begins to recover from deformation. This recovery offsets the elastic recovery deformation of the connecting beam, which means that the connecting beam and the copper foil supported by the PDMS maintain a reliable electrical connection through the ends of the internal and external ballistic environments. At 4.5 ms, a forward impact overload with an amplitude of 4000 g was applied to simulate the impact of a large-diameter raindrop on the projectile during flight. The connecting beam and copper foil still maintain a stable connection, which verifies that the connecting beam is insensitive to low-peak impact overload because of its high natural frequency. Under a forward impact overload with an amplitude of 130,000 g and a pulse width of 10 µs in the simulated terminal ballistic process, the connecting beam and copper foil are disconnected within 8 µs, which helps the small-caliber projectile fuze achieve the SQ initiation function.
Dynamic Simulations of Radial Overload
Figure 8 shows the finite element model of the radially sensitive components. *CONTACT_AUTOMATIC_SINGLE_SURFACE is used to define the contact relationship between all components, and *BOUNDARY_PRESCRIBED_MOTION_NODE is used to load the model with the rotary speed. The compositions of the radially sensitive components and key material parameters are listed in Table 4. The distance between node1 on the circular boss of the GND layer and node2 on the cantilever beam of the radially sensitive layer (D_C) is selected as the key parameter. The variational trends of D_C with different eccentric distances (R) based on the simulated whole-ballistic radial overload are shown in Figure 9. The response of the radially sensitive components in the internal ballistic system is faster farther away from the projectile axis. When R = 4 mm, the radially sensitive components cannot transition to the "ON" state in the internal ballistic process. When R is greater than 5 mm, the radially sensitive components can achieve the switch state transition in the internal ballistic system and maintain the "ON" state in the external ballistic system. In the simulated terminal ballistic process, as the rotary speed sharply decreases, the radially sensitive components switch from "ON" to "OFF". Figure 10 shows the Von Mises stress cloud of the radially sensitive components when R = 6 mm. In the internal ballistic system, the radially sensitive components change from the "OFF" state to the "ON" state (Figure 10a). In the external ballistic system, the radially sensitive components remain in a stable "ON" state (Figure 10b). In the terminal ballistic system, the radially sensitive components achieve the "ON-OFF" state transition through the two operating conditions described in Section 2.2. Figure 10c shows that when the rotary speed decreases, the radially sensitive components achieve the "ON-OFF" state transition, which helps the small-caliber projectile fuze achieve the reliable SD function.
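The dependence on the eccentric distance R reflects the centrifugal acceleration ω²R experienced by a structure mounted off the projectile axis. The snippet below evaluates this scaling for the R values considered in the simulations; the spin rate is an assumed placeholder, since no value is quoted in the extracted text.

```python
import math

spin_rev_per_s = 1000.0                  # assumed placeholder spin rate, not a value from the paper
omega = 2.0 * math.pi * spin_rev_per_s   # rad/s

for R_mm in (4.0, 5.0, 6.0):             # eccentric distances considered in the simulations
    a_c = omega ** 2 * R_mm * 1e-3       # centrifugal acceleration in m/s^2
    print(f"R = {R_mm:.0f} mm -> a_c ~ {a_c / 9.81:,.0f} g")
```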
Figure 11 shows the finished products of each layer of the designed HFDI inertial switch, as well as the complete switch obtained through assembly. All layers are manufactured using picosecond ultraviolet laser technology, with a positioning accuracy better than 5 µm and a minimum machining feature size of 10 µm. Assembly fixtures with shapes complementary to the designed HFDI inertial switch are manufactured by 3D printing; each layer is placed in the assembly fixture in sequence, and finally, all layers are fixed with insulating bolts. The overall size of the designed HFDI inertial switch is 5.2 × 5.2 × 2.8 mm³ (including a 1 mm thick screwhead), which reduces the height of the switch by about 44% compared with the switch currently used in small-caliber projectile fuzes.
Highly Dynamic Testing
A home-built highly dynamic testing system was used to reproduce the "high peak, narrow pulse" impact component of the terminal ballistic environment of the small-caliber projectile fuze for switch state transition testing. The composition of the home-built highly dynamic testing system is shown in Figure 12. As illustrated by the system diagram in Figure 12, the testing system was built on the basis of the anti-target method. The light-gas gun used high-pressure gas to accelerate the sabot target, and the target was shot into the recycling bin. The target impacted, at high speed, the equivalent fuze carrier suspended on the fixed support. The designed HFDI inertial switch was fixed inside the equivalent fuze carrier. External power was supplied to the switch testing circuit through lead wires, and an oscilloscope was used to collect the switch state transition signal. A high-speed camera was used to observe the speed at which the target entered. Compared with a forward-firing testing system, the home-built testing system based on the anti-target method has advantages such as easy real-time signal acquisition, easy adjustment of the fuze attitude, and low time cost. Figure 12. Home-built highly dynamic testing system based on the anti-target method. Figure 13 shows the equivalent fuze carrier, switch testing circuit, and electrical connections. Owing to the inability to simulate the inertial environment of the internal ballistic system of the small-caliber projectile fuze in the laboratory, the connecting beam of axially sensitive layer #2 of the switch was manually deformed to the "ON" state by gently pressing it with tweezers before testing. A 3.3 V DC power supply was provided through the power interface. According to the peripheral circuit design, the oscilloscope collected a stable low-level signal from the test interface. Figure 14 shows the level change at the test interface, captured by the oscilloscope through rising-edge-triggered acquisition, when the target (solid wood) impacts the equivalent fuze carrier at a speed of about 300 m/s. The test interface level achieved the low-to-high transition on the µs scale, which indicated that the switch could respond rapidly to terminal ballistic impact. This verified the correctness of the simulations.
The switch remained stable at a high level for about 50 µs. This meant that the fuze microcontroller could perform a short delay to confirm the I/O level after detecting the rising edge, in order to eliminate the risk of interference signals.
Conclusions
In this paper, we have described how we designed an HFDI inertial switch to achieve the combat technical requirements of SQ initiation and reliable SD for a small-caliber projectile fuze. First, the structure and whole ballistic working logic of the switch were designed, and the axially/radially sensitive components of the switch were theoretically analyzed on the basis of elastic-plastic mechanics. The theoretical results indicate that the designed switch could achieve the "OFF-ON" state transition in the internal ballistic environment. Second, a dynamic simulation analysis of the whole ballistic process of the proposed device was carried out using LS-DYNA software. The simulation results showed that the switch could achieve the "ON-OFF" state transition in the simulated terminal ballistic system within 8 µs and complete the "ON-OFF" state transition as the rotary speed sharply decreased. This validated the working logic of the designed switch. Subsequently, the designed switch was processed and assembled. Compared with the switches currently used in small-caliber projectile fuzes, the HFDI inertial switch integrates more functions and reduces the height by about 44%. Finally, the switch was tested using a home-built highly dynamic testing system based on the anti-target method. When the target collided with the equivalent fuze carrier (with a switch inside) at a relative speed of about 300 m/s, the switch that was already in the "ON" state could achieve the "ON-OFF" state transition on the µs scale, which was consistent with the simulation results.
Compared to the "OFF-ON" state transition, the "ON-OFF" state transition in the terminal ballistic system has a shorter response time. Combined with the rapid attenuation in projectile rotary speed at any ballistic endpoint, this indicates that the HFDI inertial switch can help a small-caliber projectile fuze achieve SQ initiation and reliable SD.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,732.2 | 2023-07-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Flow equations for cold Bose gases
We derive flow equations for cold atomic gases with one macroscopically populated energy level. The generator is chosen such that the ground state decouples from all other states in the system as the renormalization group flow progresses. We propose a self-consistent truncation scheme for the flow equations at the level of three-body operators and show how they can be used to calculate the ground state energy of a general N-body system. Moreover, we provide a general method to estimate the truncation error in the calculated energies. Finally, we test our scheme by benchmarking to the exactly solvable Lieb–Liniger model and find good agreement for weak and moderate interaction strengths.
Introduction
The worlds of many- and few-body physics are generally far apart. In the former, the number of particles is often infinite, while few-body systems normally do not contain more than a handful of particles. The typical goal in many-body physics is to calculate thermodynamic quantities such as the energy per particle and the density profile. However, the large number of degrees of freedom in many-body systems usually means that various approximations and/or large computational resources are needed to achieve this goal. In contrast, it is often possible to solve few-body problems exactly, i.e., to find the full spectrum of the Hamiltonian and the corresponding wave functions. There is an interesting class of systems that are in between these two extremes. These are finite systems in which the number of particles is sufficiently large for many-body phenomena, such as superfluidity or Bose-Einstein condensation, to emerge [1,2,3], but they are still small enough to be within reach for numerically exact ab initio calculations that use microscopic Hamiltonians. The investigation of such finite systems is crucial to understand how many-body phenomena arise from few-body physics and the microscopic interactions of the constituents.
To investigate this progression from few- to many-body behavior theoretically, one needs reliable numerical techniques in the transition region. There exist a number of suitable techniques in physics and chemistry, and new methods are being developed (see, e.g., references [4,5,6,7,8,9,10]). A significant breakthrough was made with the development of flow equation methods, which are also referred to as the similarity renormalization group (SRG) [11,12]. In this approach, a set of differential equations is solved to obtain unitarily equivalent Hamiltonians with desirable properties. This set is determined by a generator, which controls the change of the Hamiltonian at every step of the evolution. Note that this generator is determined dynamically. Its matrix elements depend on the flow parameter s and are calculated at every step of the evolution from the transformed Hamiltonian. This represents one of the key advantages of the SRG, which allows one to find a (block-)diagonal representation of the Hamiltonian.
Recently a new approach based on flow equations, the in-medium similarity renormalization group (IMSRG), has been proposed for nuclear physics problems where the fundamental degrees of freedom are fermionic [13]. The IMSRG is a very promising method for medium-mass nuclei, which lie exactly in the few-to many-body transition region discussed above (see [14] for a recent review).
In this paper, we develop a similar method for cold Bose gases. To this end, we write flow equations for bosonic systems with a macroscopic occupation of one state. We introduce a suitable truncation scheme that facilitates numerical calculations and discuss its accuracy. In particular, we provide an algorithm to estimate the truncation error using perturbation theory. We validate our method using the exactly solvable Lieb-Liniger model in one dimension, and show that even without preliminary knowledge of the reference state our method can be used to accurately describe systems with weak and intermediate interaction strengths.
The paper is organized as follows: in section 2, we review the foundations of the SRG method. In section 3, we introduce the Hamiltonian of interest and write down the flow equations to find its eigenvalue. Here we also discuss the accuracy of our approach and provide a way to estimate the accuracy of the calculated energies. We test the method in section 4 using the exactly solvable Lieb-Liniger model as a benchmark. Section 5 concludes the paper with a summary of our results and a brief outlook on the generalization to three spatial dimensions. For the reader's convenience, we include six appendices with technical details on the evaluation of commutators, the truncation of the three-body operator, the convergence of the two-body energy, the effective interaction used in the Lieb-Liniger model, the use of White-type generators, and the error estimation.
Preliminaries
For a self-contained discussion, we first review the SRG method as it forms the basis of our approach (cf. [11,12,14,15,16]). To this end, we introduce a real symmetric matrix M that represents a linear operator in a particular basis‡. If we transform this basis using some orthogonal matrix Q (i.e., QQ^T = I, where I is the identity matrix), then the linear operator will be represented by the new matrix M(Q) ≡ QMQ^T, which is unitarily equivalent to M. The SRG equations simply describe the change of M for a small change of the basis, Q = I + ηδs (|δs| ≪ 1). In the limit δs → 0, the SRG equations can be written in the differential form dM/ds = [η, M] ≡ ηM − Mη. They define the evolution of the matrix elements M_ij driven by the skew-symmetric matrix η = −η^T. By specifying η, one finds a matrix that is unitarily equivalent to M and has some desired properties. Note that the system of equations (2) is often called the "flow equation", as it defines the "flow" of matrix elements under the SRG transformation, and the "generator" η determines the flow by defining the "direction" of the transformation at each value of s. We illustrate the evolution using a generator η that contains only two non-zero elements, η_ab = −η_ba, i.e., η_ik(s) = α(s)(δ_ia δ_kb − δ_ib δ_ka).
‡ Two comments are in order here. First, we use bold type for matrices and operators, e.g., M, and italic type for the corresponding matrix elements, e.g., M_ij. Second, we choose to work with a real matrix M to simplify the discussion. The ideas presented here can be extended straightforwardly to Hermitian matrices.
This generator leads to a system of equations in which the element M_ab = M_ba evolves as dM_ab/ds = α(s)(M_bb − M_aa). Let us assume that we want the flow to eliminate the element M_ab as s → ∞, e.g., by demanding that M_ab(s) = M_ab(0)e^{−s}. Inserting this ansatz into the equation above, we find that α = −M_ab/(M_bb − M_aa) fulfills this requirement, i.e., it decouples the basis states with numbers a and b. Note, however, that to achieve this decoupling, the flow usually needs to couple states that were not coupled before. For example, if we had M_cd = 0 at s = 0, then this element will attain a non-zero value if M_cb δ_da + M_db δ_ca − M_ca δ_bd − M_da δ_bc ≠ 0. Let us give another example of how one can obtain a new matrix with some desired properties by choosing an appropriate generator η. To this end, we use a generator that contains only one row and one column, i.e., η_ik = δ_i0 α_k − δ_k0 α_i with α_0 = 0. With the prescription α_{i>0} = −M_0i, which is inspired by the previous example, the corresponding flow equations lead to dM_{0i>0}/ds = −Σ_{j>0} M_0j (M_ji − δ_ji M_00). A formal solution to this equation can be found using the Magnus expansion, M_{0i}(s) = Σ_{j>0} M_{0j}(0) [T e^{−∫_0^s (M(x) − M_00(x) I)′ dx}]_{ji}; here T denotes the s-ordering operator (see, e.g., [16,17]), and ′ means that the first row and the first column should be crossed out from the matrix. If all M_0j are initially small (i.e., much smaller than the differences of the eigenvalues of M), then the long-time behavior can be estimated by examining the matrix (M(0) − M_00(0)I)′. This shows that if M_00(0) is close to the ground state, then M_0i is driven to zero during the evolution, and hence M_00(s → ∞) is the ground state of the matrix. These considerations can be useful in physics problems, as they allow one to find eigenenergies of a system by diagonalizing (block-diagonalizing) the corresponding Hamiltonian. This statement will be exemplified below.
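As an illustration of this decoupling, the sketch below integrates dM/ds = [η, M] for a small symmetric matrix using the one-row/one-column generator with α_i = −M_0i. The test matrix, the step size, and the simple Euler stepping are arbitrary choices made for the example; the diagonal dominance of the first entry mimics the assumption that M_00(0) already lies close to the lowest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = 0.1 * rng.normal(size=(n, n))
M = (M + M.T) / 2
M += np.diag(np.arange(n, dtype=float))    # M_00 starts closest to the lowest eigenvalue

exact_ground = np.linalg.eigvalsh(M)[0]

ds = 1e-3
for _ in range(30000):                     # flow to s = 30 with simple Euler steps
    eta = np.zeros_like(M)
    eta[0, 1:] = -M[0, 1:]                 # alpha_i = -M_0i for i > 0
    eta[1:, 0] = -eta[0, 1:]               # antisymmetric generator
    M = M + ds * (eta @ M - M @ eta)       # dM/ds = [eta, M]

print(M[0, 0], exact_ground)               # M_00(s -> infinity) approaches the lowest eigenvalue
print(np.max(np.abs(M[0, 1:])))            # couplings to the decoupled element are driven to zero
```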
Hamiltonian
We now consider a system of N bosons described by a Hamiltonian with one- and two-body terms, where a_{α_1} is the standard annihilation operator. Since the system is bosonic, H is symmetrized with respect to particle exchanges, i.e., B_ijkl = B_jikl = B_ijlk. For our numerical calculations this Hamiltonian should be written as a finite-dimensional matrix. Therefore, we assume that the sums over every index run only up to some number n that defines the dimension of the used one-body basis.
We are mainly interested in the ground-state properties of systems with a macroscopic population of one state (condensate). To incorporate our intentions in the Hamiltonian, we normal order the operators using the reference state Φ = ∏_{α=1}^{N} f(x_α), where f(x) is some one-body function that approximates the condensate (e.g., obtained by solving a suitable Gross-Pitaevskii equation). Here P_{α_1 α_2} exchanges the indices α_1 and α_2. These operators connect the reference state to the states that contain one and two excitations, respectively.
Using the normal-ordered operators we rewrite the Hamiltonian as where ǫ is the energy per particle in the reference state, and the elements f α 1 α 2 and Γ α 1 α 2 α 3 α 4 describe one-and two-body excitations, correspondingly. We will construct the Hamiltonian matrix using the basis that contains f (x) as the zero element, therefore, from now on we use ρ
Truncated flow equations
Our goal is to find a matrix representation of H in which the couplings to the reference state vanish, i.e., f_i0 = Γ_ij00 = 0, so that ǫ is an eigenenergy. To achieve this, we write H in a particular basis and then use the flow equation dH/ds = [η, H], where the antihermitian matrix η eliminates the couplings. To solve this equation, we assume that during the flow the generator and the Hamiltonian contain only one- and two-body operators. (From now on we adopt in the numbered equations the Einstein summation convention for letters from the Latin alphabet, i.e., A_ij a†_i a_j ≡ Σ_ij A_ij a†_i a_j, and reserve the indices α_{1,2,...} for the places where this convention is not implied.)
For now we leave the parameters ξ_ij and η_ijkl of the generator undetermined. We just mention that they must be chosen such that the couplings vanish at s → ∞. This is usually achieved by calculating ξ_ij(s) and η_ijkl(s) for every s from the evolved matrix elements of the Hamiltonian. We give a possible choice of η in the next section. It is worthwhile noting that, since η is antihermitian, the relations ξ_ji = −ξ*_ij and η_klij = −η*_ijkl must be satisfied. Moreover, we assume that η_ijkl = η_jikl = η_ijlk by construction. Note that equations (16), (17) and (18) do not lead in the general case to a self-consistent system of equations. Indeed, the commutator [η, H] contains a three-body operator (see Appendix A), where the superscript (3) denotes the piece of the commutator that contains three-body operators. This piece is apparently beyond the scheme put forward in (17) and should be omitted. To this end, we extract from [η, H]^(3) the terms that contain at least one operator a†_0 a_0, and set the remaining pieces (called W) to zero. The operator a†_0 a_0 is then treated as a constant because of the assumed macroscopic occupation of the lowest state (see Appendix B).
After the three-body operator is truncated, we end up with a closed system of equations. To write it down, we equate the coefficients in front of the same operators, which yields the flow equations (21)-(23). This system of equations can be solved using standard solvers of ordinary differential equations. During the evolution, an appropriate choice of η eliminates the couplings, f_i0 = 0 and Γ_ij00 = 0, so that ǫ(s → ∞) approximates an eigenenergy of the system. It is worthwhile noting that ǫ is not guaranteed to be close to the ground-state energy unless Φ describes the ground-state wave function "well" (so that f_i0 and Γ_ij00 are much smaller than the differences of the eigenenergies of H).
Error estimation
Since the flow equations are truncated at the level of three-body operators and beyond, it is important to estimate the error induced by this approximation. Let us imagine that we have integrated the flow equations (21)-(23) up to s → ∞, and obtained the operator H(s) within our truncation scheme as well as the generator η(s). Now we assume that η is fixed for every s and use it to introduce the operator H that solves equation (16) with the initial condition H(s = 0) = H(s = 0) without any truncations. Hence, H is unitarily equivalent to H(0). We emphasize that η(s) is given from the beginning for every s and not obtained dynamically as before.
The operator H can be written as H(s) = H(s) + H a (s), where H(s) is obtained from the truncated flow and H a satisfies the equation supplemented by the initial condition H a (s = 0) = 0. Note that the operator H a is generated by W, which is the part of (20) that is neglected in our truncation scheme. Therefore, we postulate that our approximation is meaningful only if H a (s → ∞) can be treated as a small perturbation for the state of interest. In this case ǫ(s → ∞) is close to the exact eigenenergy of the operator H.
To estimate H_a(s), we write two formal solutions to (25), where U is the transformation matrix generated by η: dU/ds = ηU → U(s) = T exp(∫_0^s η(x) dx).
Equations (26) and (27) allow us to estimate H a (s → ∞) and then use matrix perturbation theory to find the correction to the energy of the eigenstate. We will illustrate this procedure below using the Lieb-Liniger model.
Lieb-Liniger Model
To test our method, we use the exactly solvable Lieb-Liniger model [18], which describes N spinless bosons on a ring of length L. The particles interact via delta functions, so the corresponding one-dimensional Schrödinger equation reads (−(1/2) Σ_i ∂²/∂x_i² + g Σ_{i<j} δ(x_i − x_j)) Ψ = E Ψ, where we put ℏ = m = 1 for convenience. The parameters of the model are γ = g/ρ and e = 2E_N/(Nρ²), where ρ = N/L is the density of the system. Since this model is exactly solvable for any N, L and g, it gives us a good reference point for testing our approach. Note, however, that we do not expect our approach to work extremely well for large systems, as strong correlations preclude the existence of a "true" BEC in one spatial dimension.
To write the initial matrix elements and the reference state, we use the one-body basis of plane waves, i.e., φ_i(x) = e^{i k_i x}/√L, where k_i ∈ {0, ±1, ±2, ...} × 2π/L and Φ = L^{−N/2}. Inspired by the discussion in section 2, we write the generator (30) by explicitly relating the parameters of the generator (17) to the parameters of the Hamiltonian (18) for every s. This generator decouples the element ⟨Φ|H|Φ⟩ from the rest; see figure 1, where we plot ǫ(s) and Σ_{p>0} |⟨Φ_p|H(s)|Φ⟩|², with Φ_p containing one- and two-body excitations. Therefore, the latter quantity represents the coupling to the state of interest. We see that during the flow the couplings vanish, and ǫ(s → ∞) can be interpreted as the eigenvalue of the matrix. Note that we do not plot any numbers on the y axis, as this schematic plot is representative of all considered cases. In our code we use units with L = 2π, which gives a particularly simple form of the momenta and sets the scale for the energy and s in the problem. For example, the energy difference between the two lowest non-interacting states is one, and therefore the slowest dynamics in the weakly interacting case are described approximately by e^{−s}. Figure 1 shows that in these units the decoupling indeed occurs for s of O(1), as expected. The study of the flow for other generators is beyond the scope of the present paper. However, we did check (see Appendix E) that the results obtained with the generator (30) agree with the results obtained using White's generator [14,15], which includes additional energy denominators compared to (30).
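For the two-body problem these ingredients can be written down explicitly: in the relative coordinate the plane-wave matrix elements of the contact interaction are constant, ⟨n|g δ(x)|m⟩ = g/L, and the kinetic term is diagonal. The sketch below diagonalizes the truncated relative-motion Hamiltonian (with the ℏ = m = 1 convention assumed above and L = 2π); it is a minimal N = 2 illustration of the basis and cutoff, not the N-body flow itself.

```python
import numpy as np

def two_body_ground_state(g, n_max, L=2 * np.pi):
    """Lowest eigenvalue of the relative-motion Hamiltonian -d^2/dx^2 + g*delta(x)
    on a ring of length L, in a plane-wave basis |n| <= n_max."""
    ns = np.arange(-n_max, n_max + 1)
    T = np.diag((2 * np.pi * ns / L) ** 2)     # relative kinetic energy
    V = np.full((ns.size, ns.size), g / L)     # <n| g delta(x) |m> = g / L for all n, m
    return np.linalg.eigvalsh(T + V)[0]

for n_max in (5, 10, 20, 40, 80):
    print(n_max, two_body_ground_state(g=1.0, n_max=n_max))
# The slow drift of the lowest eigenvalue with n_max reflects the cusp induced by
# the contact interaction (cf. Appendix C).
```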
Results
N = 4. We start with N = 4. Note that for a cutoff n ≃ 25 (in the one-body sector this corresponds to the maximal energy of 288π²/L²), we can easily run the flow until the states are decoupled with high accuracy. Therefore, we have only two sources of error. The first is due to the truncation of the Hamiltonian at s = 0. This error vanishes in the limit of large n, but since the delta-function potential has a hard core (it couples all plane waves equally strongly) the convergence to the n → ∞ limit might be relatively slow (see Appendix C). However, one can still extract accurate results either by fitting (see Appendix C) or by using an effective interaction (see Appendix D). To be on the safe side, we first solve the problem using the former method and then using the latter. The results of both methods agree well. This is demonstrated explicitly in figures D1 and D2 for two parameter sets. The second error is due to the truncation of the three-body term in Eq. (20). To estimate this error, we note that according to (26), for a weak interaction H_a ≃ ∫ W ds. By definition, the operator W connects the state of interest to the states with three excitations.
Figure 2. The yellow squares are the outcomes of the SRG, and the blue circles additionally include the correction δe; the exact results are taken from [19]. The dashed line represents the ground-state energy in the strong-coupling limit, i.e., e(1/γ = 0). The inset shows the behavior of the correction as a function of −1/γ; the solid line is plotted to guide the eye.
To calculate the contribution to the energy of the perturbation H_a, we use the standard second-order eigenvalue correction from perturbation theory, where the sum goes over all states that contain three particles excited out of the condensate. For consistency, we keep only the lowest terms in g in the denominator. We show our results in figure 2. On the scale of the figure, the results for the bare delta-function interaction and the effective interaction are indistinguishable. We see that the SRG reproduces the exact results at weak and moderate coupling strengths. However, when the interaction strength increases, the energy starts to deviate noticeably. This behavior can be understood by calculating δe. We see that this term grows very rapidly (numerical analysis reveals that in the considered interval it grows faster than γ²) and already at γ = π²/2 it accounts for about 25% of the SRG result. This shows that the used truncation scheme is not accurate for this γ, which is why we stop our calculations there. N = 15. Our results for N = 15 are shown in figure 3. On the scale of the figure, the results for the bare delta-function interaction and the effective interaction are again indistinguishable. We see a similar trend as for N = 4: the SRG reproduces the exact results well at small and moderate coupling strengths but fails to describe strongly interacting systems. The window of applicability of the SRG for N = 15 is slightly smaller than for N = 4, which is expected from our error estimation showing that δe grows with N (see Appendix F).
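The error estimate invoked above is ordinary second-order perturbation theory in the neglected operator. The helper below evaluates δE = Σ_k |⟨k|H_a|0⟩|²/(E_0 − E_k) for given couplings and unperturbed energies; how the couplings to the three-excitation states are obtained from W is specific to the paper and is not reproduced here, so the numbers in the usage line are purely illustrative.

```python
import numpy as np

def second_order_correction(couplings, e0, excited_energies):
    """Standard second-order energy correction: sum_k |<k|H_a|0>|^2 / (E_0 - E_k)."""
    couplings = np.asarray(couplings, dtype=float)
    excited_energies = np.asarray(excited_energies, dtype=float)
    return np.sum(couplings ** 2 / (e0 - excited_energies))

# Toy usage: weak couplings to a few higher-lying states lower the estimate only slightly.
print(second_order_correction([0.05, 0.02], e0=1.0, excited_energies=[3.0, 5.0]))
```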
Conclusions
In this paper we have developed a non-perturbative numerical procedure to address bosonic systems with a macroscopic occupation of one state. The method is based on the SRG approach in which the Hamiltonian is transformed to decouple the state of interest from the rest. This transformation is done through a sequence of infinitesimally small rotations in the state space described by a system of differential equations. To make this system solvable with the standard numerical software, we truncate it at the level of three-body operators, and present means to estimate the introduced uncertainty. To illustrate our approach we turn to the Lieb-Liniger model, which shows that our flow equations describe small systems with weak and moderate interactions well. Note that our method can be used to describe two-and three-dimensional systems and we use here a one-dimensional model because its exact solutions allow us to directly test our procedure (although studies of trapped systems in one spatial dimension are interesting on their own right, see [20] and references therein) .
Our approach will allow one to study the properties of trapped bosons and of systems with static or mobile impurities [21]. It will also be interesting to investigate three-dimensional bosonic bound clusters that appear in different branches of physics, such as 4He clusters in condensed matter physics [22] and α-clusters in nuclear physics [23,24]. In these cases one might need to pick the basis carefully to reduce the numerical effort. For instance, if the system is spherically symmetric then the basis should be chosen accordingly (cf. Ref. [14]).
With some modifications our method can be used to study other set-ups. In particular, we believe that it is possible to extend the method to bosonic systems without a condensate. To this end, one simply follows the steps presented above. First, a reference state is used to normal order the operators. This reference state should describe an eigenstate of the Hamiltonian "well", such that higher-body excitations are suppressed. As in the present work, the normal ordering provides one with a means to truncate the differential equations, opening up the opportunity to approach N-body problems using few-body machinery. Note that a suitable reference state in one-dimensional systems can be obtained from a linear superposition of weakly- and strongly-interacting states [25], providing one with a good starting point for this investigation.
Appendix A. Evaluation of commutators
To write down the flow equations, we need the commutators of the terms in η and H. For the commutator of one-body operators and one-and two-body operators, we find: i a k :, : a † m a n :] = (λ il β ln − β il λ ln )(: a † i a n : +ρ in I), here we assume that β α 1 α 2 α 3 α 4 = β α 2 α 1 α 3 α 4 = β α 1 α 2 α 4 α 3 , the same will be assumed for λ α 1 α 2 α 3 α 4 . Note that with our definition of κ = (N − 1)/(2N) the element proportional to I in the second last row vanishes. For the commutator of the two-body operators, we find: The last term proportional to a † i a † k a † b a l a c a d does not fit in our approximation scheme and should be truncated. Our implementation of this truncation is discussed in Appendix B.
Appendix B. Truncation of the three body operator
To truncate the three-body operator, we assume that the number of particles in the lowest state is large, and thus the main contribution to the ground-state energy is due to the piece of a†_i a†_k a†_b a_l a_c a_d which contains at least one operator a†_0 and one operator a_0. Because of the presence of the condensate, these operators are then treated as numbers.
Appendix C. Convergence of the two-body energy
The delta-function interaction leads to a cusp in the wave function at zero separation of the particles. This non-analyticity implies that accurate results for observables can be obtained only with a large number of plane-wave states. We illustrate this statement by plotting the convergence of the ground-state energy versus the number of one-body basis states for the Lieb-Liniger model with just two particles, see figure C1. This plot shows that even in the two-body system the convergence with n is very slow if g is large.
For large n the convergence pattern in the figure can be well approximated by where A is some constant that depends on g. Note that this convergence is faster than in a harmonic oscillator [26,27] where it is described by ∼ 1/ √ n. As is apparent from the discussion below this difference is connected to a slower growth of the energy with n in a harmonic trap compared to a ring.
To understand this convergence pattern, let us assume that we have diagonalized the matrix for some cutoff n and obtained the energy E_n and the wave function Ψ_n. Now let us see what happens when we diagonalize the Hamiltonian for n + 2. The corresponding matrix includes the matrix for n coupled to the rest via (g/L) e^{2iπ(n_1 x_1 + n_2 x_2)/L}, where at least one of the states n_1, n_2 was not included in the matrix for n. We have assumed that n is large, so Ψ_n ≃ Ψ_∞. The function Ψ_∞(x_1, x_1) ≡ Ψ̄ is constant due to the rotational symmetry of the ring. Now, using the second-order correction from matrix perturbation theory, we calculate the correction to E_n due to the increase of the matrix size. Summing the contributions for different n up to infinity, this equation leads directly to (C.1). In general, the leading-order correction proportional to n^{−δ} is characteristic for delta-function interactions, and we can use it to obtain accurate results from the convergence pattern. We have observed that for a larger number of particles the convergence behavior is also well described by (C.1).
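The extrapolation used throughout the benchmarks, e_n = e_∞ + c n^{−δ}, is a standard nonlinear fit. The snippet below performs it on synthetic data; in practice the inputs would be the energies obtained at successive one-body cutoffs.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(n, e_inf, c, delta):
    return e_inf + c * n ** (-delta)

# Synthetic convergence data standing in for energies at increasing cutoffs n.
ns = np.array([10, 15, 20, 30, 40, 60, 80], dtype=float)
energies = model(ns, e_inf=1.234, c=0.5, delta=1.0)
energies += 1e-5 * np.random.default_rng(0).normal(size=ns.size)

popt, _ = curve_fit(model, ns, energies, p0=(energies[-1], 1.0, 1.0))
e_inf, c, delta = popt
print(f"extrapolated e_inf = {e_inf:.4f}, exponent delta = {delta:.2f}")
```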
Appendix D. Effective Interaction
Another way to produce accurate results for the Lieb-Liniger model is to use some effective potential that reproduces the low-energy properties of the system. For relevant studies of cold atomic systems see references [28,29]. To introduce this potential, we first notice that all we need to know about the interaction in our formalism is the corresponding two-body matrix element, where |n_i| ≤ n_max and n_max is the truncation parameter defined by n. Such a matrix element also appears when we solve the Schrödinger equation in the 'relative' coordinate by expanding the wave function ψ in the plane-wave basis, i.e., ψ = (1/√L) Σ_{|n_l| ≤ n_max} a_l e^{−2πi n_l x/L}, and solving the corresponding matrix eigenvalue equation. Here Q is the matrix that contains the eigenvectors as columns, E is the diagonal matrix that contains the eigenvalues, and T is the kinetic energy. Now we can turn the question around and find the potential that, within our truncation space, gives some specific matrices E and Q. Such a potential reads V = Q E Q^T − T. Let us now specify the desired low-energy properties. First of all, we fix the energies E_αα to the n lowest eigenenergies of the exact two-body equation; this choice means that in the two-body sector we always obtain correct energies. Next, we define the matrix Q in terms of an n × n matrix u, whose definition is fixed by the desired eigenstates. We see that if n → ∞ then u^T u → 1, and we have Q → u. Therefore, the matrix Q is an orthogonal matrix that approximates the eigenstates, and for n → ∞ it reproduces the exact results.
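The construction described above, choosing E and Q and defining the potential that reproduces them inside the truncated space, amounts to V_eff = Q E Q^T − T. The sketch below builds such a matrix for the two-body relative problem; as a stand-in for the matrix u, whose precise definition is not reproduced in this extraction, it uses the exact large-basis eigenvectors projected onto the truncated basis and re-orthogonalized.

```python
import numpy as np

def relative_hamiltonian(g, n_max, L=2 * np.pi):
    ns = np.arange(-n_max, n_max + 1)
    T = np.diag((2 * np.pi * ns / L) ** 2)       # relative kinetic energy
    V = np.full((ns.size, ns.size), g / L)       # contact interaction in plane waves
    return T, V

big, small, g = 200, 10, 1.0
T_big, V_big = relative_hamiltonian(g, big)
E_big, C_big = np.linalg.eigh(T_big + V_big)     # large-basis reference solution

dim = 2 * small + 1
T_small, _ = relative_hamiltonian(g, small)
inside = slice(big - small, big + small + 1)     # rows of the big basis lying in the small basis

u = C_big[inside, :dim]                          # projected exact eigenstates (stand-in for u)
Q, _ = np.linalg.qr(u)                           # re-orthogonalized, so Q is orthogonal
E = np.diag(E_big[:dim])                         # the lowest exact two-body energies

V_eff = Q @ E @ Q.T - T_small                    # effective interaction in the truncated space
print(np.allclose(np.linalg.eigvalsh(T_small + V_eff), E_big[:dim]))  # True by construction
```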
The effective interaction shows faster convergence than the zero-range interaction, see figures D1 and D2, which depict our results for a few representative cases. By comparing the fitted values for the zero-range interaction and for the effective interaction, we cross-check the two methods and ensure the accuracy of our results. The convergence pattern for the delta-function potential is usually well described by (C.1). Note that we cannot directly apply the same line of arguments to find the convergence pattern for the effective interaction potential. Indeed, in this case the increase of the matrix size leads to a change of all matrix elements, and, therefore, standard perturbation theory cannot be used.
Appendix E. Other generators
In section 2, we present examples of different generators that can be used to create the flow, see also [12,13,14,15,16]. In the main text, we illustrate our method using exclusively the operator (30) and leave other generators for future studies. Note that other η can be used directly in the derived flow equations (21)-(23) after the parameters of the generator (17) are specified. In this appendix, we briefly discuss the use of the White-type generator η W hite (s) = ξ i0 (s) : a † i a 0 : + .
Figure D2. The relative error in energy e_n/e_∞ − 1 as a function of the one-body truncation n. The parameters are N = 15, L = 2π, and γ = 5π²/21 ∼ 2.35. The value e_∞ is obtained from the fit e_n = e_∞ + c n^{−δ}, where e_∞, c, δ are the fitting parameters.
The values e ∞ obtained for the effective interaction and the delta potential differ by less than 1 %.
This generator is similar to the one in (30), but it has additional energy denominators. As can be inferred from section 2, for weak interactions this leads to the simultaneous decay of all couplings with e^{−s}. Without truncation, the operators η_White and η in (30) define a unitary transformation and consequently lead to the exact energies. Our truncation scheme spoils this property, but it turns out that for the considered cases the results of the two generators are still very close to each other. We illustrate this statement in figure E1 for N = 4 and γ = π²/2. The correction δe for this case accounts for about a quarter of the SRG result, meaning that the truncation procedure is no longer accurate; still, the relative difference between the two results is a fraction of a percent. Therefore, for this problem these two generators can be used interchangeably.
Figure E1. The relative difference e_n/e_n^White − 1 as a function of the one-body truncation n. Here e_n is calculated using the generator η in (30), and e_n^White using η_White. The parameters are N = 4, L = 2π, and γ = π²/2 ∼ 4.93. The fit e_∞/e_∞^White − 1 + c n^{−δ}, where e_∞/e_∞^White, c, δ are the fitting parameters, leads to e_∞/e_∞^White − 1 ≃ 0.002.
Appendix F. Dependence of δe on N.
We are not able to provide a simple analytical analysis if the terms with NS_{α1α2α3α4α5α6} in (23) are large. Instead, we investigate this case numerically. To this end, we choose to work with γ = 0.1. We find (see figure F1) that the ratio δe/e increases with N, however slower than N^4; fitting suggests a much milder ∼N^2 scaling in this window of N.
Figure F1. The ratio δe/e as a function of N for the Lieb-Liniger model with γ = 0.1. Here e is the energy per particle and the correction δe is calculated using (31). | 7,565.6 | 2017-05-08T00:00:00.000 | [
"Physics"
] |
Radial diffusion modeling with empirical lifetimes: comparison with CRRES observations
Abstract. A time dependent radial diffusion model is used to quantify the competing effects of inward radial diffusion and losses on the distribution of the outer zone relativistic electrons. The rate of radial diffusion is parameterized by Kp with the loss time as an adjustable parameter. Comparison with HEEF data taken over 500 Combined Release and Radiation Effects Satellite (CRRES) orbits indicates that 1-MeV electron lifetimes near the peak of the outer zone are less than a day during the storm main phase and few days under less disturbed conditions. These values are comparable to independent estimates of the storm time loss rate due to scattering by EMIC waves and chorus emission, and also provide an acceptable representation of electron decay rates following the storm time injection. Although our radial diffusion model, with data derived lifetimes, is able to simulate many features of the variability of outer zone fluxes and predicts fluxes within one order of magnitude accuracy for most of the storms and L values, it fails to reproduce the magnitude of flux changes and the gradual build up of fluxes observed during the recovery phase of many storms. To address these differences future modeling should include an additional local acceleration source and also attempt to simulate the pronounced loss of electrons during the main phase of certain storms.
Introduction
The radiation belts consist of electrons and protons trapped by the Earth's magnetic field. Protons form a single radiation belt while electrons exhibit a two zone structure. The inner electron belt is located typically between 1.2 and 2.0 R_E, while the outer zone extends from 4 to 8 R_E. The quiet time region of lower electron fluxes is commonly referred to as a "slot" region. The inner belt is very stable and is formed by a slow, inward radial diffusion subjected to losses due to Coulomb scattering and whistler mode pitch angle diffusion (Lyons and Thorne, 1973; Abel and Thorne, 1998). The observed variability of electrons in the outer radiation belt is due to the competing effects of source and loss processes. Reeves et al. (2003) showed that approximately half of all geomagnetic storms result in a net depletion of the outer radiation belt or do not substantially change relativistic electron fluxes as compared to pre-storm conditions, while the remaining 50% result in a net flux enhancement. Non-adiabatic interactions with various plasma waves may result in acceleration of electrons, while pitch-angle scattering causes diffusion of electrons into the loss cone where they are removed by collisions with atmospheric particles on the time scale of one quarter bounce period.
In the present study we use a data-model comparison technique to estimate electron lifetimes. This is a first attempt to derive the physical parameters which could be used as a reference for theoretical estimates. In the Discussion section we speculate on the possible theoretical interpretation of the results.
Particle motion and diffusion
High energy electrons in the radiation belts undergo three types of periodic motion: 1. Gyro motion around field lines (∼ms); 2. Bounce motion in the meridian plane between mirror points (∼s.); 3. Gradient and curvature drift around the Earth (∼10 min).
Each type of periodic motion has an associated adiabatic invariant, referred to as the 1st, 2nd and 3rd adiabatic invariants (µ, J, and Φ, or J_1, J_2, and J_3), respectively. By ignoring processes which result in jumps in phase space, and neglecting diffusion with respect to the phases of the adiabatic motion, the evolution of the phase space density f can be described in terms of the Fokker-Planck Eq. (1) (Schulz and Lanzerotti, 1974), a plausible form of which is sketched below; it has the form of a diffusion equation when written in terms of canonical variables such as the adiabatic invariants, where Einstein notation with summation over repeated indexes is used.
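Eq. (1) itself is not reproduced in this extraction; under the standard Schulz and Lanzerotti (1974) formulation it presumably takes the diffusion form

\frac{\partial f}{\partial t} = \sum_{i,j} \frac{\partial}{\partial J_i}\left( D_{J_i J_j}\, \frac{\partial f}{\partial J_j} \right),

where the J_i are the adiabatic invariants and D_{J_i J_j} the corresponding diffusion coefficients.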
Losses in the inner magnetosphere create gradients in phase space density (usually directed away from the Earth). Radial diffusion driven by ULF waves acts to reduce such gradients by transporting particles radially inward, which violates the third adiabatic invariant. Since the period of ULF waves is much longer than the time scales associated with the first and second adiabatic invariants, only the third adiabatic invariant is violated. Conservation of the first and second adiabatic invariants consequently results in the acceleration of particles during the inward transport. If we ignore local acceleration and rewrite Eq. (1) in terms of L, assuming a dipole field, we obtain the radial diffusion equation in the form sketched below, where τ is the electron lifetime and D_LL is the radial diffusion coefficient. In this formulation the first two adiabatic invariants µ and J are held constant and Eq. (2) can be solved numerically for f(L, t).
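Eq. (2) is likewise omitted in this extraction; its standard form with a loss term, consistent with the definitions of τ and D_LL given above, is presumably

\frac{\partial f}{\partial t} = L^2 \frac{\partial}{\partial L}\left( \frac{D_{LL}}{L^2}\, \frac{\partial f}{\partial L} \right) - \frac{f}{\tau}.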
While the equilibrium structure of high energy electron fluxes and the formation of the slot region have been accurately modeled under quiet conditions (Lyons and Thorne, 1973), the dynamics of relativistic electrons during geomagnetic disturbances is still poorly understood. In the present study we attempt to estimate the lifetime parameter by adopting an empirical relationship for the rate of radial diffusion due to magnetic fluctuations (Brautigam and Albert, 2000), which tends to dominate throughout the outer radiation zone:
D_LL^M(Kp, L) = 10^(0.506 Kp − 9.325) L^10, Kp = 1 to 6. (3)
Solutions of the time dependent code, ignoring the effects of local acceleration sources and only considering radial diffusion with losses, are compared to CRRES observations.
Model description
The inner boundary for our simulation, f(L=1)=0, is taken to represent loss to the atmosphere. The outer boundary condition on the phase space density is obtained from the fluxes at L=7. Even though fluxes near geosynchronous orbit vary significantly during a storm, CRRES measurements will be highly affected by adiabatic variations (Kim and Chan, 1997). Consequently, in this study we use constant boundary conditions based on averaged fluxes at L=7 obtained from CRRES and Polar measurements (N. Meredith and P. O'Brien, personal communication). We model fluxes by an exponential fit (a plausible form is sketched below), where K is the kinetic energy in MeV, obtained from the time-averaged satellite flux measurements. Variations of the outer boundary conditions may create outward gradients in phase space density, which will drive inward radial diffusion and could result in significant electron losses during the main phase of the storm. Inclusion of L*-derived time dependent boundary conditions for various existing field models will be deferred to future research.
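The exponential fit itself is not shown in this extraction; a plausible form, with a hypothetical amplitude j_0 and e-folding energy E_0 (neither taken from the paper), would be

j(K) = j_0 \exp(-K/E_0),

with K the kinetic energy in MeV as stated above.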
For simplicity we first assume that the diffusion coefficients and lifetimes are independent of energy and solve Eq. (2) for f(L, t), normalized to unity at the outer boundary. This solution will be the same for all µ values. Consequently, to obtain f(E, L) the normalized phase space density should be multiplied by J(E*)/p*², where E* and p* are the kinetic energy and momentum of the particles for any prescribed value of µ at the outer boundary and J is the differential flux at the outer boundary. Following Shprits and Thorne (2004), lifetimes are parameterized as a function of Kp.
Simulations of 500 CRRES orbits
We describe a numerical experiment which starts on 30 July, DOY 211 (the number of days since the start of 1990). We simulate MeV electron fluxes for 196 days, which approximately corresponds to 500 orbits of the CRRES measurements. The second panel in Fig. 1 shows 1 MeV electron fluxes measured by the High Energy Electron Fluxmeter (HEEF) on the CRRES satellite for the outer radiation belt. The 1 MeV electron fluxes show significant variability, by three orders of magnitude, with fluxes maximizing between 3.5 and 4.5 R_E. The periods of enhanced storm time electron fluxes vary in duration from a few days to two weeks. The substantial depletions of the outer radiation belt prior to the increases in fluxes are most likely associated with increased wave activity during the main phase of the storm, but might also be caused by variations of fluxes at the outer boundary, which are not taken into account in these simulations.
The third panel shows simulated electron fluxes with a constant lifetime parameter of 10 days at all L, which is comparable to expected loss times from plasmaspheric hiss (Lyons et al., 1972; Abel and Thorne, 1998). Model results with a 10-day lifetime globally overestimate fluxes at all L.
The unrealistic refilling of the slot throughout most of the simulation, and the month-long duration of increased storm-time fluxes, indicate that a 10-day lifetime is unrealistically long.
The top panel of Fig. 1 shows simulations with empirical lifetimes parameterized as a function of Kp. The model is initiated with a quiet-time steady-state solution. In finding the best parameterizations we attempt to minimize the differences between model results and observations for the following parameters: the location of the maximum in fluxes, the variation in fluxes in the outer zone, and the demarcation line between high and low fluxes near the inner edge of the outer radiation belt. The best simple fit to the lifetime parameter that we visually found to minimize differences in the above parameters is τ = 3/Kp days outside the plasmapause, which gives τ ≈ 3 days during quiet times and less than a day during storms. Inside the plasmapause we set lifetimes to 10 days. The approximate plasmapause location is computed according to Carpenter and Anderson (1992).
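To make this setup concrete, the following Python sketch shows, under simplified and assumed numerics (the grid, time step, constant Kp history, and plasmapause estimate are placeholders, not the values used in the study), how the Kp-parameterized D_LL of Eq. (3), the lifetime τ = 3/Kp days outside the plasmapause (10 days inside), and an explicit finite-difference step of the radial diffusion equation could be combined.

import numpy as np

def d_ll(kp, L):
    # Brautigam and Albert (2000) magnetic radial diffusion coefficient, Eq. (3), in 1/day
    return 10.0**(0.506 * kp - 9.325) * L**10

def lifetime(kp, L, L_pp):
    # tau = 3/Kp days outside the plasmapause, 10 days inside
    return np.where(L > L_pp, 3.0 / kp, 10.0)

def step(f, L, dL, dt, kp, L_pp):
    # one explicit step of df/dt = L^2 d/dL (D_LL/L^2 df/dL) - f/tau
    D = d_ll(kp, L)
    Dp = 0.5 * (D[1:-1] + D[2:]) / (0.5 * (L[1:-1] + L[2:]))**2    # (D/L^2) at i+1/2
    Dm = 0.5 * (D[1:-1] + D[:-2]) / (0.5 * (L[1:-1] + L[:-2]))**2  # (D/L^2) at i-1/2
    diff = L[1:-1]**2 * (Dp * (f[2:] - f[1:-1]) - Dm * (f[1:-1] - f[:-2])) / dL**2
    loss = f[1:-1] / lifetime(kp, L[1:-1], L_pp)
    f_new = f.copy()
    f_new[1:-1] = f[1:-1] + dt * (diff - loss)
    f_new[0] = 0.0    # inner boundary f(L=1) = 0: loss to the atmosphere
    f_new[-1] = 1.0   # outer boundary: phase space density normalized to unity at L=7
    return f_new

L = np.linspace(1.0, 7.0, 61)
dL = L[1] - L[0]
dt = 1.0e-4                                  # days; chosen to satisfy the explicit stability limit
kp_history = np.full(int(10.0 / dt), 4.0)    # 10 days of constant Kp = 4, illustrative only
L_pp = 5.6 - 0.46 * kp_history.max()         # crude plasmapause estimate in the spirit of Carpenter and Anderson (1992)

f = np.zeros_like(L)
f[-1] = 1.0
for kp in kp_history:
    f = step(f, L, dL, dt, kp, L_pp)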
On a time scale of days we are able to approximately reproduce the location of the flux maxima, the radial extent of enhanced fluxes, and the post storm decay of fluxes in the outer zone.The sharp increases in fluxes during the main phase of the storm are probably due to unrealistic constant boundary conditions and our neglect of more intense wave scattering during the main phase of a storm when radial diffusion rates are the highest.The radial diffusion model also fails to reproduce the duration of flux enhancements of many storms, as well as the gradual build-up of fluxes during storms, which is described in more detail in Sect. 5.
Figure 2 shows the logarithm of the ratio of observed to modeled fluxes. During the first 15 days of the simulation there is a two orders of magnitude difference between the model and the observations due to inaccurate initial conditions. However, after 20 days the model reaches a dynamical state which is independent of the initial conditions. Prolonged orange and red areas show intervals where the radial diffusion model underestimates fluxes by an order of magnitude. We attribute this discrepancy to our neglect of a local acceleration source throughout the recovery phase of storms. Short-lasting blue areas correspond to an overestimation of fluxes by the model, which could be due to an underestimation of losses or to the unrealistic constant boundary conditions during the main phase of storms.
Simulations of October 1990 storm
We use our optimized loss time scale to model the 9 October 1990 storm (Brautigam and Albert, 2000; Meredith et al., 2002; Summers et al., 2002) (Fig. 3, top panel). Observations show a sudden drop in fluxes throughout the outer radiation belt during the main phase of the storm, which starts on DOY 283 (top panel). However, since we have chosen constant boundary conditions, radial diffusion is unable to reproduce the main phase decrease of fluxes. The Kp index (bottom) reaches its maximum value of 6 on DOY 283, which induces a rapid increase in modeled 1-MeV fluxes (middle panel) during the storm main phase. The radial diffusion model also predicts a decay of fluxes right after the main phase of the storm, contrary to High Energy Electron Fluxmeter (HEEF) measurements on CRRES, which indicate that fluxes maximize several days into the recovery phase and stay high for almost 10 days after the onset of the storm, with peak fluxes above 10^7 (cm^2 sr s MeV)^-1. This discrepancy between the radial diffusion model and observations can be mostly explained by the influence of an additional local acceleration source which was not included in the model. Based on CRRES observations, Meredith et al. (2002) showed that this event contained prolonged substorm activity during the recovery phase of the storm, with an AE index above 100 for 6 days. The VLF wave intensity was above 1000 pT^2 day over the range 3.5 < L < 6.5, with a peak value of more than 10^4 pT^2 day around L = 5. Note that relatively high geomagnetic activity keeps the plasmapause compressed throughout the recovery phase. This combination of a compressed plasmapause and increased VLF activity creates favorable conditions for local acceleration (Horne et al., 2003) throughout the recovery phase of the storm, in a broad spatial region outside the plasmapause.
Discussion
The study reported here presents the first attempt at a data-model derived empirical estimation of the lifetime parameter. The radial diffusion model with simplified, data-derived lifetimes is capable of predicting the radial extent of high energy fluxes and the locations of peak fluxes for many storms, and predicts MeV fluxes within one order of magnitude accuracy for most of the simulation time and most L values. Our results indicate that lifetimes range from less than a day during active conditions to a few days under less disturbed conditions.
The simulation described above indicates that pitch-angle scattering (perhaps due to chorus waves) provides a dominant loss of high-energy electrons during the recovery phase of storms. Theoretical estimates of pitch-angle diffusion coefficients, as well as combined SAMPEX-Polar measurements (Thorne et al., 2005a), also suggest that losses due to chorus waves could be dominant in the outer radiation belt and result in loss time scales comparable to a day.
Our radial diffusion model fails to reproduce the gradual build-up of fluxes observed during many storms, and this suggests that local acceleration is required to accurately model the dynamics of electron fluxes during storms. A combination of inward radial diffusion driven by ULF waves and local stochastic acceleration and loss, resulting from interactions with whistler mode and other waves, as well as outward radial diffusion caused by variations near geosynchronous orbit, is responsible for the formation and variability of the outer radiation belt. Pitch-angle scattering outside the plasmasphere provides an effective loss mechanism which operates on a time scale similar to that of radial diffusion or local acceleration. Main phase losses due to chorus emissions are greatly enhanced, with loss times falling to less than a day outside the plasmapause. Even more rapid pitch-angle scattering by EMIC waves may provide local loss on the scale of a few hours during the main phase of a storm (Albert, 2003; Summers and Thorne, 2003). The effect of losses at high L-values and outward radial diffusion will be a subject of future studies. As a consequence, losses can dominate over sources during the main phase of the storm and create a net depletion of the radiation belts. During the extended storm recovery, losses become less important (e.g. O'Brien et al., 2004), and the combined effect of a local acceleration source together with radial diffusion can lead to an enhancement of radiation belt fluxes for a period of up to 10 days after the main phase of the storm.
Various feedback mechanisms become important in regions where local acceleration, losses and radial diffusion act simultaneously and on similar time scales. Radial diffusion driven by local stochastic loss at lower L shells may be an important source of relativistic electrons. On the other hand, localized acceleration may create peaks in the phase space density which will be smoothed out by radial diffusion. In this situation outward radial diffusion may act as a local loss process. To account for various feedback mechanisms between loss and source processes, a full 3-D model of the radiation belts, solving the Fokker-Planck Eq. (1), should be used. This model should account for major loss and source processes at all L values. The results of the model should be compared to fluxes as a function of L*, so that adiabatic variations are filtered out. Future modeling should also include automated parameter estimation tools which could be applied to various source and loss mechanisms.
Fig. 1. Comparison between 0.95 MeV electron fluxes in log_10 (cm^2 sr s MeV)^-1 computed by our radial diffusion model with empirical lifetimes (first panel) and electron flux measurements on the CRRES satellite (second panel). Model simulations with constant lifetimes of 10 days are shown in the third panel. The fourth panel shows the evolution of the Kp index used for the calculation of D_LL and τ.
Fig. 2. Logarithm of the ratio of 0.95 MeV HEEF CRRES electron fluxes to those produced by the optimized radial diffusion model.
Fig. 3. Comparison of electron fluxes in log_10 (cm^2 sr s MeV)^-1 measured by CRRES at 0.95 MeV (top) and our radial diffusion model simulations with empirical lifetimes (middle). Evolution of the Kp index (bottom). | 3,855.6 | 2005-06-03T00:00:00.000 | [
"Physics"
] |
Stability of Quartic Functional Equation in Modular Spaces via Hyers and Fixed-Point Methods
In this work, we introduce a new type of generalised quartic functional equation and obtain the general solution. We then investigate the stability results by using the Hyers method in modular space for quartic functional equations without using the Fatou property, without using the ∆b-condition and without using both the ∆b-condition and the Fatou property. Moreover, we investigate the stability results for this functional equation with the help of a fixed-point technique involving the idea of the Fatou property in modular spaces. Furthermore, a suitable counter example is also demonstrated to prove the non-stability of a singular case.
Introduction
Functional equations play a crucial role in the study of stability problems in several frameworks. Ulam was the first to question the stability of group homomorphisms, and this opened the way to work on stability problems (see [1]). Using Banach spaces, Hyers [2] solved this stability problem by considering Cauchy's functional equation. Hyers' work was expanded upon by Aoki [3] by assuming an unbounded Cauchy difference. Rassias [4] presented work on additive mappings, and these kinds of results were further developed by Găvruţa [5].
The concept of generalised Hyers-Ulam stability derives from these historical contexts, and this problem has been studied for different kinds of functional equations (FE). The quadratic functional equation is connected to a biadditive symmetric function (see [11,12]), and each such equation is naturally referred to as a quadratic FE. Any solution of Equation (1) is a quadratic function. A function φ : E → E (E: real vector space) is said to be quadratic if there is a unique symmetric biadditive function T satisfying φ(u) = T(u, u) for all u (see [11,12]). Jun and H. M. Kim [13] first presented a functional equation, Equation (2), which differs from Equation (1) in various ways. It is clear that the function φ(v) = cv^3 is a solution of Equation (2). As a consequence, it is natural to say that Equation (2) is a cubic FE, and so every solution of Equation (2) is a cubic function. In [14], Lee et al. presented the quartic FE, Equation (3), found its solution and demonstrated its H-U-R stability. It is simple to demonstrate that φ(v) = cv^4 satisfies Equation (3), so this equality is called the quartic FE and its solution is called a quartic mapping (QM). Apart from direct approaches, the fixed-point method is the most often used method for establishing the stability of FEs (see [15][16][17]). In [18], the authors proposed a generalised quartic FE and investigated Hyers-Ulam stability in modular spaces using a fixed-point method as well as the Fatou property. Many research papers on different generalisations and on the implications of generalised H-U stability for various functional equations have been published recently (see [19][20][21][22][23][24][25]).
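Equations (1)-(3) referenced above are not reproduced in this extraction; their presumed standard forms from the cited literature (an assumption, since the original displays are missing) are the quadratic, cubic (Jun and Kim), and quartic (Lee et al.) functional equations

\phi(u+v) + \phi(u-v) = 2\phi(u) + 2\phi(v), \quad (1)

\phi(2u+v) + \phi(2u-v) = 2\phi(u+v) + 2\phi(u-v) + 12\phi(u), \quad (2)

\phi(2u+v) + \phi(2u-v) = 4\phi(u+v) + 4\phi(u-v) + 24\phi(u) - 6\phi(v). \quad (3)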
To obtain our results, we define a new quartic FE, Equation (4). We investigate certain stability results for this quartic FE, based on the Hyers and fixed-point methods involving the idea of the Fatou property and the ∆_b-condition in the framework of modular spaces. Here, we consider the different cases to obtain our results: (i) with only the Fatou property, (ii) with only the ∆_b-condition, and (iii) without the Fatou property and the ∆_b-condition.
We begin by considering some fundamentally important concepts. Consider E to be a linear space over K (C or R). We call a functional ρ : E → [0, ∞] a modular if (a) ρ(u) = 0 if and only if u = 0, (b) ρ(αu) = ρ(u) for every scalar α with |α| = 1, and (c) ρ(βu + γv) ≤ ρ(u) + ρ(v) whenever β + γ = 1 and β, γ ≥ 0. If the inequality in (c) is replaced by (c') ρ(βu + γv) ≤ βρ(u) + γρ(v), then ρ is said to be convex modular.
Note that a modular ρ defines the vector space E_ρ = {u ∈ E : ρ(λu) → 0 as λ → 0}, and E_ρ is known as a modular space. Let E_ρ be a modular space and {u_n} a sequence in E_ρ. Then: (1) if ρ(u_n − u) → 0 as n → ∞, {u_n} is said to be ρ-convergent to u ∈ E_ρ. The modular ρ is said to have the Fatou property if and only if ρ(u) ≤ lim inf_{n→∞} ρ(u_n) whenever the sequence {u_n} in the modular space E_ρ is ρ-convergent to u.
The modular ρ is said to satisfy the ∆_b-condition if there exists k_b > 0 such that ρ(bu) ≤ k_b ρ(u) for all u; in this case, k_b is the ∆_b-constant related to the ∆_b-condition.
Definition 2 ([34]). Suppose the sequence {v_n} lies in a modular space V_ρ. Then, we say that for any l, m ∈ A. The J orbit around a point u is Then, the quantity is known as the orbital diameter of J at u. If Υ_ρ(J) < ∞ holds, J is said to have a bounded orbit at u (see [34]). Proposition 1 ([35]). In modular spaces, (1) if u_n →ρ u and w is a constant vector, then u_n + w →ρ u + w, and It should be noted that if α is chosen from the equivalent scalar field with |α| > 1 in modular spaces, the convergence of a sequence {u_n} to u does not mean that {αu_n} converges to αu. Many mathematicians established additional criteria on modular spaces in order for the multiples of the convergent sequence {u_n} in the modular spaces to naturally converge.
The modular ρ has the Fatou property if ρ(u) ≤ lim inf_{n→∞} ρ(u_n) whenever {u_n} is ρ-convergent to u.
Main Results
It follows, by replacing v with 3v in Equation (5), that Now, we obtain, by replacing v with 3v in Equation (6), that In general, for any n ∈ Z+ (the set of positive integers), we have Thus, the function φ is even and is a solution of the quartic FE. Therefore, φ is quartic. Finally, by replacing (v_1, v_2, v_3, v_4) by (u, u, v, 0) in Equation (4), we obtain Equation (3).
Stability of Quartic FE: Hyers Method
Consider a modular ρ as semi-convex. The Hyers-Ulam stability of Equation (4) in modular spaces is an important theorem in the absence of the Fatou condition.
For notational convenience, we define a mapping φ : E → F_ρ (E: linear space; F_ρ: ρ-complete semi-convex modular space) by Theorem 2. Let b ≥ 3 be an integer. Suppose F_ρ satisfies the ∆_b-condition. If a mapping ψ : E^4 → [0, ∞) exists for which a mapping φ : then there is a unique QM Q : E → F_ρ, defined by for all v ∈ E.
for all v ∈ E. So, even without utilising the Fatou property, the ∆ b -condition shows that the inequality holds for an integer l > 1 and for all v ∈ E. Taking l → ∞, we have the inequality (8). (7), we see that From the semi-convexity of ρ, it follows that and all non-negative integers l > 1. Taking the limit as l → ∞, we can see that Q is quartic. We suppose a QM Q : E → F ρ to demonstrate the uniqueness of Q. The function Q satisfies the inequality Taking l → ∞, we finally find that Q is unique, which completes the proof.
then there is an unique QM Q : Corollary 2. Let b ≥ 3 be an integer. Suppose that a normed space E with · and F ρ satisfies then there is an unique QM Q : An alternative stability theorem for Equation (4) in modular spaces will be proved without the ∆ b -condition, given below. Theorem 3. Let b ≥ 3 be an integer. Let F ρ satisfy the Fatou property. If a mapping φ : E → F ρ satisfies the inequality (7) and a mapping ψ : then there is an unique QM Q : Proof. By replacing v 1 = v and v 2 = v 3 = v 4 = 0 in Equation (7), we obtain Without using ∆ b -condition, the above inequality becomes for all v ∈ E and for all integers l > 1. This yields i.e., lim Then, based on the Fatou property, it follows that the inequality Now, we assert that Q satisfies the quartic FE. It should be noted that: and all l ∈ N. As a result of the semi-convexity of ρ, we can see that holds for all v 1 , v 2 , v 3 , v 4 ∈ E, and then taking l → ∞, we obtain ρ 1 361 Q(v 1 , v 2 , v 3 , v 4 ) = 0. As a result, Q must be quartic.
To demonstrate that the function Q is unique, we consider that Q : E → F ρ is an another quartic function which satisfies the inequality (10). As Q and Q are quartic, as evidenced by the previous equality, for all v ∈ E. Taking l → ∞, we conclude that Q = Q . Hence, Q is the only quartic mapping near φ that satisfies the inequality (10). Corollary 3. Let b ≥ 3 be an integer. Suppose that a normed space E with · and F ρ satisfy the Fatou property. For any λ > 0 and α ∈ (−∞, 4) are real numbers, if a mapping φ : then there is an unique QM Q : Corollary 4. Let b ≥ 3 be an integer. Suppose that a normed space E with · and F ρ satisfy the Fatou property. For any λ > 0 and 4α ∈ (−∞, 4) are given real numbers, if a mapping φ : E → F ρ such that then there is an unique QM Q : The upcoming proposition is a revised version of modular stability results of Theorem 3 in [36], which does not need the ∆ b -condition of ρ, which is given below.
Proposition 2.
Let F ρ satisfy the Fatou property. If a mapping φ : E → F ρ satisfy the inequality (7) and a mapping ψ : then there is an unique QM Q : Now, in modular spaces, we present an alternative stability Theorem 2 that does not utilise both the Fatou property and the ∆ b -condition.
Theorem 4. If a mapping φ : E → F_ρ satisfies the inequality (7) and a mapping ψ : then there is a unique QM Q : for all v ∈ E.
Proof.
Letting v 1 = v and v 2 = v 3 = v 4 = 0 in inequality (7), one has and then the semi-convexity of ρ and ∑ l−1 for all v ∈ E and all l > 0. By the similar argument of the proof of Theorem 3, we have a ρ-Cauchy sequence { φ(b l v) b 4l } and the limit of function Q : E → F ρ defined as i.e., lim for all v ∈ E without employing the Fatou property and the ∆ b -condition. Furthermore, as in the proof of Theorem 2, one may show that Q satisfies Equation (4). Now, without invoking the Fatou property and the ∆ b -condition, we verify the inequality (11) of φ by Q. By utilizing the semi-convexity of ρ and ∑ l−1 for all integer l > 1 and for all v ∈ E. We arrive to the conclusion by using l → ∞.
Corollary 5. Let b ≥ 3 be an integer. Suppose that a normed space E with · . Any λ > 0 and α ∈ (−∞, 4) are real numbers if a mapping φ : then there is an unique QM Q : where v = 0 if α < 0. Corollary 6. Let b ≥ 3 be an integer. Suppose that a normed space E with · . Any λ > 0 and 4α ∈ (−∞, 4) are real numbers, if a mapping φ : E → F ρ such that then there is an unique QM Q : for all v, v 1 , v 2 , v 3 , v 4 ∈ E and for some L ∈ (0, 4). If a mapping φ : E → F ρ satisfies Equation (7), then there is an unique QM Q :
Stability of Quartic FE: Fixed-Point Method
Theorem 5. Let b ≥ 3 be an integer and a mapping ψ : for all v_i ∈ E; i = 1, 2, 3, 4, with 0 < L < 1. If an even mapping φ : for all v_i ∈ E; i = 1, 2, 3, 4, then there is a unique QM Q_4 : for all v ∈ E.
Proof. We define the set and ρ is a function on Υ as Now, we need to demonstrate that the function ρ is a semi-convex modular on Υ. Clearly, ρ holds conditions (a) and (b). So, it is enough to verify that ρ is semi-convex modular. Given ε > 0, ∃ λ 1 > 0 such that Since ε > 0 was arbitrary, from above, we find that ρ is semi-convex modular on Υ. Next, we want to verify that Υ ρ is ρ-complete.
for all n, m ≥ n 0 . Thus, we have for all v ∈ E, and n, m ≥ n 0 . Therefore, a ρ- Now, let us define a mapping p : E → F ρ by We arrive by taking into account Equation (15) that since ρ holds the Fatou property. Thus, {p n } ρ-converges and so Υ ρ is ρ-complete.
We now want to prove that ρ holds Fatou property.
For all ε > 0, consider a constant λ n (n ∈ N) which is real such that for all v ∈ E. We know that ρ holds the Fatou property, so we obtain Thus, we obtain since ε > 0 was arbitrary. Hence, ρ also holds the Fatou property. Let us define a mapping χ : Υ ρ → Υ ρ by Suppose p, q ∈ Υ ρ and λ ∈ [0, 1] with ρ(p − q) < λ (λ is an arbitrary constant). Employing the definition of ρ, we write Using Equations (12) and (16), we have which means that χ is a ρ-contraction. Now, we will show that χ has a φ bounded orbit. In Replacing v with bv in inequality (17), we obtain (bv, 0, 0, 0), ∀v ∈ E.
By using Equations (17) and (18), we obtain Clearly, by induction, It follows from Equation (19) that for n, m ∈ N and all v ∈ E. We conclude that by defining ρ, This means that an orbit of χ at φ is bounded. The sequence of {χ n φ} ρ-converges into Q 4 ∈ Υ ρ , according to Theorem 1.5 in [34]. Now, we have the ρ-contractivity of χ, where Taking the limit n → ∞ and apply ρ Fatou property, we get Thus, we have Letting l → ∞, we obtain Theorem 1, Q 4 is quartic. So, the inequality (19) gives (14). Let Q 4 : E → F ρ be an another QM that meets inequality (14) to prove the uniqueness of Q 4 . Thus, Q 4 is a fixed point of χ, so This yields ρ Q 4 − Q 4 = 0. Consequently, Q 4 = Q 4 . which proves the uniqueness of function Q 4 .
Corollary 7. Let b ≥ 3 be an integer and a mapping
with 0 < L < 1. If φ : E → F is an even mapping with φ(0) = 0 such that for all v i ∈ E; i = 1, 2, 3, 4, so there is an unique QM Q 4 : E → F having i=1 v i p and taking L = b p−4 in the last corollary, then we arrive at the stability result for the sum of norms as where p (p < 4) and α are constants.
Theorem 6. Let b ≥ 3 be an integer. Suppose a mapping ψ : with 0 < L < 1. If a mapping φ : E → F ρ is even with φ(0) = 0 such that the inequality (13) holds, then there is an unique QM Q 4 : E → F ρ having Proof. Consider the set Let ρ be a function on Υ, defined by We have the same evidence as Theorem 5: (a) The function ρ is a convex modular on Υ.
Let us define a mapping χ : Υ ρ→Υ ρ for all v ∈ E and for p ∈ Υ ρ by Let p, q ∈ Υ ρ and λ ∈ [0, 1] with ρ(p − q) < λ (λ is an arbitrary constant). Consequently, for all v ∈ E. We obtain by assumption and the above inequality that which proves that χ is a ρ-contraction.
We will now show that χ has a bounded orbit at φ. Setting (v 1 , v 2 , v 3 , v 4 ) by (v, 0, 0, 0) in Equation (13), we obtain It follows by replacing v with v b in Equation (21) that Again, replacing v by v b in Equation (22), we obtain Considering Equations (21)- (23), for all v ∈ E, we obtain We can easily determine by induction that (24) gives for all v ∈ E, and all n, m ∈ N. We can conclude that by defining ρ, This means that the χ orbit is limited to φ. The sequence {χ n φ} ρ-converges to Q 4 ∈ Υ ρ from Theorem 1.5 in [31].
We have from the ρ-contractivity of χ that Letting n → ∞ together with Fatou property, we have Therefore, the function Q 4 is a fixed point of χ. (13), we obtain Passing to the limit l → ∞, we obtain Therefore, Q 4 is quartic from Theorem 7. Using the inequality (24), we obtain the inequality (20).
It only remains to show the uniqueness of Q_4. For this, consider another QM Q_4 : E → F_ρ which satisfies the inequality (14). Then, Q_4 is a fixed point of χ. So, we write Corollary 8. Let b ≥ 3 be an integer and also let ψ : E^4 → [0, +∞) be a mapping such that, for all v_i ∈ E; i = 1, 2, 3, 4, with 0 < L < 1. If a mapping φ : E → F is even with φ(0) = 0 and satisfies the inequality (7), then there is a unique QM Q_4 : E → F satisfying i=1 v_i p and taking L = b^(4−p) in Corollary 8, we readily have the stability results for the sum of norms as follows: where p (p > 4) and α are constants.
Illustrative Examples
In this section, we investigate a suitable example to verify that the stability of the quartic FE (4) fails for a singular case. Following the example of Gajda (see [37]), we examine the following counter-example, which proves the instability under the particular conditions b = 3 and α = 4 in Corollaries 3 and 5 for Equation (4).
Remark 4.
If a mapping φ : R → E satisfies the functional Equation (4), then (C1) φ(m^(c/4) v) = m^c φ(v), for all v ∈ R, m ∈ Q and c ∈ Z. Suppose that the function φ defined in Equation (25) satisfies the stated bound for all v_1, v_2, v_3, v_4 ∈ R. We here obtain that there does not exist a QM Q : R → R satisfying the corresponding inequality for all v ∈ R, where λ and δ are constants.
Suppose that the function φ defined in Equation (29)
Conclusions and Discussion
Many mathematicians have obtained stability results for various kinds of additive, quadratic, and cubic functional equations in various spaces. In our investigation, we first defined a new kind of quartic FE in the first section of this paper and obtained the general solution of our newly defined quartic FE. Additionally, we explored the stability results of this quartic FE in the setting of modular spaces using Hyers' technique, taking into account three cases: without utilising the Fatou property, without using the ∆_b-condition, and without using either the ∆_b-condition or the Fatou property. Moreover, by taking into account the Fatou property and a fixed-point approach, we established some stability results for our quartic FE in the framework of modular spaces. In addition, an appropriate counter-example is provided to demonstrate the non-stability of the singular case.
It is worth mentioning that one can further determine the stability results of this quartic FE in various frameworks, namely, quasi-β-normed spaces, fuzzy normed spaces, non-Archimedean spaces, random normed spaces, probabilistic normed spaces, intuitionistic fuzzy normed spaces and so on. The findings and techniques used in this study might be valuable to other researchers who want to conduct further work in this area. | 4,904.2 | 2022-06-06T00:00:00.000 | [
"Mathematics"
] |
Perivascular Adipose Tissue-Derived Adiponectin Inhibits Collar-Induced Carotid Atherosclerosis by Promoting Macrophage Autophagy
Objectives Adiponectin (APN) secreted from perivascular adipose tissue (PVAT) is one of the important anti-inflammatory adipokines that inhibit the development of atherosclerosis, but the underlying mechanism has not been clarified. In this study, we aimed to elucidate how APN regulates plaque formation in atherosclerosis. Methods and Results To assess the role of APN secreted by PVAT in atherosclerosis progression, we performed PVAT transplantation experiments on a carotid artery atherosclerosis model: ApoE knockout (ApoE−/−) mice with a perivascular collar placement around the left carotid artery in combination with high-fat diet feeding. Our results show that the ApoE−/− mice with PVAT derived from APN knockout (APN−/−) mice exhibited accelerated plaque volume formation compared to ApoE−/− mice transplanted with wild-type littermate tissue. Moreover, autophagy in macrophages was significantly attenuated in ApoE−/− mice transplanted with APN−/− mouse-derived PVAT compared to controls. Furthermore, in vitro studies indicate that APN treatment increased autophagy in primary macrophages, as evidenced by increased LC3-I processing and Beclin1 expression, which was accompanied by down-regulation of p62. Moreover, our results demonstrate that APN promotes macrophage autophagy via suppressing the Akt/FOXO3a signaling pathway. Conclusions Our results indicate that PVAT-secreted APN suppresses plaque formation by inducing macrophage autophagy.
Introduction
Atherosclerosis is a complex chronic inflammatory and metabolic disease, and a major contributor to morbidity and mortality worldwide. In addition to lipid dysfunction and arterial lipid accumulation, the immune-inflammatory response has been increasingly recognized as an essential factor in atherogenesis [1,2]. Macrophages accumulate in large numbers in atherosclerotic plaques and play crucial roles in atherosclerotic immune responses [3]. Emerging evidence suggests that macrophage autophagy exerts a protective role in atherosclerosis [4,5], which has revealed a novel pathway to therapeutically suppress atherosclerosis progression [6,7].
Several autophagy triggers are present in the atherosclerotic plaque, such as inflammatory mediators, ROS production and accumulation of oxidized LDL [8,9]. A recent study has reported that adiponectin (APN) can modulate the activation of autophagy in vitro and in vivo [10,11]. Adiponectin is one of several important, metabolically active cytokines secreted from adipose tissue; it exerts biological effects on multiple types of cells and has anti-inflammatory and anti-atherosclerotic properties [12]. Previous studies have demonstrated that APN inhibits atherosclerosis by suppressing atherogenic processes within the blood vessel wall [13,14]. However, the precise mechanism by which APN regulates anti-atherosclerotic responses and macrophage function in atherosclerosis remains to be revealed.
In addition to visceral adipose tissue, perivascular adipose tissue (PVAT) secretes a great deal of APN that can act in both autocrine and paracrine fashion [15]. Although PVAT can support inflammation during atherosclerosis through macrophage accumulation, recent reports reveal that PVAT also has anti-atherosclerotic properties related to its abilities to secrete anti-inflammatory adipokines [16,17]. These paradoxical findings suggest that differences in either the type or level of a particular PVAT-derived adipokine may determine its role in atherosclerosis development. However, the molecular mechanisms maintaining that balance have not been fully identified.
In the present study, we investigated the role of PVAT-derived APN in collar-induced carotid atherosclerosis and the molecular mechanism involved in the regulation of macrophage autophagy. Our results indicate that PVAT-derived APN deficiency increased plaque volume formation in ApoE −/− mice when compared with wild-type control with sufficient PVAT-derived APN. This was associated with decreased autophagy in vascular macrophages. These results suggest that PVAT derived-APN contributes to inhibition of plaque formation by inducing macrophage autophagy.
Animal model and adipose tissue transplantation
Male APN−/− mice were purchased from the Jackson Laboratory. Male ApoE−/− mice and wild-type mice were purchased from Peking University (Beijing, China). All mice were 8 weeks old and on a C57BL/6J background.
Mice underwent perivascular collar placement after deep anesthesia with an intraperitoneal injection of pentobarbital sodium. As described previously [18], a constrictive perivascular silica collar (0.3 mm in internal diameter and 3 mm in length) was placed around the left carotid artery. Animals were fed for 12 weeks and kept on a 12 h light/12 h dark cycle. All mice received a high-fat diet (D12492 from Vital River Laboratory) throughout the experiment.
In addition, to analyze the effects of APN secreted by PVAT on atherosclerotic plaque disruption, we administered lipopolysaccharide (LPS) to ApoE−/− mice after collar placement [19]. Four weeks after surgery, mice in the LPS groups were injected intraperitoneally with LPS (1 mg/kg, Sigma) twice a week for 8 weeks.
The adipose tissue transplantation was performed as described previously [20]. The atherosclerosis model was established on the left carotid artery with or without perivascular adipose tissue transplantation. Ten milligrams of perivascular adipose tissue was harvested from APN−/− mice and their wild-type counterparts, respectively. The adipose tissue was implanted around the site of the carotid artery using 9-0 nylon sutures after removal of the endogenous PVAT. The mice transplanted with wild-type and APN−/− adipose tissue were named (WT)PVAT and (KO)PVAT, respectively. All procedures were approved by the Animal Care and Use Committee of Capital Medical University.
Hematoxylin and eosin (H&E) staining
Mouse hearts were perfused with saline. The carotid artery was isolated and fixed with 4% paraformaldehyde for 30 min, embedded in paraffin and cut into 5 μm serial sections. In brief, corresponding sections were stained with hematoxylin for 4 min. Subsequently, the sections were washed with 1% hydrochloric acid alcohol differentiation liquid for 5 s and washed with running water for 5 min. Sections were then stained with eosin for 4 min. Images were captured by Nikon Eclipse TE2000-S microscope (Nikon, Tokyo, Japan) and analyzed by Image Pro Plus 3.1 (Nikon).
Immunofluorescence
The carotid arteries were embedded in OCT embedding medium and cut into 7 μm serial sections as described. Sections were blocked using 5% fetal bovine serum for an hour. The sections were then stained overnight at 4°C with primary antibodies (1:500), or with IgG instead of the primary antibody as a negative control. After incubation with FITC- or tetramethylrhodamine isothiocyanate-conjugated secondary antibodies (Jackson ImmunoResearch Laboratories) (1:100) at room temperature for 1 hour, sections were observed with a Nikon Eclipse TE2000-S microscope (Nikon, Tokyo, Japan) and analyzed with Image Pro Plus 3.1 (Nikon).
Western blot
Proteins were extracted from three carotid arteries. Western blot analysis was performed as described [23]. In brief, 50 μg protein lysates were separated by 15% SDS-PAGE and transferred to nitrocellulose membranes (Millipore). The blots were incubated with the primary antibodies (1:1000) at 4°C overnight, and then with infrared dye-conjugated secondary antibodies (1:10,000; Rockland Immunochemicals) for 1 h at 37°C. The images were quantified using the Odyssey infrared imaging system (LI-COR Biosciences, Lincoln, NE, USA).
Statistical analysis
All data are presented as mean ± SEM. Differences between groups were analyzed using Student's t test and the Newman-Keuls multiple comparison test in GraphPad Prism (GraphPad Software). P < 0.05 was considered statistically significant.
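As an illustration only (the group labels and values below are hypothetical, not the study's data), a two-group comparison of this kind could be reproduced with a sketch like the following.

import numpy as np
from scipy import stats

# hypothetical plaque-area measurements (arbitrary units) for the two PVAT transplant groups
wt_pvat = np.array([0.82, 0.75, 0.90, 0.78, 0.85])
ko_pvat = np.array([1.21, 1.35, 1.18, 1.40, 1.27])

t_stat, p_value = stats.ttest_ind(wt_pvat, ko_pvat)
print(f"mean +/- SEM (WT): {wt_pvat.mean():.2f} +/- {stats.sem(wt_pvat):.2f}")
print(f"mean +/- SEM (KO): {ko_pvat.mean():.2f} +/- {stats.sem(ko_pvat):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}  (significant if p < 0.05)")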
APN deficiency in perivascular adipose tissue aggravated atherosclerosis development in ApoE−/− mice
To examine the effects of APN secreted by perivascular adipose tissue on the development of atherosclerosis, we first performed site-controlled atherogenesis on C57BL/6 wild-type mice and ApoE−/− mice by perivascular collar placement. Twelve weeks after collar insertion, a significant increase in intimal surface area had occurred in the ApoE−/− mice, but the intimal surface area did not rise significantly at the corresponding sites of the C57BL/6 wild-type mice (Fig 1A and 1B). The degree of lumen stenosis was higher at the collar insertion sites of the ApoE−/− mice compared with the surgically treated C57BL/6 wild-type mice (Fig 1C). However, the collar insertion sites did not display a significant increase in intima-media ratio in either group (Fig 1D).
Next, we performed the perivascular adipose tissue transplantation experiments on the ApoE−/− mice after perivascular collar placement. We removed the PVAT around the left carotid artery after perivascular collar placement and transplanted PVAT from APN−/− mice or wild-type counterparts. HE staining revealed that mice transplanted with PVAT from APN−/− mice had aggravated plaque and lumen stenosis compared with mice transplanted with wild-type tissue (Fig 1E and 1G). Intimal surface area measurement showed that the intimal surface area in mice with APN−/− tissue was markedly increased in comparison with mice with wild-type tissue (Fig 1F).
Finally, to analyze the effects of APN secreted by PVAT on atherosclerotic plaque disruption, we administered lipopolysaccharide (LPS) to ApoE−/− mice after collar placement. As shown in Fig 1H and 1I, the disruption rates in the LPS groups were significantly higher than those of controls, but there was no difference in disruption rates between the ApoE−/− mice transplanted with APN−/− tissue and those transplanted with wild-type tissue. Taken together, these results indicate that APN derived from PVAT restricted atherosclerosis development in ApoE−/− mice, but did not affect atherosclerotic plaque disruption.
Deficiency of PVAT-derived APN reduced autophagy in plaque
To investigate the role of autophagy in atherosclerosis, we compared LC3-II levels in the carotid arteries. Western blot revealed that the protein level of LC3-II in the carotid arteries of mice transplanted with APN−/− tissue was lower than in mice transplanted with wild-type tissue (Fig 2A and 2B). To directly observe the difference in autophagy between the two groups, we performed immunofluorescence staining of p62 on the carotid arteries. The immunofluorescence assay showed that APN deficiency increased the accumulation of p62 in the plaques of ApoE−/− mice (Fig 2C and 2D). These results demonstrate that PVAT-derived APN promotes autophagy in the plaque. Since autophagy occurs in both macrophages and smooth muscle cells in the lesions [24], we examined the autophagy response in these cells. As shown in Fig 3A, LC3-I processing was significantly increased in macrophages stimulated by APN. To further ascertain the role of APN in macrophages, we assessed the protein levels of Beclin1 and p62, two other markers of autophagy. Western blot showed that APN treatment increased the levels of Beclin1 in macrophages, which was associated with down-regulation of p62 (Fig 3B). These results demonstrate that APN markedly promotes autophagy in macrophages, but not in smooth muscle cells. To determine how APN modulates autophagy in macrophages, we first assessed the activation of the Akt/FOXO3a pathway, which has been implicated in autophagy [25]. As shown in Fig 4, APN treatment reduced the phosphorylation of Akt and FOXO3a in macrophages, whereas there was no difference in the expression of pan Akt and FOXO3a. To further evaluate the participation of the Akt/FOXO3a pathway, we employed 740Y-P, a direct activator of the Akt pathway [26]. Furthermore, a prior study has demonstrated that APN can promote autophagy via modulation of the PTEN/mTOR pathway [27]. Indeed, APN treatment significantly dampened the phosphorylation of mTOR without affecting pan mTOR levels, and enhanced the expression of PTEN in macrophages. However, these changes in the PTEN/mTOR pathway were not abolished by the addition of 740Y-P. Collectively, these data suggest that the pro-autophagic effects of APN partly depend on deactivation of the Akt/FOXO3a pathway.
Discussion
APN has been recognized as an anti-atherosclerotic and anti-inflammatory protein derived from adipocytes. In the present study, we report that PVAT-derived APN suppresses lesion formation after collar-induced carotid atherosclerosis by increasing macrophage autophagy in ApoE−/− mice. Furthermore, treatment of macrophages with APN enhanced the autophagic response. There is a growing body of evidence highlighting the protective role of APN in cardiovascular diseases [28,29], especially in atherosclerosis [30]. A study demonstrated that deficiency of APN in ApoE−/− mice promotes atherosclerosis and T-lymphocyte accumulation in the atherosclerotic lesions [13]. A prior study indicated that adiponectin abates atherosclerosis by reducing oxidative stress or by increasing cholesterol efflux from macrophages [31,32]. Contrary to the data reported here, it has been previously documented that neither genetic overexpression nor APN knockout had any significant effect on atherosclerosis in high-fat fed ApoE−/− mice [14]. A possible explanation for these differences is the different experimental models. In addition, it is possible that differences in the source of the derived APN may have contributed to the divergent outcomes.
Here, in the ApoE−/− mouse model of accelerated atherosclerosis, with perivascular collar placement around the carotid artery combined with high-fat diet feeding, PVAT-derived APN effectively suppressed collar-induced carotid atherosclerosis. In line with our results, other studies have demonstrated that perivascular adipose tissues play a role in the pathogenesis of atherosclerosis in ApoE−/− mice [33,34]. However, to finally prove our concept, examining the roles of PVAT-derived adiponectin in other models of atherogenesis would be important. This is a limitation of the present study.
Autophagy in atherosclerosis has been extensively investigated, with particular focus on vascular smooth muscle cells (SMCs) and endothelial cells (ECs) [35,36]. The general consensus is that basal autophagy can protect plaque cells against oxidative stress by degrading damaged intracellular material and promoting cell survival [37]. However, accumulating evidence suggests that macrophage autophagy also plays a protective role in advanced atherosclerosis [4,7]. Moreover, complete deficiency of macrophage autophagy increased vascular inflammation and plaque formation, which was associated with elevated plaque macrophage content [3,6]. Consistent with these findings, our present study showed that PVAT-derived APN significantly increased autophagy in vascular macrophages in collar-induced carotid atherosclerosis. Moreover, in vitro experiments indicated that APN induced autophagy in primary macrophages, as evidenced by increased LC3-I processing and Beclin1 expression, which was accompanied by down-regulation of p62. Thus, PVAT-derived APN may act as a key regulator in macrophage activation and the anti-atherosclerotic response.
It has been previously documented that the mTOR signaling pathway negatively regulates autophagy, while Akt activity increases ATP levels and reduces AMPK activity, leading to mTOR activation and thereby inhibiting autophagy [38,39]. A prior study indicated that Akt is an important kinase downstream of the APN pathway and is activated by APN via the APN receptor [11]. In the present study, we observed that treatment of macrophages with APN decreased the phosphorylation of Akt and, more importantly, inhibited the activation of the FOXO3a gene, which is a key regulator of macrophage autophagy [40]. These results indicate that APN stimulates macrophage autophagy through deactivation of the Akt/FOXO3a signaling pathway.
Moreover, data from our current study revealed that APN significantly dampened the phosphorylation of mTOR without affecting pan mTOR levels, and enhanced the expression of PTEN in macrophages. These findings are consistent with the crucial role of PTEN/mTOR in the regulation of autophagy. However, activation of the Akt pathway by 740Y-P did not abolish the changes in the PTEN/mTOR pathway. Therefore, it appears that the PTEN/mTOR signaling pathway may also regulate autophagy at least partially independently of Akt [41]. However, many more experiments will be necessary to identify all the molecular details of this specific pathway.
Besides the regulation of autophagy, Akt is also a key cell survival factor, with reduced Akt activation directly contributing to apoptosis [42]. Here, our findings reveal Akt as a main mediator of the autophagy that is controlled by APN, but we did not exclude other functions of Akt, which could be investigated in future work. It is increasingly clear that the tumor suppressor PTEN is a negative regulator of cell survival [43]. In the present study, activation of the Akt pathway did not affect the expression of PTEN in macrophages after APN treatment. These results suggest that APN may promote apoptosis of macrophages via modulation of the PTEN/mTOR pathway, at least partially independently of Akt. Although autophagy could contribute to cell death under certain experimental settings [44], the balance between apoptosis and FOXO3a-modulated autophagy still needs further study.
This study provides important evidence that PVAT-derived APN exerts profound anti-atherogenic actions to effectively inhibit collar-induced carotid atherosclerosis and increases macrophage autophagy activation in vascular tissue. Furthermore, treatment of macrophages with APN markedly increased autophagy mediated by the Akt/FOXO3a pathway. Our results suggest that APN protects against collar-induced carotid atherosclerosis at least in part through Akt-dependent autophagy activation. The results of the present study may provide a novel therapeutic target against atherosclerosis. | 3,567.2 | 2015-05-28T00:00:00.000 | [
"Biology",
"Medicine"
] |
Sequence-only Based Prediction of β-turn Location and Type Using Collocation of Amino Acid Pairs
Development of accurate β-turn (beta-turn) type prediction methods would contribute towards the prediction of the tertiary protein structure and would provide useful insights/inputs for fold recognition and drug design. Only one existing sequence-only method is available for the prediction of beta-turn types (for types I and II) for entire protein chains, while the proposed method allows for prediction of type I, II, IV, VIII, and non-specific (NS) beta-turns, filling in the gap. The proposed predictor, which is based solely on the protein sequence, is shown to provide similar performance to other sequence-only methods for the prediction of beta-turns and beta-turn types. The main advantage of the proposed method is the simplicity and interpretability of the underlying model. We developed novel sequence-based features that allow identifying beta-turn types and differentiating them from non-beta-turns. The features, which are based on tetrapeptides (entire beta-turns) rather than a window centered over the predicted residues as in the case of recent competing methods, provide a more biologically sound model. They include 12 features based on the collocation of amino acid pairs, focusing on amino acids (Gly, Asp, and Asn) that are known to be predisposed to form beta-turns. At the same time, our model also includes features that are geared towards the exclusion of non-beta-turns, which are based on amino acids known to be strongly detrimental to the formation of beta-turns (Met, Ile, Leu, and Val).
INTRODUCTION
The secondary structure of a protein consists of helices, beta-strands and coils, where the coil region comprises tight turns, bulges and random coil structures [1]. Tight turns are believed to be important structural elements with regard to protein folding and molecular recognition processes between proteins, which has led to interest in mimicking beta-turns for medicine synthesis [2,3]. Tight turns are classified into several types according to the number of residues they span, and a beta-turn is a four-residue reversal in the protein chain that is not in an alpha-helix. While characterization and prediction of the other tight-turn types has attracted some research attention [4][5][6][7], our research focuses on beta-turns. We observe that beta-turns are the most common turn type, and make up, on average, one quarter of all residues in proteins [8]. Formation of beta-turns is also a vital stage during the process of protein folding [3]. Therefore, development of accurate beta-turn prediction methods would be a valuable step towards the overall prediction of the three-dimensional structure of a protein from its amino acid sequence and could provide insights and inputs for fold recognition and drug design.
The beta-turns can be classified into nine different types based on the φ and ψ angles of the two central residues [9]. As a result, prediction of the location of beta-turn types, in contrast to a binary prediction that would identify the location of beta-turns, provides additional, structural, information concerning the φ and ψ angles. A commonly used benchmark dataset of 426 non-homologous protein chains [10], which has been used to rank and test several methods for the prediction of beta-turn types [11,12], reveals that some of these types are infrequent and thus they are commonly combined together [11]. To this end, we focus on the prediction of beta-turn types I, II, IV and VIII, while the remaining types I', II', VIa1, VIa2 and VIb, which only make up 304, 165, 44, 17 and 70 turns, respectively, out of the total 7153 beta-turns in the aforementioned dataset, have been combined into one set referred to as non-specific (NS), which is consistent with [11]. A challenging aspect of beta-turn type prediction is that these turns are not isolated in a chain. Quite the opposite, in fact: Hutchinson and Thornton (1994) report that 58% of beta-turns overlap with another beta-turn, i.e., they share one or more residues with another beta-turn [9].
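For illustration, the Python sketch below assigns a beta-turn type from the φ/ψ angles of the two central residues by nearest canonical values; the canonical angles used are approximate textbook values (an assumption here, not taken from this paper), and the tolerance-based assignment is a simplification of the PROMOTIF criteria.

# approximate canonical (phi, psi) angles in degrees for residues i+1 and i+2
CANONICAL = {
    "I":    ((-60.0, -30.0), (-90.0,    0.0)),
    "I'":   (( 60.0,  30.0), ( 90.0,    0.0)),
    "II":   ((-60.0, 120.0), ( 80.0,    0.0)),
    "II'":  (( 60.0, -120.0), (-80.0,   0.0)),
    "VIII": ((-60.0, -30.0), (-120.0, 120.0)),
}

def angle_diff(a, b):
    # smallest difference between two angles in degrees
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def assign_type(phi_psi_i1, phi_psi_i2, tol=45.0):
    """Return the closest canonical type, or 'IV' if no type is within tol of all four angles."""
    best, best_score = None, float("inf")
    for name, (c1, c2) in CANONICAL.items():
        devs = [angle_diff(x, y) for x, y in zip(phi_psi_i1 + phi_psi_i2, c1 + c2)]
        score = max(devs)
        if score < best_score:
            best, best_score = name, score
    return best if best_score <= tol else "IV"

# example: angles close to canonical type II
print(assign_type((-58.0, 115.0), (85.0, -5.0)))   # -> "II"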
There exist a number of recent works that address the prediction of beta-turn types, which can be divided into two categories: statistical methods and machine learning based methods. Statistical methods utilize probabilities computed using information regarding the preference of individual amino acid types at each position of the beta-turn to form a turn. The most promising of these is COUDES [13], which is based on propensities of individual residues augmented with information coming from multiple sequence alignment. The position-specific score matrix (PSSM), which is calculated with PSI-BLAST [14], was used to weigh propensities for a given residue, so that all the residues present in the multiple alignment at this position are taken into account. Secondary structure information predicted by PSIPRED [15], SSPRO2 [16], and PROF [17], and the flanking residues around the beta-turn tetrapeptide, were utilized to improve the prediction accuracy. The COUDES method uses a window of size 12, with prediction being made on the four central (with respect to the window) residues.
The machine learning methods include BETATURNS [11] and BTPRED [12], which are based on artificial neural networks (ANN), and a hybrid multinomial logistic regression and ANN model [18]. BTPRED encodes the sequence using a large window of 11 residues centered over the predicted residue, together with the secondary structure predicted with PHDsec [19], to perform predictions. The window is used to incorporate the effects of neighboring residues on the formation of beta-turns. BETATURNS is an improved neural network design in which two networks are used. The first network uses the sequence together with the PSSM as the input, and its output is fed, along with the PSIPRED predicted secondary structure of the central residue, into the second network that produces the predictions. BETATURNS employs a window of 9 residues where prediction is made on the central residue. Finally, the multinomial logistic regression model uses a two-stage hybrid approach and considers only beta-turns. The latter method is not used for the prediction of the location of beta-turns, but it allows differentiating different types of beta-turns based on the underlying tetrapeptides, while it does not consider non-beta-turn sequence segments, i.e., it predicts a beta-turn type for a given tetrapeptide that corresponds to a beta-turn.
In comparison with the methods that predict beta-turn types for entire protein sequences outlined above, which include [11][12][13] and which use a significant amount of auxiliary information such as PSSM and predicted secondary structure, it is clear that a method based solely on the protein sequence would be simpler to design and execute. However, this may lead to reduced quality, as only limited information (the sequence) relative to the competition would be used. The motivation for our sequence-based design comes from the work of Chou and colleagues, who found that a support vector machine (SVM) classifier can be used to express the relation between different beta-turn types or non-beta-turns and the underlying tetrapeptides [20]. They observed that the accuracies of the self-consistency test (prediction on the training set) for beta-turn types I, I', II, II', VI and VIII and non-beta-turns are over 97%. This was a follow-up on their previous study in which they verified that the relation between the tetrapeptides and the beta-turn types can be expressed using a probabilistic approach [21]. The authors applied their sequence-coupled model [5,21,22] to perform prediction for a selected set of tetrapeptides, and applied this model to predict beta-turn types for the rubredoxin protein [1]. This is similar to the work done in [18], except that in this case the non-beta-turns were considered in building the predictive model. We also note that only two sequence-only based methods (methods that use as input only the protein sequence and no sequence-derived information such as PSSM or predicted secondary structure) are available for prediction of beta-turn types [1,23]. The method in [23] addresses prediction of only type I and type II beta-turns, while the method in [1] predicts beta-turn types I, I', II, II', VI, and VIII, which are different from the targets addressed by newer prediction methods [11]. This provides additional motivation for the development of the proposed method. The consideration and employment of the window is a fundamental difference in our approach relative to those listed above. The BTPRED, BETATURNS, and COUDES methods predict the beta-turn type of individual residues (using a sliding window centered over the predicted residue), whereas our method predicts entire tetrapeptides as either a given beta-turn type or a non-beta-turn. Unlike the other methods, this results in features that are more biologically relevant and that are better suited to describing full beta-turns vs. non-beta-turns, as opposed to simply identifying residues that are apt to be in beta-turns.
Our intention is to develop a method with performance similar to that of the aforementioned methods, with the main goal of creating a simple predictive model that allows derivation of sequence-based factors which facilitate differentiation between different beta-turn types and non-beta-turns.
Datasets
Three nonredundant datasets were used in the course of this study. The first, which was used for feature selection, was prepared in [18] to design a method that differentiates different types of beta-turns (excluding non-beta-turns) and was based on 565 non-homologous protein chains (it will hereafter be referred to as 565). The chains were selected using the PAPIA system [24], contain no chain breaks, have structure determined by X-ray crystallography at 2.0 Å resolution or better, and no two chains have more than 25% sequence identity. The PROMOTIF program was used to assign the beta-turns in the protein chains [25]. The original dataset includes only the tetrapeptides that correspond to all beta-turns in the 565 non-homologous proteins, i.e., it does not include the entire protein chains. We augmented the original beta-turn tetrapeptides with a randomly chosen set of tetrapeptides that correspond to non-beta-turns, assuming that the number of the selected non-beta-turns should approximately equal the number of the most frequent beta-turn type. More specifically, the 565 dataset includes 4115, 1442, 4128, 1100, and 1028 beta-turns of type I, II, IV, VIII, and NS and 4448 non-beta-turns.
The second dataset, used for testing and comparing the prediction method, comprised 426 protein chains and 95,289 residues and was prepared in [26]. This dataset (hereafter referred to as 426) has been widely used to validate and compare beta-turn prediction methods [10,11,13,[26][27][28] and includes chains that are non-redundant at 25% sequence identity and that have been resolved with X-ray crystallography at 2.0 Å resolution or better. Again, the PROMOTIF program [25] was used to assign the beta-turns in the protein chains using the classification scheme proposed by Hutchinson and Thornton (1994) [9]. Every chain in this dataset includes at least one beta-turn. In order to assess the accuracy of the proposed model and to remain consistent with the recent beta-turn prediction literature [10,11,13,[26][27][28], sevenfold cross-validation was employed on dataset 426. The dataset was divided into 6 folds of 61 sequences and 1 fold of 60 sequences. Six of the folds were used to train the model, while the seventh was used to test it, and the process was repeated seven times.
The third dataset, which is used for model parameterization, involves 183 sequences from the 426 dataset.These sequences constitute three of the seven folds of the 426 dataset.Additionally, these 183 sequences were randomly down sampled to 20% of the original residues.This dataset will be referred to as 183.
Quality Indices
To assess the accuracy of the prediction method, as well as for comparison purposes, the standard quality indices of beta-turn prediction literature were employed [10-13, 27, 28].
The percentage of correct predictions for each beta-turn type is defined as

Q_total = 100 * (TP + TN) / (TP + TN + FP + FN)

where TP (true positives) is the number of residues observed and predicted as a given beta-turn type, TN (true negatives) is the number of residues observed and predicted as not the given beta-turn type, FP (false positives) is the number of residues not observed but predicted as a given beta-turn type, and FN (false negatives) is the number of residues observed but not predicted as a given beta-turn type.
When describing accuracy, Q_total tends to overestimate predictive performance due to the high number of true negatives, which underemphasises the false negatives and false positives [12,13,28]. Therefore, it is better to use the Matthews Correlation Coefficient (MCC) [29], which takes both underprediction and overprediction into account:

MCC = (TP * TN - FP * FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))

Underprediction can be evaluated using Q_obs, which is the fraction of observed given beta-turn types that are predicted correctly:

Q_obs = 100 * TP / (TP + FN)

Finally, overprediction is evaluated using Q_pred, which is the fraction of predicted given beta-turn types that are correct:

Q_pred = 100 * TP / (TP + FP)

While these quality indices are used consistently, they are applied in two different ways: first, comparing the predicted beta-turn type to the actual beta-turn type on a residue-by-residue basis, denoted Q_total^res, Q_obs^res and Q_pred^res, and second, comparing the predicted beta-turn type to the actual beta-turn type on a turn-by-turn basis, denoted Q_total^turn, Q_obs^turn and Q_pred^turn. In the latter case, the unit of the prediction is a tetrapeptide.
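To make the indices above concrete, the following is a minimal sketch of how Q_total, MCC, Q_obs and Q_pred could be computed from confusion-matrix counts. Python is an assumption here (the paper reports using Weka); the example counts are hypothetical.

```python
import math

def quality_indices(tp, tn, fp, fn):
    """Compute Q_total, MCC, Q_obs and Q_pred from confusion-matrix counts."""
    total = tp + tn + fp + fn
    q_total = 100.0 * (tp + tn) / total
    # MCC denominator; MCC is taken as 0 when any marginal count is 0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    q_obs = 100.0 * tp / (tp + fn) if (tp + fn) > 0 else 0.0   # measures underprediction
    q_pred = 100.0 * tp / (tp + fp) if (tp + fp) > 0 else 0.0  # measures overprediction
    return q_total, mcc, q_obs, q_pred

# Example with hypothetical counts for one beta-turn type
print(quality_indices(tp=320, tn=9000, fp=1200, fn=480))
```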
Model Overview
Fig. (1) compares the proposed prediction system and the existing methods [11][12][13]. The competing methods use a window centered on the predicted residue as the input information, which is processed with PSI-BLAST and a secondary structure prediction method. These inputs are converted into features and next fed into a classifier that predicts a given beta-turn type / non-beta-turn for a single residue. In our design, the processing unit is the tetrapeptide, i.e., four adjacent amino acids, that forms a given beta-turn type or a non-beta-turn. Thus, the sequences in each dataset were broken down into four-residue fragments via a sliding window. Next, these segments are represented using a feature set that consists of three vectors, and each segment is tagged by turn type if the start of the window was also the start of a turn. The resulting vector is passed to the classifier and the prediction is applied to all four residues in the window. As each residue is predicted four times as part of four separate possible turns, in the case of overlap between a turn and a non-turn prediction, the turn prediction overrides the non-turn prediction. We believe that this design results in features that are more biologically relevant, as they describe full beta-turns (and non-beta-turns) as opposed to describing information concerning a window centered over the predicted residue.
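As a rough illustration of this processing pipeline, the sketch below (hypothetical Python, not the authors' code) slides a four-residue window along a chain, collects one prediction per tetrapeptide from an arbitrary predict_fragment function, and aggregates the four overlapping predictions per residue using the rule that a turn prediction overrides a non-turn prediction.

```python
def tetrapeptides(sequence):
    """Yield (start_index, four-residue fragment) pairs via a sliding window."""
    for i in range(len(sequence) - 3):
        yield i, sequence[i:i + 4]

def aggregate_residue_labels(sequence, predict_fragment):
    """Assign a label to every residue from overlapping tetrapeptide predictions.

    predict_fragment(fragment) is assumed to return a beta-turn type
    (e.g. 'I', 'II', 'IV', 'VIII', 'NS') or 'non-turn'. A turn prediction
    overrides a non-turn prediction for a shared residue; overlapping turn
    predictions of different types are resolved here by the last window seen,
    which is a simplification.
    """
    labels = ['non-turn'] * len(sequence)
    for start, frag in tetrapeptides(sequence):
        pred = predict_fragment(frag)
        if pred != 'non-turn':
            for j in range(start, start + 4):
                labels[j] = pred
    return labels
```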
Composition Vector
The composition vector is a simple sequence representation which is widely used in prediction of various aspects of protein structure [30][31][32][33][34][35][36][37]. The vector is indexed by the twenty amino acids, alphabetically ordered, and stores the number of occurrences of each amino acid in the sequence window (in our case the tetrapeptide). With 20 amino acids, this results in 20 corresponding features.
Positional Vectors
The positional vectors are similar to the compositional vector in that they are a simple sequence representation composed of the twenty amino acids, alphabetically ordered, and identify the presence/absence of a given amino acid in a given position in the tetrapeptide.As the window size considered includes 4 amino acids, this results in 80 corresponding features.
Collocation Vector
Finally, a relatively new representation based on the frequency of collocated amino acid pairs [38][39][40][41] in the sequence window was applied. Our motivation is that the composition and positional vectors are insufficient to represent the sequence and the interactions between local amino acid pairs. As interactions between short-range amino acid pairs, not just dipeptides, have the potential to impact beta-turn formation [9,10], the representation considers collocated pairs of amino acids which are separated by p amino acids. Collocated pairs for p = 0, 1 and 2 are considered, where p = 0 pairs reduce to dipeptides and p = 1 and 2 can be understood as dipeptides with gaps. For each value of p, there are 400 corresponding features that store the number of occurrences of the collocated pairs. We emphasize that this feature set had not previously been utilized for prediction of beta-turn types.
As a result, we consider a feature set which includes a total of 400 * 3 + 80 + 20 = 1300 features.
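A minimal sketch of how the 1300-dimensional representation of a tetrapeptide could be assembled is given below (hypothetical Python; the exact ordering of the features is an assumption, since the paper only specifies alphabetical ordering of the amino acids).

```python
AMINO_ACIDS = sorted("ACDEFGHIKLMNPQRSTVWY")  # 20 amino acids, alphabetical order

def composition_vector(tetra):
    """20 features: number of occurrences of each amino acid in the tetrapeptide."""
    return [tetra.count(aa) for aa in AMINO_ACIDS]

def positional_vectors(tetra):
    """80 features: presence/absence of each amino acid at each of the 4 positions."""
    return [1 if tetra[pos] == aa else 0 for pos in range(4) for aa in AMINO_ACIDS]

def collocation_vector(tetra, gaps=(0, 1, 2)):
    """3 * 400 features: counts of collocated pairs separated by p = 0, 1, 2 residues."""
    feats = []
    for p in gaps:
        counts = {(a, b): 0 for a in AMINO_ACIDS for b in AMINO_ACIDS}
        for i in range(len(tetra) - p - 1):
            counts[(tetra[i], tetra[i + p + 1])] += 1
        feats.extend(counts[(a, b)] for a in AMINO_ACIDS for b in AMINO_ACIDS)
    return feats

def features(tetra):
    """Full 20 + 80 + 1200 = 1300 feature representation of a tetrapeptide."""
    return composition_vector(tetra) + positional_vectors(tetra) + collocation_vector(tetra)

assert len(features("DGNG")) == 1300
```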
SVM Classifier
We employed a support vector machine (SVM) classifier [42], which was previously applied to beta-turn prediction [43,44] and was shown to provide promising results in identifying beta-turn types [20]. Given a training set of data point pairs (x_i, c_i), i = 1, 2, ..., n, where x_i denotes the feature vector, c_i ∈ {-1, 1} denotes the binary class label and n is the number of training data points, finding the optimal SVM is achieved by solving

min over w, b, ξ of (1/2)||w||^2 + C Σ_{i=1}^{n} ξ_i, subject to c_i (w · z_i − b) ≥ 1 − ξ_i and ξ_i ≥ 0,

where w is a vector perpendicular to the hyperplane w · z − b = 0 that separates the two classes, C is a user-defined complexity constant, ξ_i are slack variables that measure the degree of misclassification of x_i for a given hyperplane, b is an offset that defines the size of the margin that separates the two classes, and z = φ(x), where k(x, x') = φ(x) · φ(x') is a user-defined kernel function.
The SVM classifier was trained using Platt's sequential minimal optimization algorithm [45], which was further optimized by Keerthi and colleagues [46]. The prediction that includes multiple types of beta-turns and non-beta-turns is solved using pairwise binary classification, namely, a separate classifier is built for each pair of classes (beta-turn types and non-beta-turns). We used the RBF kernel, k(x, x') = exp(−γ ||x − x'||^2), and performed parameterization (selection of the value of the complexity constant C and the RBF kernel width γ) based on 3-fold cross-validation on dataset 183. The final classifier uses C = 3. The classification algorithm and feature selection algorithms used to develop and compare the proposed method were implemented in Weka [47].
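The classifiers in the paper were trained in Weka with SMO; the fragment below is only an analogous sketch using scikit-learn (an assumption, not the authors' implementation) with an RBF kernel, the reported C = 3, and one-vs-one pairwise classification for the multi-class problem.

```python
from sklearn.svm import SVC

# X: array of shape (n_samples, n_features) with the tetrapeptide features
# y: class labels such as 'I', 'II', 'IV', 'VIII', 'NS', 'non-turn'
def train_svm(X, y, C=3.0, gamma=0.3):
    """Train an RBF-kernel SVM; SVC performs one-vs-one (pairwise) classification
    internally for multi-class problems. gamma=0.3 mirrors the value reported later
    for the 8%NT setting; it is an assumption that this maps directly to Weka's width.
    """
    clf = SVC(C=C, kernel="rbf", gamma=gamma)
    clf.fit(X, y)
    return clf
```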
Feature Selection
As the proposed representation includes a relatively large number of features, three feature selection methods were employed in tandem to reduce the dimensionality and potentially improve the prediction: an Information Gain based method (IG) [48]; a Chi-Squared method (CHI) [49]; and the Relief algorithm (REL) [50]. We used three different methods in order to reduce the bias introduced by each of the methods. In these algorithms, each feature was ranked based on its merit (i.e., the information gain in IG, the value of the chi-squared statistic in CHI and the weights in REL), and the features were then sorted by their average rank across the three algorithms. The measurement of the merit for the three algorithms is defined below.
Information gain (IG) measures the decrease in entropy when a given feature is used to group the values of another (class) feature. The entropy of a feature X is defined as

H(X) = − Σ_i P(x_i) log2 P(x_i)

where {x_i} is the set of values of X and P(x_i) is the prior probability of x_i. The conditional entropy of X, given another feature Y (in our case the beta-turn type or non-beta-turn), is defined as

H(X|Y) = − Σ_j P(y_j) Σ_i P(x_i|y_j) log2 P(x_i|y_j)

where P(x_i|y_j) is the posterior probability of X given the value y_j of Y. The amount by which the entropy of X decreases reflects additional information about X provided by Y and is called information gain:

IG(X|Y) = H(X) − H(X|Y)

According to this measure, Y has a stronger correlation with X than with Z if IG(X|Y) > IG(Z|Y).
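A small sketch of the information-gain computation for a single (discretized) feature against the class labels is shown below (hypothetical Python; feature discretization and tie handling are not specified in the paper).

```python
from collections import Counter
import math

def entropy(values):
    """Shannon entropy H(X) of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def information_gain(feature_values, class_values):
    """IG(X|Y) = H(X) - H(X|Y), with X the feature and Y the class labels."""
    n = len(feature_values)
    h_x = entropy(feature_values)
    h_x_given_y = 0.0
    for y, count in Counter(class_values).items():
        # entropy of the feature restricted to instances of class y
        subset = [x for x, c in zip(feature_values, class_values) if c == y]
        h_x_given_y += (count / n) * entropy(subset)
    return h_x - h_x_given_y
```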
The Chi-Squared statistic (CHI) is a statistical test that measures divergence from the expected distribution, assuming that the occurrence of a given feature is independent of the class value. Let X be a feature with m = 6 possible outcomes x_1, x_2, ..., x_m, which correspond to the type I, II, IV, VIII, and NS beta-turns as well as non-beta-turns, with the probability of each outcome P(X = x_i) = p_i. The Pearson chi-squared statistic is defined as

χ^2 = Σ_{i=1}^{m} (n_i − n p_i)^2 / (n p_i)

where n is the total number of instances and n_i is the number of instances that result in the outcome x_i. A feature that gives a higher value of χ^2 receives a better (lower) rank.
The Relief algorithm (REL) is based on a feature weighting approach, which estimates features according to their ability to distinguish between similar instances. REL searches for the two nearest neighbors of each instance: one from the same class (the nearest hit) and one from any other class (the nearest miss). The weights are calculated as follows: (1) given a dataset D of instances (X_n, y_n), where X_n is the feature vector, y_n is the class label, and N is the number of instances, set w_i = 0 for 1 ≤ i ≤ I, where I is the number of features and T is the number of iterations; (2) for t = 1, ..., T, randomly select an instance x from D, find the nearest hit NH(x) and the nearest miss NM(x) of x, and update each weight w_i according to how strongly the i-th feature separates x from NH(x) and from NM(x).

Using the average rank generated by the three methods outlined above, an SVM model using an RBF kernel with default parameters C = 1.0 and γ = 0.1 was used with 9-fold cross-validation on dataset 565 in order to select a subset of the ranked features. We note that the same cross-validation was used in [18]. We selected the top-ranked features in increments of 10 and computed

Accuracy_turn = 100 * Σ_i TP_i / total

where i denotes a given prediction outcome (a given beta-turn type or non-beta-turns), TP_i denotes the number of true positive predictions for the i-th outcome, and total denotes the total number of tetrapeptides in the dataset. Accuracy_turn, which quantifies the aggregated (over all prediction outcomes) quality of prediction, was computed over the 9 cross-validated folds, see Fig. (2).
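A minimal sketch of the average-rank combination of the three selection methods is given below (hypothetical Python; it assumes the per-method merit scores have already been computed, e.g. information-gain values, chi-squared statistics and Relief weights).

```python
import numpy as np

def average_rank(merit_lists):
    """Combine several merit-score arrays (higher = better) into an average rank per feature.

    merit_lists: sequence of 1-D arrays of merit scores over the same features.
    Returns feature indices sorted from best (lowest average rank) to worst.
    """
    merits = np.asarray(merit_lists)                          # shape: (n_methods, n_features)
    # rank 1 = highest merit within each method
    ranks = np.argsort(np.argsort(-merits, axis=1), axis=1) + 1
    avg = ranks.mean(axis=0)                                  # average rank across methods
    return np.argsort(avg)                                    # best features first

# Example usage (names are assumptions):
# top50 = average_rank([ig_scores, chi2_scores, relief_weights])[:50]
```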
We observe that the 50 highest average-ranked features give high Accuracy_turn values relative to the number of used features. In considering the trade-off between Accuracy_turn and the number of features selected, we attempted to minimize the feature count in an endeavor to be able to better explain them and to obtain a less complex classification model. The selection of 50 features allows for a large improvement in Accuracy_turn, i.e., 1.4%, when compared with using 40 features, while the subsequent improvements (when using more features) are relatively small when compared with the additional number of employed features. Although the highest Accuracy_turn = 46.2% was obtained with 250 features, this is only 1.3% higher than the Accuracy_turn = 44.9% obtained with 50 features, which corresponds to a reduction of 200 features.
SVM Parameterization
The 50 most relevant features, as determined by the average ranking of the three feature selection methods and reduced with an SVM model on dataset 565, were then used with dataset 183 to parameterize the SVM classifier. This reduced dataset was used since parameterization is computationally expensive. We apply 3-fold cross-validation, as the 183 dataset corresponds to three out of the seven folds of dataset 426. Classifier parameterization was done in a greedy fashion using three phases:
1. The extent of downsampling of the non-beta-turns was estimated to force the model to predict beta-turn types. This step is necessary due to the highly skewed nature of the dataset. More specifically, dataset 426 includes 72064 non-beta-turn residues, which corresponds to 75.6% of all residues. In contrast, the fractions of residues that constitute type I, II, IV, VIII, and NS beta-turns equal 9.5%, 3.8%, 10%, 2.8%, and 2%, respectively. Note that beta-turns of different types may overlap and thus the fractions do not sum to 100%. The downsampling was performed at random and was applied to the training set, while the original (unsampled) test set was used for the prediction; a sketch of this sampling is given below, after the parameterization discussion.
2. Parameter C was optimized using the downsampled training sets. In this case, the default value of γ = 0.1 was assumed and we varied the value of C to optimize the prediction performance.
3. Finally, parameter γ was optimized using the downsampled training sets and the optimized value of C.

Although there is no truly optimal configuration (there is no common optimum for both quality indices), two different downsamplings of the non-beta-turns were selected. First, 3% of the non-beta-turns were kept (3%NT), as this resulted in the number of non-beta-turns being approximately the same size as the largest beta-turn type, type IV. Additionally, 8% of the non-beta-turns were randomly selected (8%NT), as this resulted in the closest number of predicted beta-turns when compared with the actual count of beta-turns in the 183 dataset. As discussed with Fig. (3), with 3%NT the method overpredicts beta-turns at the expense of underpredicting non-beta-turns. At the same time, 8%NT is associated with the best trade-off between the prediction of beta-turns and non-beta-turns, i.e., the corresponding Q_pred^turn and Q_obs^turn lines cross at this point. Therefore, in the case of our application, the 8%NT downsampling is considered optimal.
In the second and third parameterization steps, C was optimized for both 3%NT and 8%NT and found to be 3.0 in both cases. Then, γ was optimized and found to be 0.15 for 3%NT and 0.3 for 8%NT.
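As referenced in step 1 above, the following is a minimal sketch of the random downsampling of non-beta-turn training examples (hypothetical Python; the paper applies the sampling to the training folds only).

```python
import random

def downsample_non_turns(examples, keep_fraction=0.08, seed=0):
    """Keep all beta-turn examples and a random keep_fraction of non-beta-turn examples.

    examples is a list of (feature_vector, label) pairs; the label 'non-turn' marks
    non-beta-turn tetrapeptides. keep_fraction=0.08 corresponds to the 8%NT setting,
    keep_fraction=0.03 to 3%NT.
    """
    rng = random.Random(seed)
    turns = [ex for ex in examples if ex[1] != "non-turn"]
    non_turns = [ex for ex in examples if ex[1] == "non-turn"]
    kept = rng.sample(non_turns, int(round(keep_fraction * len(non_turns))))
    return turns + kept
```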
Analysis of the Proposed Prediction Model
Of the 50 features selected, 32 were collocated pair features, 15 were positional vector features, and 3 were composition vector features. Of the 32 collocated pair features, 30 included Gly (G) at one of the two positions, see Fig. (4). According to Hutchinson and Thornton [9], Gly (G) has the highest potential of any residue to form a beta-turn. Gly is also characterized by the highest potential to form a beta-turn when it occupies positions 3 (position i+2 in a tetrapeptide with starting position i) and 4 (i+3). Additionally, Gly at position 3 of a type II beta-turn is experimentally observed to occur at least four times as often as any other amino acid [51]. Hence, Gly is present more often as the second residue in the collocated pairs when compared with its occurrence as the first residue in the pair. This is particularly transparent when considering that Gly is the first residue in collocated pairs with p = 0 (dipeptides), while larger gap sizes are observed when it constitutes the second residue in the pair. Among the residues that are involved in 3 or more collocated pairs, Asp (D) and Asn (N) are characterized by the highest potential to form beta-turns at positions 1 and 3, Pro (P) by the highest potential at position 2, and Gly (G) by the highest potential at positions 3 and 4, as presented in [9,26].
The novel features considered in this work include 12 collocated pairs with p ≥ 1: ten pairs with p = 1 and two with p = 2. The two pairs with p = 2 are DxxG and NxxG, and we note that both of them are based on amino acids that are known to be predisposed to form beta-turns [9,26]. The only collocated pair that does not include Gly is DxN, which incorporates the same amino acids as the above two pairs with p = 2. We emphasize that Gly (G), Asp (D), and Asn (N) are the three amino acids that have the highest potential to form a beta-turn, and that Asp, Asn, and Gly also have the highest positional potentials, as discussed above [9,26].
Fig. (4). Selected collocation vector features. The rows show the first amino acid and the columns show the second amino acid in the pair. Values in cells show the corresponding p value of the selected collocated pair, while empty cells show features that were not selected.
We also observe that many of the selected pairs formed with Gly involve hydrophilic residues. More specifically, a total of 13 pairs involve Asn (N), Asp (D), Glu (E), Lys (K), and Gln (Q). In contrast, only 6 pairs are formed with hydrophobic residues, which include Ile (I), Leu (L), Phe (F), and Val (V). This is again consistent with [12], where the authors show that beta-turns tend to be found at the solvent-exposed surface, which explains the prevalence of hydrophilic residues. In Fig. (5), the features selected from the composition and positional vectors are summarized. Most notably, Asp (D), Gly (G), Pro (P), and Asn (N) make up 14 of the 18 features. These four amino acids also have the highest overall potential to form beta-turns according to Hutchinson and Thornton [9] and Guruprasad and Rajkumar [26]. This selection is also motivated by the fact that amino acids with short and polar side chains, e.g., Ser (S), Asp (D), and Asn (N), are preferred at position 3 of a type I beta-turn [51].
According to [23], type I beta-turns favor Asp (D), Asn (N), Ser (S) and Cys (C) at position 1, Asp (D), Ser (S), Thr (T) and Pro (P) at position 2, Asp (D), Ser (S), Asn (N) and Arg (R) at position 3, and Gly (G), Trp (W) and Met (M) at position 4. At the same time, type II beta-turns prefer Pro (P) at position 2, Gly (G) and Asn (N) at position 3, and Gln (Q) and Arg (R) at position 4. These preferences have been explored statistically and explained by specific side-chain interactions observed within the X-ray structures [23]. They also motivate the selection of 9 out of the 15 positional vector features:
- the selection of Asp (D) and Asn (N) at position 1 is explained by their abundance in type I beta-turns,
- the selection of Pro (P) at position 2 is motivated by its abundance in type I and II beta-turns,
- the selection of Asp (D) and Asn (N) at position 3 is associated with their abundance in type I beta-turns, and the selection of Gly (G) and Asn (N) by their abundance in type II beta-turns,
- the selection of Gly (G) at position 4 is explained by its abundance in type I and II beta-turns.
According to [9,26], the residues with the lowest potential to form beta-turns include Met (M), Ile (I), Leu (L), and Val (V). These residues are shown to be strongly detrimental to the formation of beta-turns when in position 3 (i+2), which is consistent with their selection shown in Fig. (5). This shows that our method uses not only features that allow identifying particular beta-turn types but also features that can identify non-beta-turns.
Table 1 estimates the contribution of each of the three feature sets, i.e., the composition vector, the positional vectors, and the collocation vector, to the prediction of beta-turn types and compares these predictions against the results obtained when using the complete set of 50 features. The best results are obtained with the use of the positional features. The second best set is based on the collocation features, while the composition vector features contribute very little. We observe that very few predictions are made when using only the composition vector features, i.e., only about 250 residues were predicted as type I turns and 36 as type IV (with corresponding Q_pred^res equal to 26% and 14%, respectively), and no residues were predicted to assume the remaining turn types. This poor result is expected due to the very low number of features, i.e., 3, in this set. We note that the high Q_total^res values are due to the large number of true negative predictions. On the other hand, both the collocation vector and the positional vector features strongly contribute to the prediction. Although predictions with the positional vector features have the highest MCC values, i.e., overall they are better than the predictions with the other sets of features, the collocation features are characterized by higher Q_pred^res, which indicates that they generate fewer false positives when compared with the number of true positives, i.e., they are more selective than the positional features. This is again expected, since the collocation features are based on information about two positions in the beta-turn tetrapeptide, while the positional vector features are associated with only one position.

Fig. (5). Selected composition and positional vector features. Rows show the type of the feature, i.e., CV stands for composition vector and Pi denotes the positional vector for the i-th position in the tetrapeptide, while columns correspond to amino acids.
We observe that for type VIII beta-turns TP = 0 in the case of all individual feature sets, and also when using the complete feature set. This shows that the proposed method is not capable of predicting this type of beta-turn. We hypothesize that this is due to the lack of features that would allow differentiating these turns from the other beta-turn types. As observed in [13], type VIII turns are characterized by high conformational heterogeneity (they cannot be stabilized by a backbone hydrogen bond), and thus the underlying conformational preferences of amino acids are much harder to capture when compared with the other beta-turn types. This is even more difficult when considering that beta-turns of type VIII have to be differentiated from non-beta-turns. We note that competing prediction methods are characterized by similarly poor predictions for this beta-turn type [11][12][13], see Table 2.
Quality of Beta-Turn Type Prediction
Using the parameterized SVM classifiers, the training sets of the 426 dataset were downsampled to 3%NT and 8%NT, and 7-fold cross-validated tests were run on the complete test sets. Although our predictions were run at the level of tetrapeptides, the predictions were aggregated and each residue was tested with respect to prediction of a given beta-turn type and non-beta-turn in order to allow comparison with competing predictors. The resulting quality indices, which were computed for prediction of each beta-turn type, are summarized in Table 2. The proposed method is based on the 8%NT sampling, while the results for the 3%NT sampling are provided for comparative purposes.
Considering the MCC, we observe that all methods are characterized by relatively poor performance on type VIII beta-turns. The proposed method has performance comparable to BETATURNS when predicting NS turns, and to BTPRED and COUDES when predicting type IV beta-turns. However, the proposed method is outperformed when predicting all other beta-turn types. The most similar quality of predictions is provided by BTPRED.
Considering Q_total^res, the results are high and vary little between the different prediction methods due to the high number of true negatives. Therefore, Q_obs^res and Q_pred^res are examined. It is important to bear in mind the limited, and easily explicable, information that is used in the proposed model. The proposed method exhibits Q_obs^res over 40% for type I beta-turns, over 50% for type II beta-turns, and over 25% for the NS and IV types of beta-turns. The results for type I and II beta-turns are relatively consistent with the competing methods; however, the proposed method surpasses COUDES and BTPRED for type IV beta-turns. Q_pred^res is relatively comparable across the methods, excluding type VIII beta-turns, with between 12 and 20% of the proposed method's predictions being actual beta-turns, whereas COUDES ranges from 20-30%, BTPRED from 9-14% and BETATURNS from 18-26%. We observe that the proposed method outperforms BTPRED with respect to this quality index.
Quality of Beta-Turn vs. Non-Beta-Turn Predictions
We also present a comparison of results when combining the prediction of all beta-turn types into the prediction of generic beta-turns. In this case, any predicted beta-turn type is considered a generic beta-turn and the proposed method is compared against several related methods [13,22,23,27,43,44,[52][53][54] based on 7-fold cross-validation on dataset 426, see Table 3. We note that several other beta-turn prediction methods, which are not included in our comparison, were also developed [55,56].
We observe that the proposed method matches the prediction quality (measured using MCC) of the most accurate sequence-only based methods and, as expected, has lower quality than the methods that utilize multiple alignments and predicted secondary structure. We note that the Chou-Fasman method [52], which is a sequence-only based method characterized by an MCC similar to that of the proposed method, is based on a set of probabilities assigned to each residue, conformational parameters, and positional frequencies determined by computing the relative frequency of a given secondary structure type as well as the fraction of residues appearing in that type of secondary structure. This means that the design of this method was based on data with known secondary structure, while the proposed method was based purely on knowledge of beta-turns and non-beta-turns, without knowledge of any other secondary structures. Most importantly, we note that only two sequence-only based methods, i.e., Thornton's method [23] and Chou's method [1], are available. In contrast to Thornton's method, which addressed prediction of only type I and II beta-turns, the proposed method addresses prediction of five turn types and provides quality comparable with that of Thornton's method. Direct comparison with Chou's method, which predicts six beta-turn types (types I, I', II, II', VI, and VIII), is relatively difficult since these beta-turn types differ from the types predicted by the proposed method.
Table 3 shows that the proposed method finds 63% of all beta-turns, and that 36% of the predicted beta-turns are actual beta-turns. This indicates that the selected features that were applied in the proposed method are in fact associated with beta-turns. We note that the Q_obs^res values of the proposed method are similar to the values obtained by the competing methods that utilize PSI-BLAST and/or predicted secondary structure, while it attains lower values of Q_pred^res. The latter indicates that inclusion of the predicted secondary structure and evolutionary information allows for more selective predictions, i.e., removal of some false positive predictions.
Fig. (6) shows the ROC curve (TP rate = TP / (TP + FN) vs. FP rate = FP / (FP + TN)) for the beta-turn predictions performed with the proposed method. The figure shows that our results are substantially better than a random prediction, i.e., the ROC curve is above the diagonal line. For example, the results show that our method obtains a 26.8% TP rate at a 10% FP rate. We note that we could not draw a ROC curve for the beta-turn type predictions, since different turn types overlap in the sequence, i.e., the same residue is often classified into several beta-turn types.
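For reference, the short sketch below shows how the TP rate / FP rate pairs behind a ROC curve like Fig. (6) could be computed from per-residue scores (hypothetical Python; the paper does not describe how the SVM outputs were thresholded, so the scores and thresholds here are assumptions).

```python
def roc_points(scores, labels, thresholds):
    """Return (fp_rate, tp_rate) pairs for the given decision thresholds.

    scores: per-residue decision values for the generic 'beta-turn' class.
    labels: 1 for an observed beta-turn residue, 0 otherwise.
    Assumes both positive and negative residues are present.
    """
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / negatives, tp / positives))
    return points
```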
DISCUSSION AND CONCLUSION
The proposed method succeeds in providing performance similar to other methods that utilize the same information (only the sequence). The results for beta-turn type prediction have proven similar in certain regards to other machine learning methods that use additional information, and the results for beta-turn vs. non-beta-turn predictions are consistent with the sequence-only based methods. At the same time, we observe that only two sequence-only methods are available for the prediction of beta-turn types from entire protein chains, i.e., Thornton's method [23], which predicts only type I and type II beta-turns, and Chou's method [1], which predicts beta-turn types I, I', II, II', VI, and VIII. In contrast, the proposed method predicts types I, II, IV, VIII, and non-specific (NS) beta-turns, which are consistent with the targets of modern prediction methods [11]. We also observe that the inclusion of additional information such as predicted secondary structure and PSI-BLAST profiles provides a reduction of false positive predictions.
The main advantage of the proposed method is the simplicity and interpretability of the underlying model. It uses only the input protein sequence and does not rely on additional methods. The main contribution of this work is the development of novel sequence-based features that allow identifying different beta-turns and differentiating them from non-beta-turns. The computed features, which are based on tetrapeptides (entire beta-turns) rather than a window centered over the predicted residues, provide a more biologically sound model. They include 12 novel features that are based on the collocation of amino acid pairs with a single or double gap (which denotes inclusion of any amino acid) between them. The selected features further reaffirm the biological relevance of the model, focusing on amino acids that are known to be predisposed to form beta-turns. Virtually all collocated pairs used by the proposed method include Gly (G), which has the highest potential of any residue to form a beta-turn [9,26]. The two motifs based on the double gap are DxxG and NxxG, and the only motif that does not include Gly is DxN. The above three amino acids, i.e., Gly (G), Asp (D), and Asn (N), are the top three with the highest potential to form beta-turns; additionally, Asp and Asn are characterized by the highest positional potential at positions 1 and 3, while Gly has the highest potential at positions 3 and 4 [9,26]. At the same time, our model also includes several features that are geared towards the exclusion of non-beta-turns. According to [9,26], the residues with the lowest potential to form beta-turns include Met (M), Ile (I), Leu (L), and Val (V), and they are shown to be strongly detrimental to the formation of beta-turns when in position 3. To this end, the proposed model includes features that encode the occurrence of Ile, Leu, and Val at position 3.
The datasets used to develop and test the proposed method can be freely accessed at http://biomine.ece.ualberta.ca/BTcollocation/BTcollocation.htm
Fig. (1). Comparison of the existing and the proposed beta-turn prediction methods. The upper portion of the figure represents the design of the classical methods, while the bottom portion shows the proposed design.
Fig. (2). The values of Accuracy_turn (y-axis) against the number of the selected top features (x-axis) for 9-fold cross-validation on dataset 565.
Fig. (3) shows the results associated with different degrees of downsampling performed on dataset 183. Varying the downsampling rate results in a trade-off between Q_obs^turn and Q_pred^turn. Fig. (3A) shows the values of Q_obs^turn and Q_pred^turn weighted by the beta-turn type counts when considering all beta-turn types and non-beta-turns, while Fig. (3B) shows the same but with non-beta-turns excluded. The figure shows that 3%NT results in low Q_pred^turn / high Q_obs^turn for the beta-turns and high Q_pred^turn / low Q_obs^turn when considering both beta-turns and non-beta-turns.
Fig. (3). The Q_obs^turn and Q_pred^turn values (y-axis) for different downsampling amounts (x-axis) obtained based on 3-fold cross-validation on dataset 183 using the SVM with default parameters C = 1.0 and γ = 0.1. Panel A shows results when considering all beta-turn types and non-beta-turns. Panel B shows results when considering only the beta-turn types.
| 9,452.8 | 2008-08-06T00:00:00.000 | [
"Biology",
"Chemistry",
"Computer Science"
] |
THE FRIENDSHIP OF MATTHEW AND PAUL: A RESPONSE TO A RECENT TREND IN THE INTERPRETATION OF MATTHEW'S GOSPEL
David Sim has argued that Matthew's so-called Great Commission (Mt 28:16–20) represents a direct anti-Pauline polemic. While this thesis may be theoretically possible and perhaps fits within the perspective of an earlier era in New Testament research, namely that of the Tübingen School, the evidence in both Matthew and the Pauline corpus does not support such a reading of early Christianity. In this paper, I argue that an antithetical relationship between Matthew's Great Commission and Paul's Gentile mission as reflected in his epistles is possible only (1) with a certain reading of Matthew and (2) with a caricature of Paul. In light of the most recent research on both Matthew's Great Commission and the historical Paul, these two traditions can be seen as harmonious and not antithetical in spite of the recent arguments to the contrary. My argument provides a further corrective to the view of early Christianity which posits a deep schism between so-called Jewish Christianity and Paul's ostensibly Law-free mission to the Gentiles. Author: Joel Willitts¹ Affiliation: ¹Biblical and Theological Studies, North Park University, USA
INTRODUCTION
Over a decade ago, Luz wrote the following about Matthew and Paul: 'had they known one another, [they] would certainly not have struck up a strong friendship' (1995:148). While Luz clearly did not think that Matthew knew of Paul or that he was directly engaging Paul's theological perspective, he nevertheless believed that Matthew's and Paul's theologies were incompatible. Luz's point of view is not, of course, unique but, for the majority of Matthean scholars, it is fair to say that Stanton's assessment is the common one: 'Matthew's gospel as a whole is neither anti-Pauline, nor has it been strongly influenced by Paul's writings; it is simply un-Pauline' (1993:314; also see Mohrlang 1984). In the last decade, however, a formidable, albeit largely singular, voice (which does appear to be gaining some traction)¹ has taken Luz's perspective to the extreme. Beginning in his doctoral dissertation (which was to be published later) and following on in a series of articles as well as in a lengthy monograph, Sim has attempted to show that Matthew and Paul, more than simply having a non-relationship as Luz imagined, were in fact adversaries, that is, at least from Matthew's perspective (1995:4; 1996a:210-219; 1996b; 1998:165-213, also see 69, 19-27, 63-107, 236-256; 2002; 2009 [forthcoming]; Sim & Repschinski 2008).
Sim, by his own admission, has attempted to 'resurrect the [failed] thesis' of Brandon (1957), who, over a half century ago, unconvincingly argued, as acknowledged by Sim, that Matthew was 'intensely anti-Pauline' (Sim 2008:380).² The title of Sim's 2002 article, Matthew's anti-Paulinism: A neglected feature of Matthean studies, serves my point. One immediately notices Sim's unqualified assertion, which is not that Matthew's Gospel might contain themes that could be understood as anti-Pauline but rather that the First Gospel is anti-Pauline: Matthew's Jewish Christian perspective, his support for a Law-observant Gentile mission and the presence of anti-Pauline texts in his Gospel . . . pointed inevitably to the conclusion that Matthew was engaged in a bitter and sustained polemic against Paul himself. (Sim 2002:777) Here Sim has listed three primary reasons for his view:
• Matthew's Jewish-Christian perspective.
• Matthew's support for a Law-observant Gentile mission.
• The presence of anti-Pauline texts in Matthew's Gospel.
These points emerged out of his 1998 monograph, The Gospel of Matthew and Christian Judaism: The history and social setting of the Matthean community, in which Sim provided a detailed case for Matthew's anti-Paulinism.³

1. See Harrington (2008:24-26), who appears sympathetic to Sim's view, although he did offer somewhat of a backhanded compliment when he stated, 'While the evidence for Sim's hypothesis may not seem totally convincing to all, at the very least he has provided a stimulus for us to rethink our largely canon-influenced tendency to harmonize Paul and Matthew'.
Catchpole also advocated an approach to Matthew like Sim's when he figuratively suggested that 'the ghost of Paul' lurked on the stage on which Matthew's drama played out (2002:33). He posited that Matthew's 'all the nations' (Mt 28:20) 'necessarily involved him in taking a position on the Pauline version of Christianity' (2002:33). Furthermore, after establishing the universality of Matthew's understanding of mission, he argued on the basis of Matthean redaction that 'we are pressed toward the conclusion that Matthean Christianity is fundamentally at variance with Pauline Christianity' and that 'the real Christian threat that concerns the evangelist may well come from the direction of the Pauline tradition' (2002:44). Catchpole did, however, diverge from Sim in his understanding of Matthew's positive outlook on the Gentiles. While agreeing with Sim that Matthew's community would have required Gentiles to become Jews to be full members of the people of God, Catchpole argued that Matthew's universalism 'implied dutiful and determined mission whose goal was faithful recognition of the resurrected Lord by persons of any and every ethnic background' (2002:62). Sim, on the other hand, thought that Matthew was not only anti-Pauline but also anti-Gentile. According to Sim (at least in his earlier work; his most recent article dealing with the Great Commission has implied that Matthew was involved in a Gentile mission, which seems to evince a contradiction), although Matthew's community may have recognised a Gentile mission, it neither actively conducted mission to nor was in regular contact with Gentiles: 'the members of this Christian Jewish group avoided the Gentile world and were not conducting or even contemplating a mission to the Gentiles' (Sim 1998:28; 236-256; also see 1995). Catchpole's arguments are addressed indirectly by my critique of Sim below.
2. Davies (1964:316-341) provided the most devastating and definitive critique of Brandon's views in print in his The setting of the Sermon on the Mount.
Sim published two additional articles (another is soon to appear), in which he expanded discussions that he first set out in his 1998 monograph in an attempt to grow the list of Matthean texts that demonstrate an anti-Pauline perspective (2007; 2008; 2009 [forthcoming]). In his 2008 article, Sim writes: I have contended that Matthew's major emphasis on the Torah sets him at odds with the Law-free position of Paul, and that a number of Matthean texts were included and/or redacted in order to counter either the person or the theology of the apostle. The Great Commission that concludes the Gospel can be added to the growing list of anti-Pauline Matthean texts. (Sim 2008:380; also see Sim 2007:343) The purpose of my paper is to assess Sim's interpretation of the Great Commission (Mt 28:16-20) as an overtly anti-Pauline polemic. While my paper is narrowly focused, I hope that it will nevertheless have wider implications for the hypothesis that Matthew's Gospel is anti-Pauline.
The Historical Paul
The point where I would like to begin my assessment of Sim's proposal is his presentation of the Apostle Paul. I have two reasons for this: firstly, Sim's study of the Great Commission begins with a sketch of Paul's view of the Gentile mission; and, secondly and more importantly, it is fair to say that Sim's interpretation of an anti-Pauline Matthew rises or falls on the question of who the historical Paul was.
Sim has described Paul's position on the origin and nature of the Gentile mission with five points based on his interpretation of the first two chapters of Galatians (2008:380-383). Against this interpretative grid, Sim has read Matthew's Great Commission to be 'explicitly or implicitly' refuting Paul (2008:388-389). Given the grid's foundational nature for Sim's argument, I will briefly analyse the most significant of these points.
Firstly, Sim has asserted that, according to Paul, there were 'two separate and independent' missions in the early Christian movement (2008:382). This characterisation of Paul's words is arguable. While there can be no debate that Paul spoke of a mission to the circumcised and of one to the uncircumcised, Sim's interpretation of Paul's statement was more than what Paul said: Paul did not assert two separate and independent missions. Two missions, yes, but the antithetical characterisation of them does not follow. In fact, one can easily - perhaps more easily - characterise the two missions that Paul mentioned as being conjoined and complementary. The mention of Barnabas by Paul in Galatians 2:9 is not insignificant in this regard: 'they gave to Barnabas and me the right hand of fellowship.' Furthermore, not only is it interesting that Barnabas' name was mentioned first, with possible implications for the relative status of the two figures; the recognition of Barnabas' role in the mission of the early church is equally interesting. According to the New Testament, Barnabas was a liminal figure, stretching across both the circumcised and the uncircumcised missions. His very presence in this context speaks against taking the missions as 'separate and independent' because figures like Barnabas and, dare we say, many more nameless figures, were regularly bridging the two missions.⁴

3. Sim attempted to establish the anti-Pauline perspective in Matthew by an appeal to Matthew's treatment of (1) the disciples, (2) James and the relatives of Jesus, (3) Peter and (4) anti-Pauline texts, where 'Matthew vigorously attacks Paul and his law-free gospel' (Sim 1998:199; also see 188-212).
Secondly, Sim has contended that Galatians 1 to 2 make the point that the two missions 'conveyed different gospels to their respective missionary targets' (2008:382). While I agree with Sim to an extent, namely that the preaching of the gospel to the Gentiles (alternatively to Israel) carried unique implications for Torah observance for the respective groups, I do not agree that these distinctions warrant Sim's conclusion of 'different gospels'. Paul insisted on the one gospel (Gl 1:6-8), the same gospel entrusted to both Peter and himself (Gl 2:7). There seem to be differing implications of the one gospel for Jews and Gentiles, however, which therefore necessitated a two-pronged missional strategy (also see 1 Cor 7:17-21). This 'one gospel, but different implications' seems to be the point of the two accounts in Galatians 2. Paul and the Jerusalem 'pillars' agreed on the gospel and expressed the variegated implications based on ethnic distinction (Gl 2:1-10). Furthermore, when Peter acted in a way contrary to the one agreed-upon gospel and its implication for Gentiles, Paul stood against him (Gl 2:11-21). Of particular note is Paul's accusation against Peter: Paul accused Peter of hypocrisy, not apostasy or heresy. In so doing, Paul implied that the problem was Peter's situational adjustment of his behaviour and not that Peter taught another gospel.⁵ Finally, Sim has stated that the 'two independent missions came under the authority of different people, namely Paul and Peter' (Sim 2008:382). He has drawn the following conclusion from this interpretation: Paul had no responsibility for or authority over the Jewish mission headed by Peter. Conversely, and more importantly, Peter and the others in Jerusalem church were to have no involvement in the Gentile mission and certainly no authority over it. (Sim 2008:382) Both the conclusion and the interpretation upon which it is based are problematic and cannot be sustained by a plain reading of Galatians 2. Evident in the text is Paul's recognition of the authority of the Jerusalem church. This recognition can be seen in three ways: (1) Paul stated that he 'laid before them the gospel which he preached among the Gentiles' (Gl 2:2); (2) Paul remembered that he was given 'the right hand of fellowship' and a consequent recognition of the legitimacy of his mission to the Gentiles (Gl 2:9); and (3) Paul referred to a stipulation that was given to him by the Jerusalem church that he not only agreed to but was, in fact, already enacting (Gl 2:10).

5. For a thorough discussion of this point, see Nanos (2002).
6. In addition to the evidence here, other indications can be observed from Paul's letters that, at the very least, hint at Paul's recognition of the authority of the Jerusalem church. For example, Paul's rationale for the offering for the Jerusalem church was suggestive (Rm 15:26-28). Therefore, the comment by Davies about the relationship between Paul and the Jerusalem church remains valid: 'While there were differences in the early church between Paul and "the Judaizers", which cannot be ignored, the fundamental fact remains that according to Galatians and Acts the Jerusalem leaders accepted the Gentile mission of Paul with few conditions' (1964:325).
What is more, it is simply false to allege that Peter and the Jerusalem church were to have 'no involvement in the Gentile mission'. This claim not only excessively overreaches what the text says but is also at odds with evidence within and outside of Paul's letters. Firstly, nowhere did Paul claim to be the only or even the central apostle to the Gentiles. He did, in fact, acknowledge that he was just one of perhaps many when he stated, 'Inasmuch as I am an apostle to the Gentiles' (Rm 11:13-14; also see 2 Cor 3:4-6).⁸ Secondly, as already noted, there were liminal figures who crossed back and forth between the ethnically distinct missions. Thirdly, Paul's statement in Romans 1:16 that '[the gospel] is the power of God for salvation . . . to the Jew first and also to the Greek', along with the Jew/Gentile issue discussed in Romans 14 to 15, suggests that Luke's presentation of Paul's missionary strategy of preaching in synagogues in the cities that he visited was not far from the truth.⁹ It is even possible to suggest, as New Testament archaeologist McRay (2003) does, that Paul's choice of cities was the result of the presence of a Diaspora synagogue.¹⁰ It seems, then, that Sim's interpretation of Paul's view of the origin of the Gentile mission, based as it is on his interpretation of Galatians 1 to 2, contains significant enough weaknesses to call into question his assertion that the Great Commission refuted Paul. It would appear that the Paul whom Matthew supposedly refuted has disappeared.
The point here is not to deny that some of Paul's contemporaries, both Jesus-believing and not, misunderstood him or that Paul had a number of enemies - this is beyond question from the evidence. However, the historical reconstruction of Paul essential to Sim's understanding of early Christian history, to the extent that it resembles a Baurite perspective, continues to suffer severe criticism.¹¹ What is more, some of the most recent trends in Pauline scholarship increasingly render such a picture untenable.
One such trend that is gaining broad international support is the 'Torah-observant Paul'.¹² Within this line of interpretation, the Paul of history did not hold that 'the ritual requirements of Judaism, the observances which marked the Jews as a race apart from other peoples, were no longer appropriate in light of the coming of Christ' (Sim 1998:21-22; 2008:385; Sim & Repschinski 2008:4); had not himself 'abandoned' Torah observance while conducting his mission (Sim 1998:22; 2008:386)¹³; did not behave in a chameleon-like manner, observing the law only when missionally convenient (Sim 1998:22, 24); nor did he deny 'the very fundamentals of Judaism' (Sim 1998:23). Rather, the historical Paul continued to identify himself as 'an adherent of the Jewish faith' and lived 'within the confines of Judaism'; and he remained a Jew after his Damascus-road experience as a matter of 'religious commitment' (Sim 1998:23-24).

7. See Acts 21. Also see the discussions by Bauckham (1990; 1995; 2006).

11. See the critique by Davies of Brandon (1964:324-325) and, most recently, by Bockmuehl (2006). In contrast, Marcus (2000) observed the re-emergence of Baur's thesis in postwar Pauline scholarship, which, to his mind, provided part of the motivation for recent projects on Paul and Mark. No doubt of great influence in this regard was the work by Martyn (1997a; 1997b). Also clearly influential, particularly for Sim, was Lüdemann's Opposition to Paul in Jewish Christianity (1989). In a chapter titled 'The Matthean community and Pauline Christianity' in Sim's 1998 monograph, he approvingly cited Lüdemann more than 12 times (1998:165-213).

13. Mohrlang's study of Matthew's and Paul's ethics carried the same bias: 'That Paul the Christian continues to observe the traditional practices of the Jewish law as a Pharisee is beyond belief . . . There is nothing in his writings to suggest that his ordinary daily life and conduct are governed by legal regulations halakic-style' (1984:39-40). Perhaps contributing to such a view was Mohrlang's insistence on maintaining the false distinction between 'moral' and 'ritual' requirements of the Law in Paul's thinking (1984:33-34).
Had this alternative Paul of history and Sim's hypothetical Matthew been contemporaries, this Paul would surely have been in conflict with him over his insistence that Gentiles needed to be circumcised to be counted among participants in the Messianic restoration with Jesus-believing Israelites. Yet this Paul would not have disagreed with this Matthew on the abiding nature of the Torah for Israel or on the necessity for Israelites to keep the Torah as believers in Messiah Jesus.¹⁴ Furthermore, this alternative Paul and his Gospel would not have been characterised as 'Law-free' according to this view, since (1) he stated that Gentile followers were under the 'law of Christ' (Gl 6:2; 1 Cor 9:21) and that his apostleship was for the purpose of bringing about 'the obedience of faith among all Gentiles' (Rm 1:5) and (2), as several scholars have shown, he appeared to use the Torah as the ethical framework for his Gentile churches (see earlier Davies 1980; also more recently see Bockmuehl 2003; Nanos 1996; Tomson 1990; Van Bruggen 2005).
In presenting this alternative reconstruction of Paul, I wish only to show that Sim's proposal is entirely based on a perspective of Paul that is hardly assured.¹⁵ Sim must adjudicate his rather old-fashioned view of Paul with argumentation that takes into account all the evidence and interacts with the recent research and the new ways of reconstructing the historical Paul. In addition, his exegesis of Galatians is less than convincing and is inadequate as the basis for his discussion of Paul's and Matthew's understanding of the origin and nature of the Gentile mission.
While a more convincingly argued hypothesis of Paul is necessary, even the one suggested by recent research of a Torah-observant Paul still shows some tension with the Matthean perspective that Sim has advocated. It is to this second issue that I now turn my attention.
Matthew's Mission(s) to Jews and Gentiles
Is it really the case that, with the Great Commission, Matthew's Gospel promulgated a Torah-observant mission to the nations, as Sim has ardently affirmed? While a host of Matthean scholars would reject such an assertion outright on the basis of an extra muros view, those of us who agree with Sim's view of the Jewish Christian character of the First Gospel must, at the very least, entertain this possibility.
Additionally, Sim is to be commended for rightly having pointed out the implausibility of the prevalent interpretative approach to the Gospel that holds, on the one hand, to the Jewish Christian nature of the Gospel, while, on the other, interpreting Matthew 28:16 to 20 as universalising Matthew 10:5 to 6. This universalising interpretation, which assumes a single Torah-free mission to both Jew and non-Jew, according to Sim, proves ultimately 'implausible' because it 'too often ignores the Jewish dimension of the universal mission' (2008:386).
Sim has rightly argued that a universal mission that is Law-free stands in tension with the Gospel's emphasis on the Torah (Mt 5:17-19). Sim's alternative interpretation has maintained the universalising element of the prevailing interpretation but has turned it on its head by taking the single mission to be a Torah-observant one (2008:385-388). Is it true that we are left with only these two alternatives: a universalised mission that is either Law-free or Law-observant?

14. See Rudolph (2008:10). Also see Tomson (2001b:267-268).

15. See, for example, the useful Wirkungsgeschichte by Bockmuehl (2006:121-136) of Peter and Paul in the earliest Christian writings and iconography, which reveals a very different history of the early church from the dialectical approach followed by Sim.
In this section, I wish to assess the claim that Matthew advocated a Torah-observant mission to the Gentiles. I begin with Sim's characterisation of Matthew's first mission (Mt 10:5-6) as a 'Law-observant mission'. Sim has stated that '[v]ery few scholars would dispute that the original mission to the Jews in Matthew's narrative was Law-observant' (2008:385). Sim's point seems at first both reasonable and incontrovertible, since the original mission was directed towards the 'lost sheep of the house of Israel'. On second thought, however, I am not so sure that most scholars would agree that Jesus' Galilean mission, as Matthew described it, was justifiably characterised as a 'Law-observant mission' in the way that Sim has meant, since Law observance hardly appears integral to the mission or message of Jesus and his disciples according to Matthew's story.
In the Matthean portrayal, Jesus did not send the Twelve out on mission for the purpose of enforcing Torah obedience among the disenfranchised in the Greater Galilean region. He sent them rather to proclaim the soon-coming Kingdom and to dispense the blessings of that Kingdom as they travelled from city to city (Mt 10), as he himself had done (Mt 8-9). It is true that Jesus' and the Twelve's mission was directed to the 'lost sheep of the house of Israel' (Mt 10:6; 15:24) 16 and that the Matthean Jesus believed following him and preparing for the coming of the Kingdom would produce a surpassing righteousness (5:20). But how can this mission be justifiably characterised as 'Law-observant'? Of course it can, if, by this, one means that the target audience of the original mission was Israelites. This, however, is much less than what Sim has implied with this adjectival phrase. I have a difficult time seeing where in Matthew's narrative of the Galilean mission (Mt 4:12-19:1) one finds an emphasis on the Law-observant nature of the mission, where one finds a focus on the enforcement of Law observance.
In fact, quite to the contrary, Matthew portrayed Jesus' mission as one that, rather than enforcing scrupulous Law observance, served segments of society that were ostracised by 'Torah-observant' Pharisees. I have in mind here Jesus eating with 'tax collectors and sinners' (Mt 9:10-13). Jesus' response to the Pharisees seemed to reveal a mission that would not be best characterised as 'Law observant': 'Go and learn what this means, "I desire mercy, not sacrifice." For I have come to call not the righteous but sinners.' With this statement, Jesus did not intend to undermine the importance of the Torah for Israel; it does, nevertheless, suggest that Jesus' mission did not begin with matters of Torah observance.
It is apparent that what Sim meant by his characterisation of the mission is that Jews were expected to follow Jesus and keep the Mosaic Law. For the Galileans and Judeans of the early first century, following Jesus meant keeping the Torah, thereby remaining firmly within their covenant obligations to Yahweh.
In fact, Matthew took great pains to show that Jesus upheld the Torah and that the conflicts that he had with his contemporaries over Torah observance were related to its interpretation and not to its continuing validity (Mt 5:17-48). Nevertheless, what warrant justifies this observation as evidence for the claim that Jesus had a 'Law-observant mission'? It seems that Sim has overreached the evidence in asserting that Matthew's mission to the Jews was 'Law observant'. While it is true that his Galilean and Judean followers saw no contradiction between following Jesus and keeping the Messianic Torah, to characterise Jesus' Galilean mission as a 'Law-observant mission', at least in the way that Sim has done, does not emerge naturally from Matthew's story. Matthew's point of emphasis in his presentation of Jesus' mission does not seem to be on Torah observance. The moniker, it seems to me, is therefore inappropriate.
The point is: the claim that a mission to the nations is by definition a Law-observant mission because the first mission was a Law-observant mission does not convince. What Matthew no doubt affirmed was the continuity between following Jesus and Israel's historic covenant. And it is certainly true, as Sim has pointed out, that, whatever we make of the Great Commission (Mt 28:16-20), it cannot be said, as the prevalent universalising approach holds, that discipleship for Israelites represented an abrogation of their covenantal responsibilities.
Furthermore, if it can be shown - against Sim and the consensus of Matthean scholarship - that the final mission command in Matthew was not a revision of the first, then, perhaps, the problem created by the interpretation of a single mission with a single message evaporates altogether. In other words, if the target audience was ethnically distinct from Israel in the second mission, one might expect there to have been some difference in the nature of the two missions.
As it turns out, some recent but as yet underappreciated voices have argued for this very point. These scholars have asserted that the two mission statements, when compared, reveal significant differences that likely imply two distinct missions, with distinct ethnic target groups and, consequently, distinct missional tasks. I have argued elsewhere in greater detail than is possible here that the 'universalising' or salvation-historical interpretation of the Great Commission is problematic because of its tendency to create theological abstractions foreign to Matthew's historical context (see Willitts 2007). In addition, the German scholar Von Dobbeler (2000; also see Wilk 2002:129-130) has presented a convincing alternative interpretation of the relationship between the two mission commands in Matthew. 17 The essence of Von Dobbeler's argument was that the final mission command should not be seen as either replacing or expanding the first mission command (2000:24-27). Rather, they should be seen as complementary (Komplementarität), even if distinct, expressions of the one mission of Jesus, the Messiah. Von Dobbeler's interpretation began with the observation that the two mission commands revealed a distinction in target groups (Zielgruppen), goals (Ziele) and tasks (Aufträgen). Von Dobbeler then explained: Sie stehen freilich nicht einfach nebeneinander, sondern sind aufeinander bezogen als komplementäre Wirkungen des Messias Jesus und der in seiner Nachfolgen messianisch wirkenden Jünger.
(Von Dobbeler 2000:27-28) The aim of the mission of Jesus and his disciples was accordingly ethnically distinct: two different groups entailing two different missionary tasks. The mission to Israel involved the announcement of the coming of the kingdom of God and Israel's restoration (Mt 10). In contrast, the mission to the nations meant the extension of the kingdom of God throughout the whole earth and implied the conversion of the nations to the living God (Mt 28). The Jewish Scriptures envisaged a time when Israel would be restored and, as a consequence, the nations as nations would turn from idolatry and worship Yahweh. The perspective of a complementary relationship between the two mission commands more or less outflanks Sim's arguments for a Law-observant mission by rendering them unnecessary. With a complementary approach, we are able to maintain the thoroughly Jewish perspective and its apparent emphasis on the continuing validity of Torah observance for Jewish believers in Jesus, while, at the same time, reflecting the equally Jewish perspective that the nations as nations will worship Yahweh as a result of Israel's restoration. This approach allows Matthew's Gospel to offer a bifurcated and complementary mission to both Jews and non-Jews that is consistent with messianic perspectives in the Jewish Scriptures and in some segments of Second Temple Judaism. 18 One final factor informing Sim's understanding of the Torah-observant nature of the Gentile mission is his belief that Jesus' command to 'make disciples of all nations . . . teaching them to observe all that I command you' (μαθητεύσατε πάντα τὰ ἔθνη . . . διδάσκοντες αὐτοὺς τηρεῖν πάντα ὅσα ἐνετειλάμην ὑμῖν) (Mt 28:19-20) implies Jesus' teaching about the Torah in Matthew 5:17 to 19. He has reasoned as follows: These three sayings must be taken literally and seriously. When we do so, it becomes almost inconceivable that the risen Jesus at the end of the Gospel simply dismissed the necessity for circumcision (or any other ritual requirement of the Torah) and replaced it with baptism. If Matthew was consistent on the fundamental subject of the Torah, then we have to conclude that the universal mission enjoined by the risen Lord, which was to be conducted prior to the parousia, must have proclaimed a Law-observant gospel. Circumcision as well as baptism must have been required of Gentile converts. (Sim 2008:386-387) A careful reading of Matthew 15:21 to 28 reveals several relevant points in this regard. Firstly, the moniker 'Canaanite' is not merely a matter of Matthew 'archaising' in order to evoke images of Israel's enemies. 20 More likely, Matthew could be said to be scripturalising 21 the woman's identity to reveal concern for the status of non-Israelite subjects within the restored kingdom of Israel. 22
18. This interpretation provides a more convincing explanation than Davies himself gave for his observation: 'There can be no question that . . . "universalistic" no less than "particularistic" sayings are congenial to Matthew; the former no less than the latter were an expression of his interests' (1964:330).
19. The appeal by Sim (2008:387) to Qumran as a parallel to Matthew for a contemporary Jewish sectarian group remaining silent about circumcision, although obviously implying its validity, came across as special pleading. That no Gentiles were members of the community and that the Qumranites had no interest in a Gentile mission make any comparison on this issue unconvincing.
21. This is a term that I created to distinguish my view from those who use 'archaising'. While the function of the two terms is the same, in other words the use of a familiar scriptural term to designate the identity of the woman, I wanted to avoid the word 'archaising' because it is so closely linked with the idea that 'Canaanite' evokes images of Israel's enemies. Also see Davies (1993:115).
22. See Levine's perceptive comment, although she clearly arrived at different conclusions, evinced by the following: 'By labeling the woman a Canaanite, Matthew refuses to dismiss the non-Jewish population of the land' (2001:40). Also, Kick (1994:110-111) recently argued that Matthew's term 'Canaanite' should be understood as a reminder to his readers of YAHWEH's land promise
Secondly, in view of Matthew's belief in the soon-coming (and present) kingdom of God/Israel, the exchange between Jesus and the Canaanite woman likely provided confirmation to a Jewish reader of Jesus' Messianic identity. The Canaanite woman was portrayed as submitting to Jesus' authority as the Davidic Son in an area where the rule of David once reached. While the leadership of Israel rejected Jesus' identity and authority, the Canaanite woman acknowledged and appealed to it.
Thirdly, Matthew's portrayal of the Gentile woman was one in which the woman exhibited 'an exemplary Jewish faith' in that she recognised 'the saving intervention of the God of Israel through his messiah'. 23 Indeed, on the basis of this faith (μεγάλη σου ἡ πίστις), Jesus granted her request (Mt 15:28). 24 In other words, as Kick (1994) has persuasively argued, the Gentile woman 'stands near' (nahesteht) the Jewish eschatological outlook of Matthew's Jesus; the woman shared the same perspective and saw her salvation as tied up with Jesus' successful completion of his vocation to shepherd Israel.
Evidence of her Jewish faith is seen in two ways. 25 Firstly, her approach and address before Jesus were appropriate to his identity as God's Messiah: she prostrated herself (15:25), acknowledged him to be Israel's legitimate king (15:22) and recognised his lordship (note the use of 'Lord' three times in the context: 15:22, 25 and 27). In addition, with the accompanying parable about the children and dogs (15:24, 26), she acknowledged her nationality and willingly submitted herself to Israel's Messianic Shepherd-King. 26 Her response to Jesus' rebuttal revealed that, although acknowledging the centrality of Israel, she asserted that she was included at Israel's table, albeit as one of Israel's 'puppies' (κυνάρια).
Her agreement with Jesus' parable, however, was to a different effect (15:26). She showed that she understood herself to be a part of the 'house of Israel'; 27 admittedly not as one of the lost sheep, but she asserted that she was nonetheless allowed access to the breadcrumbs from the master's table. 28 Applying the very parable that Jesus used, the woman asserted that she could participate as a Gentile within the 'house of Israel'. Just as a puppy participates in the household of a family around the master's table, receiving what is appropriate to it, so the woman participated as a Gentile within the house (or kingdom) of Israel, receiving the share of the Messianic Kingdom appropriate to her. In this way, Hill (1972) was probably right to have suggested that
(footnote 22 continues ...) to Israel, seen in texts like Deuteronomy 11:12 and Leviticus 25:23.
23.Both quotations are from Nolland (2005:632).Some commentators, like Love (2002:17-18), had difficulty accepting that a Gentile woman would have understood the significance of the terms 'Lord' and 'Son of David'.Hence, they suggested that she understood them in a way other than free from the Jewish Messianic meaning.The fact of the matter is that whether or not the woman in actuality understood the Messianic significance of the terms is unknowable and irrelevant.Clearly, Matthew exploited their full Messianic implications.
25. Jackson (2002) made the case that this story represented a conversion to Judaism; in effect, the woman had become a proselyte. She writes that 'the evangelist's redaction of this story places proselytism into Judaism at the very center of Matthew's concerns' (2000:946). For a concise summary of her thesis, see Jackson (2000:945-946). Yet Nolland's critique is legitimate: 'despite her very Jewish faith, the Canaanite woman becomes a beneficiary of Jesus' ministry not as a freshly made Jewess, but as a Gentile' (2005:636; n217; also see Nolland [2004]). Nanos (2009a [forthcoming]) has an interesting alternative interpretation of the woman's mixed identity as both Israelite and Gentile; the term 'Gentile', in his view, is therefore perhaps not the best.
26. Nolland (2005:635) was right to translate the opening words of Matthew 15:27 (ναὶ κύριε, καὶ γάρ) as 'Yes, Lord, to be sure'. As rationale for his translation, he states that 'following a linking καί ("and"), it introduces what is to be seen as an implication drawn out from what has been affirmed'.
'crumbs' did not imply that the woman received only a fragment of what was given to Israel.The point, according to Hill, was that 'their needs are adequately met'. 29 What was insightfully recognised by Kick (1994) is that he affirmed the points of view of both main characters. 30Matthew's story placed the vocation (die Aufgabe) of Jesus for 'the lost sheep of the house of Israel' alongside the Canaanite's request for the life of the Messianic age.The latter was not superseded or abrogated by the former but was the very basis on which the latter was made possible.Together, they were the complete picture of the coming of the kingdom of God according to Matthew. 31 Matthew told a Jewish story about Israel's Davidic Messiah, Jesus, in which he extended mercy to a non-Jewish subject, granting her request.His action was, perhaps to the surprise of some, the result of the woman's resolute act and proper political Israel-centric outlook. 32She acknowledged her subordinate national identity vis-à-vis Israel and addressed Jesus in those terms without once doubting her right to a share in the powers of the Messianic age. 33The narrative, then, revealed that Gentiles had a right to exist and participate in the Messianic age by adopting the appropriate posture towards Israel's Messiah.Bacon (1930) concluded something similar about this episode nearly a century ago: To Matthew the Canaanite woman is as typical an example of the stranger adopted among the people of God as Rahab the Canaanite harlot and Ruth the Moabitess, whom he specially mentions in his genealogy of Christ.Along with the believing Centurion, she is to Matthew the type of many who are to come from the East and from the West to "sit down with Abraham Isaac and Jacob" at the messianic feast.(Bacon 1930:227) In view of this story and others like it in the Gospel (e.g.Magi [2:1-12], Centurion [8:5-13]), it is difficult to be convinced of a view that Matthew's outlook on Gentiles and their entrance into the kingdom of Heaven presupposed proselytisation. 34 Instead, it seems more likely that making disciples of all nations involved instructing them in the Lord's teaching that specifically applied to them.This more nuanced approach to Gentiles and their eschatological fate would be at home in the variegated perspectives of first-century Judaism about the destiny of Gentiles. 35One such view was that of the so-called 'righteous Gentiles', who had a place in the age to come by keeping the Torah that applied to them. 36While I am not arguing that this was what Matthew presupposed, I think that it is at least as much, if not more likely, an interpretation than Sim's, given the evidence in Matthew.However, caution on this question is the most proper posture and it should not, in the end, be made a foundation of any reconstruction of Matthew's understanding of the Gentile mission because, as Bockmuehl rightly observed, 'although Matthew clearly tries to formulate a "Jesus halakhah" (e.g. in 5.21-48; 19:3-9), many questions remain wrapped in diplomatic silence ' (2003:163).
31.Far from a replacement of Israel by an abstract idea of 'faith', Kick (1994:114) rightly thought that this Matthean text described the coexistence (ein Miteinander) of Jewish faith and Gentile Christian faith on the foundation of Israel's faithfulness to YAHWEH and YAHWEH's promise of faithfulness to Israel.
35. Sim has shown his awareness of the variety of views held by first-century Israelites concerning Gentiles' relationship to the Torah. See Sim (1996b:174-177; 1998:17-19).
The Comparison of Matthew and Paul
One final point that I would like to raise concerning Sim's argument for an anti-Pauline Matthew relates to his comparative methodology. Sim has chided Matthean scholars for not seriously considering the question of Matthew's view of Paul (2002:768). However, in light of recent research, it is perhaps more legitimate to question a modern interpreter's ability to offer anything by way of a convincing answer to just such a question. Several factors, which are more seriously appreciated in contemporary scholarship than in the past, conspire against claims that are based on a comparison of Matthean and Pauline literature.
In the early 1980s, Mohrlang published a comparative study of Matthew's and Paul's ethics, concluding the entire discussion with a section titled the 'Factors underlying their differences' (1984:128-132). There he outlined seven factors that he believed went a long way to explaining the differences between the two figures on the question of ethics. 37 What Mohrlang seemed not to appreciate at the time was that the factors that he listed did not simply make Matthew and Paul different but actually revealed the near impossibility of coming to anything resembling a convincing claim based on a literary comparison of the two.
It would be one thing if Matthew and Paul shared a common social context -which they did not -if they dealt with similar rhetorical concerns -which they did not -or if they wrote in a similar genre -which they did not.Because Matthew and Paul shared none of these, claims of stark theological difference and, certainly, claims of explicit refutation are highly speculative and therefore unconvincing.
Let us take the issue of genre as an example. Mohrlang admitted that the issue of genre 'provides perhaps the single greatest difficulty for any attempt to compare the two writers' thought comprehensively' (1984:130). Since Mohrlang's observation, significant progress in the area of genre criticism has only strengthened his assertion. Led by the work of Burridge (1992; 2004), 38 genre criticism has not only largely settled the issue of the Gospel's genre but also clarified the interpretative limits within a genre. If the gospels are Greco-Roman βίοι, their concern is singularly Jesus of Nazareth. Given the focus of the Gospel's genre, however, it becomes much more difficult to use the Gospel as a window into the concerns of the Matthean community. This does not, of course, mean that we cannot know anything about the author and his community's historical context from the concerns observable in the Gospel, but it does mean that anything more than a description of their general contours is going to be less convincing.
On the other hand, given the severely situational nature of Paul's letters, very little can be known about Paul's views beyond the rhetorical context of his pastoral concern for his Gentile-believing communities. 39 No longer is it therefore justifiable to universalise Paul's statements, especially about the Torah, beyond their Gentile horizon. Paul clearly believed that Gentiles should not be circumcised and take on the yoke of the Torah, but it is very likely that this would not have been his view for Israel in light of his 'rule' stated in 1 Corinthians 7:17 to 24: However that may be, let each of you lead the life that the Lord has assigned, to which God called you. This is my rule in all the churches. Was anyone at the time of his call already circumcised? Let him not seek to remove the marks of circumcision. Was anyone at the time of his call uncircumcised? Let him not seek circumcision. Circumcision is nothing, and uncircumcision is nothing; but obeying the commandments of God is everything. Let each of you remain in the condition in which you were called. Were you a slave when called? Do not be concerned about it. Even if you can gain your freedom, make use of your present condition now more than ever. For whoever was called in the Lord as a slave is a freed person belonging to the Lord, just as whoever was free when called is a slave of Christ. You were bought with a price; do not become slaves of human masters. In whatever condition you were called, brothers and sisters, there remain with God. 40 Recently, Rudolph has usefully commented as follows: 'Paul's statement . . . required Jesus-believing Jews to continue to live the circumcised life as a matter of calling and not to assimilate into a Gentile lifestyle' (Rudolph 2008:10; also see Tomson 2001b:267-268).
38. In an autobiographically orientated reflection, Burridge remarked that 'It is now clear that this approach has won widespread acceptance and that most scholars on both sides of the Atlantic and across the disciplines accept that the gospels are in a form of ancient biography'. Therefore, 'our arguments for the biographical genre of the gospels have rapidly become part of a new consensus' (2004:269; 306).
39. Burridge (2004:248-249) pointed out the significance of the distinction between the genres of the Gospel and Paul's letters but did not develop it.
Therefore, the issues of social context, rhetorical concerns and genre present significant methodological obstacles that may very well undermine the kind of comparison that Sim has undertaken.While it is possible to describe Matthew and Paul's outlook on questions that arise out of their literary creations, it is altogether another thing to use these as definitive statements on shared topics that can then be legitimately compared with the other.
Conclusion
In light of the foregoing discussion, I cannot agree with Sim that the Great Commission in Matthew 28:19 to 20 promulgated a Torah-observant mission to the Gentiles or, more fundamentally, that Matthew implicitly and explicitly refuted Paul. I can understand how Sim came to this conclusion - his argumentation is coherent, well argued and supported with evidence - but I think that neither the Gospel's plain sense nor Paul's own statements about the Gentile mission and his apostolic career leads a reader to this conclusion.
To me, Sim has marshalled evidence from the First Gospel that did not directly or immediately refer to Paul or his mission. And only after one accepts Sim's assumption that Matthew's outlook was anti-Pauline does the evidence connect to the claim. 41 But this is question-begging, because nowhere did Matthew specifically mention Paul. Furthermore, the so-called allusions to Paul pointed out by Sim were, at best, veiled and subjective, with little to anchor such claims in the narrative of Matthew. Too often has Sim appealed to evidence that either cannot be substantiated with a high degree of certainty or whose warrant - the assumptions that connect the evidence with the claim - is highly speculative. Furthermore, Sim's Baurite reconstruction of early Christian history, within which alone his hypothesis has convincing power, requires a certain kind of historical Paul that is becoming a less convincing historical portrait of the Apostle to the Gentiles.
Sim's proposal then appears to me to be severely overstated and, at crucial points, to be overreaching the evidence that he has cited in both Matthean and Pauline literature. Therefore, Stanton's assessment seems to be the most convincing statement on the relationship between Matthew and Paul. However, while the early position of Davies (1964) is in need of revision, as Sim himself has pointed out (2002:771), Davies's central point must also remain a live option. To put it plainly, it is possible
40. This Scripture quotation was taken from the NRSV.
41. Perhaps telling in this regard is his soon-to-be published essay in the Journal for New Testament Studies, where he has overtly stated in his abstract that 'An intertextual relationship between the Gospel and the Pauline corpus becomes clear once we understand that Matthew, as a Law-observant Christian Jew, was opposed to the more liberal theology of Paul' (2009).
that, if Matthew and Paul had been contemporaries - contrary to both Luz and Sim - they could have struck up a splendid friendship. 42
In his most recently published article, titled Matthew, Paul and the origin and nature of the Gentile mission: The Great Commission in Matthew 28:16-20 as an anti-Pauline tradition (2008), he sought to show that Matthew's so-called Great Commission (Mt 28:16-20) should be included among the anti-Pauline texts in Matthew. He wrote:
37. Mohrlang's list consisted of the following seven factors: (1) social factor; (2) polemical factor; (3) motivational factor; (4) psychological factor; (5) Christological factor; (6) literary factor; and (7) interpretative factor (1984:128-131). While several of these are open to critique, factors 1, 2 and 6 are clearly fundamental. | 10,523.8 | 2009-07-24T00:00:00.000 | [
"Philosophy"
] |
Metric Building of pp Wave Orbifold Geometries
We study strings on orbifolds of $AdS_{5}\times S^5$ by SU(2) discrete groups in the Penrose limit. We derive the degenerate pp-wave metrics of $AdS_{5}\times S^5/\Gamma$ using ordinary $ADE$ and affine $\widetilde{ADE}$ singularities of complex surfaces and results on ${\cal N}=4$ CFT$_4$'s. We also give explicit moduli dependencies of the metrics for abelian and non-abelian orbifolds.
Introduction
It is by now possible to derive the spectrum of string theory from the gauge theory point of view not only on flat space but also on the plane wave limit of $AdS_5\times S^5$ [1]. This is amongst the fruits of the AdS/CFT correspondence [2,3,4,5] relating the spectrum of type IIB string theory on $AdS_5\times S^5$ to the spectrum of single trace operators in 4D ${\cal N}=4$ super Yang-Mills theory. The idea is based on considering chiral primary operators on the conformal field theory side and looking for their corresponding states on the string side, here on pp-wave backgrounds. For instance, a field operator like $\mathrm{Tr}[Z^J]$ with large $J$ is associated with the string vacuum state in the light cone gauge $|0,p^+\rangle_{l.c.}$ with large momentum $p^+$. Both objects have a vanishing $\Delta-J$, interpreted on the string side as the light cone energy $E_c$ of type IIB strings on the pp-wave background and on the field theory side as an anomalous dimension. The correspondence between the whole tower of string states $\prod_{r,s} a^{\dagger}_{n_r} S^{\dagger}_{m_s}|0,p^+\rangle_{l.c.}$ with $E_c=n$ and gauge invariant conformal operators $\mathrm{Tr}[O]$ is deduced from the previous trace $\mathrm{Tr}[Z^J]$ by replacing some of the $Z$'s by monomials involving the gauge covariant derivative $DZ$ and/or the remaining four transverse scalars $\phi^j$ and fermions $\chi^a$ of the ${\cal N}=4$ multiplet. The BMN correspondence rule between superstring state creation operators $a^{\dagger}$ and $S^{\dagger}$ and CFT$_4$ field operators is.
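The displayed form of this rule is not reproduced in the present copy. For orientation, we recall the schematic dictionary as it is usually quoted in the BMN literature; the normalisation and index conventions below are assumptions rather than a quotation of the missing equation.

```latex
% Schematic BMN dictionary (standard form from the literature; normalisation
% and index conventions are assumptions, not quoted from this paper):
a^{\dagger\,i}_{n}\, a^{\dagger\,j}_{-n}\,|0,p^{+}\rangle_{l.c.}
\;\longleftrightarrow\;
\frac{1}{\sqrt{J}}\sum_{l=0}^{J} e^{\,2\pi i n l/J}\;
\mathrm{Tr}\!\left[\phi^{i}\, Z^{\,l}\, \phi^{j}\, Z^{\,J-l}\right]
```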
For more details, see [1]. Soon after this discovery, intensive interest has been devoted to further exploring this issue; in particular the extension of the BMN results to pp-wave orbifolds with $U(N)$ symmetries [6,19] and orientifolds of D-brane systems with an $Sp(N)$ gauge invariance [7], see also [8,9]. In [6], the BMN proposal has been extended to type IIB superstrings propagating on pp-wave $Z_k$ orbifolds. There, it has been shown that first quantized free string theory on such a background is described by the large $N$, fixed gauge coupling limit of ${\cal N}=2$ $[U(N)]^k$ quiver gauge theory, and a precise map between gauge theory operators and string states has been proposed for both untwisted and twisted sectors. For $\Delta-J=0$, the BMN correspondence for the lowest string state reads as, where $|0,p^+\rangle_q$ is the vacuum in the $q$-th twisted sector. One may also write down the correspondence for the other states with $\Delta-J=n>0$.
Here one has a rich spectrum due to the presence of $k-1$ twisted sectors in addition to the usual one. This analysis remains however incomplete, since it concerns only a special kind of ${\cal N}=2$ CFT$_4$ model: the most familiar supersymmetric scale invariant theory one can have in four dimensions. In fact there are several other ${\cal N}=2$ CFT$_4$'s in one-to-one correspondence with both ordinary and affine ADE singularities of the ALE space. These models have very different moduli spaces, and so it would be interesting to explore how the BMN correspondence extends to general ${\cal N}=2$ CFT$_4$ models and how the machinery works in general. The aim of this paper is to further develop the analysis initiated in [6] by considering all possible kinds of abelian and non-abelian pp-wave orbifolds. Here we will focus our attention on the type IIB string side by deriving explicitly the moduli dependent metrics of all kinds of pp-wave orbifolds preserving sixteen supersymmetries. We will put the accent on the way the analogue of the field moduli of the quiver gauge theory enters the game on the string side. In [14], we will give the details concerning the corresponding ${\cal N}=2$ CFT$_4$ side.
The presentation is as follows: In section 2, we recall some aspects of the pp wave geometry in the BMN limit of AdS 5 × S 5 . In section 3, we study ordinary SU (k) pp waves geometry. In section 4, we consider its SU (k) affine analogue and also give the explicit derivation of the moduli dependent metrics. In section 5, we derive the results for Affine SO (k) pp wave geometries and make comments regarding the other kinds of orbifolds.
pp waves orbifold geometries
To start, recall that the general form of the plane wave metric of $AdS_5\times S^5$ in the BMN limit reads as, Here the symmetric matrix $A_{ij}(x^+)$ is in general a function of $x^+$. For simplicity, we will take it as $A_{ij}=\mu\delta_{ij}$. In this case, the five form field strength is given by , and the previous metric reduces to, The study of type IIB strings on the Penrose limit of $AdS_5\times S^5$ orbifolds, $AdS_5\times S^5/\Gamma$, depends on the nature of the discrete finite group $\Gamma$. Group theoretical analysis [12] on the types of discrete symmetries $\Gamma$ one can have in this kind of situation shows that $\Gamma$ has to be contained in a specific $SU(2)$ subgroup of the $SO(6)$ R-symmetry of the underlying theory. It follows from this constraint equation that $\Gamma$ may be any finite subgroup of the ADE 1 classification of discrete finite subgroups of $SU(2)$ [12,13]. These finite groups, which are well known in the mathematical literature, are either abelian, as for the usual cyclic $Z_k$ group, or non-abelian, as for the case of the binary DE groups defined by, together with analogous relations for $E_7$ and $E_8$. In what follows, we shall address the question of metric building of pp-wave orbifolds with respect to some of these groups; more details on the way they are involved in the BMN correspondence will be exposed in a subsequent paper [14].
To derive the moduli dependent metric of orbifolds of the pp-wave geometries, we start from eq(2.2) and use the local coordinates $(x; z_1, z_2)$ of the space $R^4\times C^2\sim R^4\times R^4$, where $x=(x_2,x_3,x_4,x_5)$ and where $z_1=(x_6+ix_7)$ and $z_2=(x_8+ix_9)$. In this coordinate system, the metric of the pp-wave background has a manifest $SO(4)\times SU(2)\times U(1)\subset SO(4)\times SO(4)$ isometry group and reads as, where $|z_i|^2=z_i\bar{z}_i$. In type IIB closed string theory, where these coordinates are interpreted as two-dimensional world sheet bosonic periodic fields, the above relation leads to a very remarkable field action which, in the light cone gauge, is nothing but the action of a system of free and massive harmonic oscillators. The field equations of motion $(\partial_\alpha\partial^\alpha-\nu^2)x=0$ and $(\partial_\alpha\partial^\alpha-\nu^2)z=0$ are exactly solved as, $\tfrac{1}{\sqrt{2\omega_n}}\left(e^{-i\omega_n\tau+in\sigma}a_n-e^{i\omega_n\tau-in\sigma}a^{\dagger}_n\right)$, (2.8) and similarly for the $z$'s and for fermionic partners. In this equation, $a_n$ and $a^{\dagger}_n$ are, roughly speaking, the harmonic annihilation and creation operators of string states and the $\omega_n$ frequencies are as follows. Due to the presence of the background field, these $\omega_n$'s are no longer integers, as they are shifted with respect to the standard zero mass results.
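The display equations referred to in this paragraph (the pp-wave metric and the shifted frequencies) are not legible in this copy. For orientation, we recall the standard expressions from the BMN literature; the overall normalisations are assumptions and may differ from the conventions of the original equations.

```latex
% Standard maximally supersymmetric pp-wave background (BMN limit of AdS_5 x S^5).
% Overall normalisations/conventions are assumptions, not quoted from this paper:
ds^{2} \;=\; 2\,dx^{+}dx^{-}
  \;-\;\mu^{2}\Big(\vec{x}^{\;2}+|z_{1}|^{2}+|z_{2}|^{2}\Big)(dx^{+})^{2}
  \;+\; d\vec{x}^{\;2} + |dz_{1}|^{2} + |dz_{2}|^{2},
\qquad
F_{+1234}=F_{+5678}\,\propto\,\mu .

% Light-cone world-sheet frequencies of the massive oscillators, with \nu the mass
% parameter appearing in the wave equation (\partial_\alpha\partial^\alpha-\nu^2)=0:
\omega_{n} \;=\; \sqrt{\,n^{2}+\nu^{2}\,}.
```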
To study the field theory associated with pp-wave orbifolds with ADE singularities and their complex deformations, we consider the form (2.5) of the metric and impose, where $U$ is an element of the orbifold symmetry group and where the $UzU^{-1}$ elements belong to the same equivalence class as $z$. For the special case of the $Z_k$ abelian discrete symmetry, where $UzU^{-1}=z\exp\left(i\frac{2q\pi}{k}\right)$, the analogue of the expansion (2.8) has twisted sectors and reads as, with similar relations for $z_2(\tau,\sigma)$ with the upper indices 6 and 7 replaced by 8 and 9. Note that the metric (2.5) allows one to realise manifestly the orbifold group actions and, as we will see, permits one to read off directly the various types of fundamental and bi-fundamental matter moduli one has on the ${\cal N}=2$ field theory side, particularly in the CFT$_4$ model we are interested in here.
SU (k) pp waves geometry
There are two known kinds of ${\cal N}=2$ supersymmetric CFT$_4$'s associated with the $A_k$ singularity. This is due to the fact that there are two kinds of $A_k$ singularities one may have at the origin of complex surfaces: the first kind involving the ordinary $SU(k)$ Lie algebra classification and the other the affine $SU(k)$ one. In this section, we focus our attention on the first case. In the forthcoming section, we consider the affine one. Note in passing that while conformal invariance presents no problem in the second case, there are however extra constraint equations one should take into account for the first class of models. This feature, which is mainly associated with the inclusion of fundamental matter in addition to bifundamental matter, will be considered in [14].
Degenerate orbifold Metric
To start, recall that the neighbourhood of the singular point $x_6=x_7=x_8=x_9=0$ of the real four-dimensional space $R^4/Z_k\sim C^2/Z_k$ is just an ALE space with an $SU(k)$ singularity. In other words, in the neighbourhood of the singularity, the space $R^4\times C^2/Z_k$ may be thought of as $R^4\times T^*CP^1$. The cotangent bundle $T^*CP^1$ is known to have two toric actions [18]: $z\to e^{i\theta}z$, $w\to e^{-i\theta}w$, (3.1) and $z\to e^{i\phi}z$, $w\to w$, (3.2) where $z$ is the non-compact direction and $w$ is the coordinate of $CP^1$. If we denote by $c$ and $c'$ the two one-dimensional cycles of $T^2$ corresponding to the actions $\theta$ and $\phi$, then $T^*CP^1$ may be viewed as a $T^2$ fibration over $R^2_+$ with coordinates $|z|$ and $|w|$. The toric action has three fixed loci: At the singular point where the two cycles shrink, the product $zw$ goes, in general, to zero as, with $\zeta\to 0$. Eq(3.4) tells us, amongst other things, that the local variables to deal with the geometry of the Penrose limit of $AdS_5\times S^5/Z_k$ are Using these new variables as well as the explicit expression of the differential, the metric of the pp-wave orbifold geometry (2.5) reads near the singularity as, where the metric factors $G_{ij}$ are given by, This metric has degenerate zeros at $z=\zeta=0$, which may be lifted by deformations of the $G_{ij}$'s. In what follows, we describe a way to lift this degeneracy by using complex deformations of the ALE space singularity.
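Eq(3.4) itself is not legible in this copy. For orientation, we recall the standard local model of the $C^2/Z_k$ ($A_{k-1}$) singularity to which the discussion refers; the exact normalisation below is an assumption rather than a quotation of eq(3.4).

```latex
% Standard local model of the C^2/Z_k (A_{k-1}) singularity (normalisation assumed):
z\,w \;=\; \zeta^{\,k},
\qquad (z_{1},z_{2})\;\sim\;\big(e^{\,2\pi i/k}\,z_{1},\; e^{-2\pi i/k}\,z_{2}\big).
```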
Moduli dependent pp wave geometry
To lift the degeneracy of eq(3.8), one can use either Kahler or complex resolutions of the $SU(k)$ singularity. In the second case, this is achieved by a deformation of the complex structure of the orbifold point of $C^2/Z_k$, which amounts to replacing eq(3.4) by, where the $a_i$ are complex numbers. Note that if all the $a_i$'s are non-zero, the degeneracy is completely lifted; otherwise it is partially lifted. Note also that from the field theory point of view, the $a_i$ moduli are interpreted as the vevs of the hypermultiplets in the bifundamental representations of the ${\cal N}=2$ supersymmetric $\prod_i U(N_i)$ quiver gauge theory [15]. The ratio $z_i = a_{i-1}a_{i+1}/a_i^{2}$
Affine ADE pp waves geometry
Here we are interested in the Penrose limit of $AdS_5\times S^5$ orbifolds with affine ADE singularities. We will focus our attention on the metric building of the pp-wave geometry with the affine $A_k$ singularity. Then we give the results for the other cases. To start, recall that the affine $A_k$ singularity is given by the following holomorphic equation describing a family of complex two-surfaces embedded in $C^3$. A tricky way to handle this singularity is to use the elliptic fibration over the complex plane considered in [15,16,17]. In this method, one considers instead of eq(4.1) the following equivalent one, $v z_1^{2} + z_2^{3} + \zeta^{6} + a z_1 z_2 \zeta + \zeta^{k+1} = 0$, (4.2) where the parameter $a$ is the torus complex structure. Note that these equations are homogeneous under the change $(z_1, z_2, \zeta, v)\to(\lambda^{3} z_1, \lambda^{2} z_2, \lambda\zeta, \lambda^{k-5} v)$, allowing one to fix one of them, say $v=1$. With this constraint, one recovers the right dimension of the $SU(k)$ geometry near the origin. In what follows, we consider the case $k=2n-1$, $n\geq 2$ and set $v=1$.
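The deformed relation that replaces eq(3.4) is likewise not reproduced here. A standard form of the complex deformation of the $A_{k-1}$ singularity, consistent with the $a_i$ moduli introduced above, is recalled below as an assumed reconstruction rather than a quotation.

```latex
% Standard complex deformation of the A_{k-1} singularity (assumed reconstruction):
z\,w \;=\; \prod_{i=1}^{k}\big(\zeta-a_{i}\big),
% which reduces to the degenerate orbifold relation z w = \zeta^{k} when all a_i -> 0.
```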
Degenerate Orbifold metric
Starting from eq(4.2), which we rewrite as, where $f$ and $g$ are holomorphic functions given by, one can solve eq(4.3) as a homogeneous function of $z_2$, $\zeta$ and $v$. Using this relation, one may compute the differential $dz_1$ in terms of $dz_2$, $d\zeta$ and $dv$ as follows, where $\chi$ stands for the functions $f$ and $g$. In the coordinate patch $v=1$, where $dv=0$, the metric of the pp-wave background, near the orbifold point with affine $SU(2n-1)$ singularity, reads as, where now, To get the metric of the pp-wave geometry, one does the same analysis as we have done in subsection 4.1 for the case where $a_i=b_i=0$. The relations one gets are formally similar to those we have obtained before; one has just to replace $f$ and $g$ by $F$ and $G$ respectively.
Conclusion
The analysis we have described above may be applied to the other remaining kinds of pp-wave orbifolds with ordinary ADE and affine ADE singularities. All one has to do is identify the explicit expressions of the analogues of the holomorphic functions $F$ and $G$ and redo the same calculations. For the case of affine $SO(k)$ pp-wave orbifolds, the analogue of eqs(4.13) is given by, for $k=2n-1$. One may also write down the expression of these holomorphic functions $F$ and $G$ for the other remaining geometries. More details on these calculations, including the Kahler deformation method as well as the corresponding CFT$_4$ models, will be exposed in [14]. | 3,610.6 | 2002-10-17T00:00:00.000 | [
"Mathematics"
] |
Identification of hub genes and immune infiltration in ulcerative colitis using bioinformatics
Ulcerative colitis (UC) is a chronic inflammatory disease of the intestine, whose pathogenesis is not fully understood. Given that immune infiltration plays a key role in UC progression, our study aimed to assess the level of immune cells in UC intestinal mucosal tissues and identify potential immune-related genes. The GSE65114 UC dataset was downloaded from the Gene Expression Omnibus database. Differentially expressed genes (DEGs) between healthy and UC tissues were identified using the "limma" package in R, while their Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were determined with the clusterProfiler package. Protein–protein interaction network analysis and visualization were performed with STRING and Cytoscape. Immune cell infiltration was calculated with CIBERSORT. The relationship between hub genes and immune-infiltrated cells in UC was determined by Pearson correlation. A total of 206 DEGs were identified, of which 174 were upregulated and 32 downregulated. GO and KEGG functional classification indicated DEG enrichment in immune response pathways, including Toll-like receptor signaling, IL-17 signaling, immune system process and chemokine signaling. A total of 13 hub genes were identified. Infiltration matrix analysis of immune cells showed abundant plasma cells, memory B cells, resting CD4 memory T cells, γδ T cells, M0 and M1 macrophages, and neutrophils in UC intestinal tissues. Correlation analysis revealed 13 hub genes associated with immune-infiltrated cells in UC: CXCL13, CXCL10, CXCL9, CXCL8, CCL19, CTLA4, CCR1, CD69, CD163, IL7R, PECAM1, TLR8 and TLR2. These genes could potentially serve as markers for the diagnosis and treatment of UC.
IL7R Interleukin-7 receptor subunit alpha
TLR2 Toll-like receptor 2
TLR8 Toll-like receptor 8
PECAM1 Platelet and endothelial cell adhesion molecule 1
IBD Inflammatory bowel disease
NK Natural killer
Ulcerative colitis (UC) is a chronic inflammatory disease of the intestine, whose etiology remains poorly understood 1 . The diagnosis of UC is based on a combination of nonspecific symptoms, endoscopic findings, and histological features, which makes it sometimes difficult to discriminate UC from other diseases 2,3 . Although several drugs are available for the treatment of UC 4 , up to 15% of patients do not respond to drug therapy or have chronic colitis secondary to dysplasia, which requires surgery 5 . Therefore, there is an urgent need to better understand the pathogenesis of UC and identify more effective treatments. Autoimmune mechanisms have long been hypothesized to be involved in the pathogenesis of UC 4 . Previous studies have suggested the presence of immune cells in the intestinal mucosa of UC patients 6 . Multiple environmental factors can interfere with the microbial ecosystem of the colon and determine how gut microbes interact with immune cells, thereby provoking an uncontrolled inflammatory response and exacerbating UC symptoms 7 . Infection or dysbiosis may disrupt the natural immune tolerance of genetically susceptible individuals, causing immune imbalance in the intestinal mucosa and further affecting the onset and progression of UC 8 . It has been shown that new therapeutic approaches that modulate the gut microbiota - through the administration of bacterial strains that produce probiotic immune metabolites, the addition of specific prebiotics, and fecal microbiota transplants - can treat UC patients requiring ileal pouch-anal anastomosis (IPAA) 9 . All these findings suggest a critical role of immune cells in the pathogenesis of UC. Therefore, understanding the pathogenesis of UC from the perspective of immune infiltration may hold the key for early treatment and prevention of UC-induced deterioration. In particular, it may provide a new approach for targeted immunotherapy of UC.
Bioinformatics can reveal the molecular mechanisms of disease through large-scale gene or protein expression profiling in diseased vs. healthy tissues. Expression data are especially useful to study dynamic regulation among multiple immune cells. CIBERSORT is a commonly used analytical tool in studies of tumor immunity, whereby information about cell subsets is derived from bulk gene expression data. The method, however, has not been applied widely to investigate non-tumor immunity or to analyze immune infiltration in UC. The present study aimed to identify potential UC biomarkers and assess the level of immune cells in intestinal mucosal tissues of UC patients using CIBERSORT. Furthermore, to identify immune-related genes suitable for the diagnosis and treatment of UC, the correlation between immune cells and hub genes was calculated.
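As an illustration of the deconvolution idea underlying CIBERSORT, a minimal sketch is given below. CIBERSORT itself fits bulk profiles against the LM22 signature matrix using nu-support vector regression, so the non-negative least squares step and the file names used here are stand-ins and assumptions, not the tool's actual implementation.

```python
# Illustrative sketch of signature-based immune deconvolution (the idea behind
# CIBERSORT). CIBERSORT itself uses nu-SVR against the LM22 signature matrix;
# non-negative least squares stands in for that step here. File names and
# column layouts are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.optimize import nnls

signature = pd.read_csv("LM22_signature.txt", sep="\t", index_col=0)   # genes x 22 cell types (hypothetical path)
bulk = pd.read_csv("GSE65114_expression.txt", sep="\t", index_col=0)   # genes x samples (hypothetical path)

common = signature.index.intersection(bulk.index)
S = signature.loc[common].to_numpy()

fractions = {}
for sample in bulk.columns:
    y = bulk.loc[common, sample].to_numpy()
    coef, _ = nnls(S, y)                                   # non-negative mixing weights
    fractions[sample] = coef / coef.sum() if coef.sum() > 0 else coef

result = pd.DataFrame(fractions, index=signature.columns).T  # samples x cell types
print(result.round(3).head())
```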
Materials and methods
Microarray data. The Gene Expression Omnibus (GEO; www.ncbi.nlm.nih.gov/geo/) 10
Identification of differentially expressed genes (DEGs). The gene expression matrix of the GSE65114 dataset was analyzed with the "limma" package in R to obtain DEGs between UC and healthy samples. Briefly, |log2 fold change (FC)| > 1 and P < 0.05 were set as the selection criteria for DEGs, with log2FC < 0 indicating downregulated genes and log2FC > 0 indicating upregulated genes in the UC vs healthy comparison.
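The differential expression analysis itself was performed with limma in R; the sketch below only illustrates the thresholding step (|log2FC| > 1, P < 0.05) on a hypothetical exported results table, with assumed column names.

```python
# Illustrative thresholding of a differential-expression table. The paper uses
# limma in R; this sketch only reproduces the |log2FC| > 1 and P < 0.05 cut on
# a hypothetical results table (column names "logFC" and "P.Value" are assumed).
import pandas as pd

res = pd.read_csv("limma_results_GSE65114.csv", index_col=0)  # hypothetical export of limma topTable()

degs = res[(res["logFC"].abs() > 1) & (res["P.Value"] < 0.05)]
up = degs[degs["logFC"] > 0]      # upregulated in UC vs healthy
down = degs[degs["logFC"] < 0]    # downregulated in UC vs healthy

print(f"{len(degs)} DEGs: {len(up)} up, {len(down)} down")
```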
KEGG and GO enrichment in DEGs.
To predict the biological function of DEGs, we performed functional enrichment analysis. GO analysis revealed that the DEGs were enriched mainly in extracellular region, regulation of response to stimulus, inflammatory response, immune system process and defense response (Fig. 3). KEGG pathway analysis indicated that upregulated genes were significantly enriched in viral protein interaction with cytokine and cytokine receptor, cytokine-cytokine receptor interaction, complement and coagulation cascades, chemokine signaling pathway, Toll-like receptor signaling pathway, and IL-17 signaling pathway (Fig. 4A); whereas downregulated genes were enriched mainly in proximal tubule bicarbonate reclamation and PPAR signaling pathway (Fig. 4B).
Distribution of immune-infiltrated cells.
The microarray was screened using the CIBERSORT inverse convolution method with P < 0.05, resulting in 12 healthy intestinal tissue samples (top of the heat map) and 16 UC intestinal tissue samples (bottom). Plasma cells, memory B cells, resting CD4 memory T cells, γδ T cells, M0 macrophages, M1 macrophages, and neutrophils were all more abundant in colonic mucosal tissues of patients with active UC than in healthy controls (Fig. 5A). Differences in immune infiltrating cells between the intestinal tissues of healthy and UC patients were visualized by a bar plot, with statistically significant differences at P < 0.05. Plasma cells, memory B cells, resting CD4 memory T cells, γδ T cells, M0 macrophages, M1 macrophages, and neutrophils were all differentially elevated in the intestinal tissues of UC patients (Fig. 6B).
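A minimal sketch of the per-cell-type group comparison behind such a bar plot is shown below; the original text does not state which statistical test was used, so the Mann-Whitney U test and the input file names here are assumptions.

```python
# Per-cell-type comparison of CIBERSORT fractions between UC and healthy samples.
# The paper only states "statistically significant differences at P < 0.05"; the
# Mann-Whitney U test used here and the input files are assumptions/placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu

fractions = pd.read_csv("cibersort_fractions.csv", index_col=0)  # samples x cell types (hypothetical)
groups = pd.read_csv("sample_groups.csv", index_col=0)["group"]  # "UC" or "healthy" per sample (hypothetical)

for cell_type in fractions.columns:
    uc = fractions.loc[groups == "UC", cell_type]
    healthy = fractions.loc[groups == "healthy", cell_type]
    stat, p = mannwhitneyu(uc, healthy, alternative="two-sided")
    flag = "*" if p < 0.05 else ""
    print(f"{cell_type:30s} p={p:.4f} {flag}")
```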
Correlation between hub genes and immune-infiltrated cells in UC. The relationship between hub genes and immune-infiltrated cells, which differed between UC and control samples, was evaluated by Pearson correlation (Fig. 7). M1 macrophages displayed a positive correlation with CXCL13 (r = 0.
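A minimal sketch of the Pearson correlation step between hub-gene expression and immune-cell fractions is given below; the input tables and file names are hypothetical placeholders.

```python
# Pearson correlation between hub-gene expression and immune-cell fractions,
# mirroring the analysis described in the text. Input tables are hypothetical;
# scipy.stats.pearsonr returns (r, p) for each gene / cell-type pair.
import pandas as pd
from scipy.stats import pearsonr

expr = pd.read_csv("hub_gene_expression.csv", index_col=0)        # samples x hub genes (hypothetical)
fractions = pd.read_csv("cibersort_fractions.csv", index_col=0)   # samples x cell types (hypothetical)
samples = expr.index.intersection(fractions.index)

hub_genes = ["CXCL13", "CXCL10", "CXCL9", "CXCL8", "CCL19", "CTLA4", "CCR1",
             "CD69", "CD163", "IL7R", "PECAM1", "TLR8", "TLR2"]

rows = []
for gene in hub_genes:
    for cell in fractions.columns:
        r, p = pearsonr(expr.loc[samples, gene], fractions.loc[samples, cell])
        rows.append({"gene": gene, "cell_type": cell, "r": r, "p": p})

corr = pd.DataFrame(rows).sort_values("p")
print(corr[corr["p"] < 0.05].head(10))
```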
Discussion
UC originates from a disruption in the balance between the host's mucosal immunity and intestinal bacterial flora, resulting in an abnormal immune response to commensal non-pathogenic bacteria 6 . In the present study, gene expression data were obtained from the GEO database, which identified 174 upregulated and 32 downregulated DEGs in UC tissues. GO and KEGG functional classification indicated that DEGs were enriched mainly in immune response pathways, such as Toll-like receptor signaling, IL-17 signaling, and immune system process and chemokine signaling. Toll-like receptor signaling plays an important role in the pathogenesis of inflammatory bowel disease (IBD) 18 ; whereas IL-17 signaling is involved in the development of colonic tissue damage and inflammation during UC 19 . CD4 T cells and natural killer (NK) T cells can promote the release of Th2-associated cytokines and Th17-associated pro-inflammatory cytokines, aggravating the UC intestinal 31 . Earlier studies suggested that, in the presence of T cells, numerous types of immune cells might be triggered to produce more chemokines, followed by neutrophils infiltration into the colonic mucosa, thus directly and/or indirectly exacerbating the severity of UC-like chronic colitis 32 . γδ T cells expressing the Vδ2 chain produce IL-17 in the intestine of patients with long-standing IBD and are involved in the chronic inflammatory process 33 . Recent studies have shown that the global immune cell landscape of UC tissues is characterized by an increase in M0 macrophages and neutrophils 34 .
Macrophages are key effector cells of the innate immune system and are crucial for intestinal mucosal stability 35,36 .
In addition, they may serve as antigen-presenting cells and play a critical role in the initiation of the immune response 37 . Zhuang et al. highlighted the abnormalities of M1/M2 macrophage polarization at the onset and during development of UC 38 . Activation, migration, and degranulation of neutrophils are important mechanisms of intestinal injury in IBD 39 . Here, the identified UC-associated immune-infiltrated cells were involved in the progression of UC, as indicated by their correlation with the colonic mucosal tissue of patients with UC. Nevertheless, the intricate network of interactions and regulation among immune cells means that more research is required to determine their exact role in UC. According to the PPI network of DEGs, 13 out of 446 genes displayed an elevated degree of interaction and were upregulated in UC patients: CXCL13, CXCL10, CXCL9, CXCL8, CCL19, CTLA4, CD69, CD163, CCR1, PECAM1, IL7R, TLR8 and TLR2. Chemokines can significantly increase chronic inflammation and intestinal tissue destruction in IBD through their ability to induce chemotaxis and leukocyte activation 40 . CXCL8 (IL-8)
is an ELR + chemokine secreted by neutrophils, macrophages, and intestinal epithelial cells [41][42][43] . CXCL8 has a strong chemotactic effect on neutrophils and activates their metabolism and degranulation 44 . In addition to being a feature of UC pathophysiology, neutrophils infiltration in the intestinal mucosa is also a functional indicator of adaptive immunity 45 . Both CXCL9 and CXCL10 belong to the family of ELR-chemokines, and target CXCR3 as their receptor 46 . CXCL9 is highly expressed in the intestinal mucosa of mice with experimental colitis and in UC patients (especially in lymphocytes, macrophages, and epithelial cells) 22 . CXCL10 is a potent chemokine that is primarily secreted by monocytes and macrophages (including M1 macrophages) in IBD 47,48 . CXCL9 and CXCL10 recruit mostly Th1 cells, monocytes, and NK cells 49 . CXCL13 belongs to the ELR-chemokine family and targets CXCR5 as its receptor, and its mRNA expression levels are elevated in intestinal tissues of UC models 50 . In contrast to ELR + CXC chemokines, ELR-CXC chemokines lack chemotactic activity on neutrophils; instead, they are highly responsive to memory T cells and NK cells 50 . A humanized anti-CCL21 monoclonal antibody has been identified as a potent marker for the diagnosis of active IBD and as a possible therapeutic agent for the prevention of IBD recurrence 51 . CCL19 is involved in the progression of UC 52,53 . Given that CCL19 induces the activation of MEK1-ERK1/2 and PI3K-AKT cascades in M1 macrophages 54 , CCL19 is believed to exacerbate the progression of UC by inducing chemotaxis in M1 macrophages. CCR1 is expressed on neutrophils, contributing significantly to tissue damage and mucosal dysfunction in UC 53,55 . In summary, the family of chemokines/ www.nature.com/scientificreports/ chemokine-receptors was found here to correlate positively with M0/M1 macrophages and neutrophils. This indicates that immune-infiltrated cells and UC hub genes collectively influence UC through immune factors. CTLA4 serves as a negative regulator of the immune system, and is highly expressed on Tregs and activated T cells 56,57 . Currently, the level of CTLA4 expression in UC remains unclear. Wang et al. 58 and Magnusson et al. 59 suggested that CTLA4 expression in T cells was lower in UC patients than in healthy individuals. However, other studies have shown that CTLA4 mRNA expression was significantly higher in colonic mucosal tissue from patients with active UC compared with controls devoid of inflammation 60 . Interestingly, in our study, CTLA4 was upregulated in patients with UC and correlated positively with Tregs. Although Abatacept™ (CTLA4-Ig) has been effective against psoriasis 61 and rheumatoid arthritis 62 through its inhibitory effect on T cell activation, there is no evidence from a phase III clinical trial 63 of its beneficial effect on UC. CTLA4 may be upregulated in patients with UC via a feedback mechanism, but may fail to exert negative immune regulatory effects due to defects in its downstream pathways or reduced activation. Furthermore, blocking CTLA4 or a Treg-specific reduction of CTLA4 expression lead to increased numbers of plasma cells and memory B cells after vaccination 64 . This evidence points to the involvement of CTLA4 in regulating the above immune cells, but whether it interacts and participates in the immune regulatory mechanisms of UC remains to be determined.
CD69 is one of the first surface antigens expressed by T lymphocytes after activation, and its expression can act as a co-stimulatory signal to promote further activation and proliferation of T cells 65 . CD69 is expressed on different leukocytes, including newly-activated lymphocytes, certain subtypes of memory T cells, infiltrating lymphocytes isolated from patients with chronic inflammatory disorders, Tregs, and NK cells [65][66][67] . The immunoregulatory function of CD69 involves controlling the differentiation balance of Th/Treg cells and enhancing the suppressive activity of Tregs 68 . However, the exact mechanism of the interaction between CD69 and UC requires further study. Soluble CD163 is a specific macrophage activation marker that is reduced by anti-TNF-α antibody treatment in active inflammatory bowel disease 69 . Defining the landscape of mononuclear phagocytes in mesenteric lymph nodes provides evidence for the expansion of CD163 + Mono/MΦ-like cells in UC, highlighting the distinction between UC and CD 70 .
PECAM1 (CD31) is expressed on the surface of endothelial cells, platelets, monocytes, neutrophils, T cell subsets, B cells, and dendritic cells, and has been reported also in plasma cells [71][72][73] . The interaction between endothelial cells and leukocytes is a key step in the inflammatory response, and PECAM1 enables leukocytes to enter the site of inflammation and cause tissue damage 74 . Thus, PECAM1 has an important role in inflammatory microcirculation injury. In the present study, PECAM1 correlated positively with plasma cells, which suggested that plasma cells probably contributed to inflammation and tissue damage in UC through PECAM1 expression. A further possibility is that PECAM1 may promote plasma cells infiltration into inflammatory sites of the intestine in UC.
Interleukin-7 (IL-7) is a cytokine produced mainly by epithelial and stromal cells that regulates T lymphocyte homeostasis 75 . Almost all conventional mature T lymphocytes express high levels of IL-7 receptor (IL-7R), with naturally occurring Tregs being a special exception 76 . In healthy colon biopsies, intestinal epithelial cells produce IL-7 and mucosal T lymphocytes express IL7R 77 . A study has shown that genetic locus variants in the IL7R gene are associated with UC susceptibility 78 . In addition, elevated expression of IL-7 signaling pathway genes in blood CD8 + T cells at diagnosis was significantly associated with the course of IBD disease 78 .
TLR2 upregulation in the gut of UC patients has been associated with increased antigenic stimulation of inflammatory and immune pathways activated by ligand binding 79 . Macrophages in the intestinal mucosa can rapidly engage in Toll-like receptor-mediated inflammatory responses to prevent pathogen invasion, but these innate immune responses can also trigger UC 80 . The documented increase in TLR2 expression in macrophages from IBD patients 81 corroborates our finding of a positive correlation between TLR2 and M0/M1 macrophages. Hence, we hypothesize that M0/M1 macrophages may mediate the inflammatory response in UC through TLR2. TLR8 is a key component of innate and adaptive immunity, and it has been shown that its expression is increased in UC patients and that its mRNA levels correlate positively with the severity of intestinal inflammation 75 .
We have mapped a proposed mechanism for the main results of this study (Fig. 8). Certainly, the above speculation requires further studies to verify the role of the immune response in UC via the mutual regulation between hub genes and immune-infiltrated cells. The present study also has some limitations. First, it was based on the GEO database, which is a secondary mining and analysis database of previously published datasets. Hence, the experimental results may differ from the conclusions of previous experiments, most likely due to biased data analysis caused by the small sample size. Second, the CIBERSORT deconvolution algorithm is based on limited genetic data, which may lead to inaccurate results due to different disease predisposing factors and the plasticity of disease phenotypes. Here, CIBERSORT was used to identify potential immune-related genes or immune infiltrating cells in UC. In addition, group centrality metrics 82 are particularly useful because they describe the importance common to all hub genes and can be used to identify hub genes with tools such as keyplayer 83,84 (http://www.analytictech.com/keyplayer/keyplayer.htm) or Pyntacle 85 . Although we did not use other network tools or methods in this study, we may incorporate them in future work. Nevertheless, our study may still provide compelling evidence for further research on the potential of the identified immune-infiltrated cells or immune-related genes for the treatment and diagnosis of UC (Supplementary file).
(Figure labels: colonic lumen; epithelial cells) | 3,955 | 2023-04-13T00:00:00.000 | [
"Medicine",
"Biology",
"Computer Science"
] |
Effective Prediction of Rheumatoid Arthritis (RA) Diagnosis Using Hybrid Harmony Search with Adaptive Neuro Fuzzy Inference System
Rheumatoid Arthritis (RA) is one of the severe autoimmune diseases that affect the entire human body: it triggers the immune system to attack the lining of the joints and causes severe inflammation of the synovium. The continuous erosion of the joint lining leads to permanent loss of the joint; given this severity, the early prognosis of the disease is a significant and inevitable process. However, the signs and symptoms of the disease are often uncertain. The symptoms of RA are similar to those of other inflammatory diseases, so only highly experienced experts can identify the disease in its early stage. To support clinicians and technicians in the early prognosis of the disease, a computer-aided decision support model based on the Harmony Search-Adaptive Neuro Fuzzy Inference System (HS-ANFIS) is presented in this study. The Harmony Search algorithm is employed to select the optimal features, and ANFIS is adopted to perform classification. To demonstrate the effectiveness of the model, metrics such as Accuracy, Sensitivity, Specificity, Precision, Recall, F-measure, Positive Predictive Value, Negative Predictive Value, Root Mean Square Error, and Mean Absolute Error are employed and evaluated in the MATLAB simulation environment. The proposed HS-ANFIS outperformed the other models developed in this research and existing works in the literature.
INTRODUCTION
Rheumatoid Arthritis (RA) is a chronic systemic inflammatory disease [1,2] that affects the joints and muscles, resulting in marked impairment of joint structure and function.
According to the World Health Organization (WHO) report, the prevalence of RA is 0.3% to 1% worldwide, and in developed countries the disease is becoming a major threat. In India, the prevalence of RA is estimated to be 0.92%, and women are more vulnerable to the disease than men [3]. According to the WHO report, there are estimated to be more than 100 types of arthritis [4], and arthritis affects more than one in four adults in the population [5]. According to the Arthritis Foundation of America, one in three adults has arthritis. Besides the considerable challenges of treating RA, clinicians also face a high degree of difficulty in diagnosing the disease.
RA is essentially diagnosed by satisfying the American College of Rheumatology (ACR)/European League Against Rheumatism (EULAR) classification criteria for RA [6]. The diagnosis of RA in suspected patients is carried out by undertaking a series of laboratory tests and analysing a brief case history. Initially, RA starts with severe pain without any swelling; this symptom is non-specific and also appears in conjunction with other immune diseases. The major clue for suspecting RA is a prolonged period of morning stiffness in a limited number of joints. Laboratory test results can improve diagnostic sensitivity, and they are obtained by examining the level of ESR (erythrocyte sedimentation rate), CRP (C-reactive protein), RF (rheumatoid factor), anti-CCP (anti-cyclic citrullinated protein), ANA (antinuclear antibody), blood counts, and other imaging tests.
The ESR represents the rate at which blood settles through a liquid column, and it helps to distinguish inflammatory from non-inflammatory diseases. However, the ESR level varies with age [7] and may also be elevated in other conditions such as malignancy, pathogenic infections, and related conditions. The rheumatoid factor represents autoimmune proteins present in the body; about 5% to 10% of people have an elevated level of rheumatoid factor, and this proportion rises with age [8]. The most effective laboratory test for RA diagnosis is measuring anti-CCP antibodies. ELISA is the test used to determine the anti-CCP level, and it is a more specific test for reporting positive RA than rheumatoid factor [9]. Another test is based on estimating C-reactive protein, a protein released by the liver during inflammatory diseases; with this test, the patient may be reported as affected by Systemic Lupus Erythematosus (SLE) or RA. The ANA has a specific disease association with RA: about 98% of patients affected with SLE report positive ANA, and those affected with other connective tissue diseases show rates of 40% to 70%.
Still, it has been reported that ANA may be present in about 5% of healthy individuals [10]. So, with the ANA test, the patient's health history should also be analysed before reporting positive RA. Thus, it is clear that a single laboratory test is not enough to diagnose RA; the patient is expected to undergo a series of various laboratory tests. Based on all the test results, investigated together with the patient's health history, age, genetic factors, and so on, the clinician can come to a valid conclusion of reporting positive RA or non-RA. Meanwhile, if the diagnostic procedure is delayed, the course of the disease may become severe, which may pave the way for other risk factors and may even prove fatal.
The design of an intelligent decision-making model for early diagnosis of RA is always an open field of research. Numerous research studies have been presented in the existing literature. Still, the search for the best model has not ended, because achieving better classification accuracy remains challenging. In this research article, a computer-aided decision support model based on the Harmony Search-Adaptive Neuro Fuzzy Inference System is developed, and its performance is evaluated by employing metrics such as classification accuracy, sensitivity, specificity, precision, recall, F-measure, PPV, NPV, RMSE, and MAE.
Research Contribution
The contributions made in the proposed work are as follows: to collect and preprocess a real-time dataset that includes both RA and non-RA patients.
Feature selection is performed by employing the Harmony Search strategy. The remaining sections of this paper are organized as follows: related works are presented in Section 2, the methodology is discussed in Section 3, the experimental modelling is discussed in Section 4, the performance metrics are presented in Section 5, the simulation results are briefed in Section 6, and based on the investigation of the results, inferences and conclusions are presented in Section 7.
RELATED WORKS
RA is an autoimmune disease which affects the joints of the hands, legs, wrists, knees, ankles, etc. The disease is also termed a systemic disease, as it affects not only the joints but also other organs of the body such as the lungs and heart. To report a patient with RA, numerous clinical examinations covering the essential domains and tests of rheumatology are needed. Indeed, experienced and highly skilled manpower is needed to avoid late or wrong diagnosis [11]. A study of the mortality risk associated with ILD and RA identified that patients affected by both ILD and RA showed higher mortality than patients affected by RA without ILD [12]. The risk factors associated with RA include the development of malignant lymphoma, cardiovascular disease, atherosclerosis, temporomandibular disorders, depression among women, and voice disorders [13][14][15][16][17][18][19][20][21][22][23][24].
Considering the major stress patients undergo, researchers have proposed many intelligent decision support models for the past few decades for prognosis of the disease.
These include machine learning methods for early prediction of RA based on electronic health records [25][26][27][28][29], a deep learning strategy applied to X-ray images [30], and an ensemble approach for disease gene identification, in which EPU achieved an accuracy of 84.8% [31]. A model named CS-Boost, which combines a Decision Stump as the weak learner with Cuckoo Search, has been proposed for early prognosis of the disease [32]. An AdaBoost-based classifier model has been used for early diagnosis of fibromyalgia and arthritis [33], and numerous neural-network-based diagnosis models exist for arthritis; neural-network-based RA diagnosis is investigated in [34]. A Java-based software tool has been presented which performed pairwise alignment and analysis based on mutability score [35].
The risk factors associated with RA and the existing clinical decision support models for RA diagnosis are presented in this survey of related works. The presented models mainly concentrate on classification accuracy as the performance metric for evaluating the model, which is inadequate for validating a medical diagnosis procedure. In this article, numerous metrics are employed to validate the model. Further, not many studies are available in this field of RA diagnosis research, because the complex nature of the disease symptoms imposes a huge burden on attaining good outcomes. In this article, an ANFIS model is developed to perform classification because of its adaptive nature in handling the high level of uncertainty in disease diagnosis [36][37][38][39][40][41][42][43][44], and feature selection is a significant step in the design of a classification model. Here, the Harmony Search algorithm is employed to select the necessary features, due to its advantages of simple implementation, better learning ability, and improved convergence [45][46][47][48][49].
Harmony Search algorithm
The Harmony Search (HS) algorithm is a meta-heuristic that emulates the improvisation process of musicians. Each performer plays a note while searching, from end to end, for the notes that form the best harmony. The fundamental goal of the algorithm is to reduce the overall complexity that occurs during the search process. The HS method is inspired by the underlying principles of harmony improvisation [50]. The algorithm flow involves a parameter HMCR, called the harmony memory considering (accepting) rate. If the HMCR is too low, only a few of the best harmonies are selected and convergence is too slow. If this rate is extremely high (close to 1), almost all harmonies in the harmony memory are reused, so other harmonies are not explored well and possibly wrong solutions are obtained. Consequently, we typically use HMCR = 0.9. In the standard algorithm, the pitch can be adjusted linearly or nonlinearly, and linear adjustment is used.
The pitch adjustment can be written as x_new = x_old + PB · ε, where x_old is the current pitch or solution from the harmony memory, the pitch adjustment is determined by a pitch bandwidth PB and a pitch adjusting rate PAR, and ε is a random number in the range [-1, 1]. We can assign a pitch-adjusting rate PAR to control the degree of the adjustment; in many applications we generally use PAR = 0.1 ~ 0.5. The three components of harmony search can be outlined as the pseudo code shown in Algorithm 1 and the flow diagram in Figure 1. We can see that the probability of randomisation is 1 − HMCR, and the actual probability of pitch adjustment is HMCR × PAR.
Figure 1. Harmony search flow diagram
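To make the improvisation procedure concrete, the following minimal Python sketch implements the three HS operations described above (harmony memory consideration, pitch adjustment, and random selection) for a generic minimization problem. The parameter names (HMCR, PAR, PB) follow the text, but the objective function, bounds, and numeric settings are placeholder assumptions for illustration, not the fitness function or parameters used in this study.

```python
import random

def harmony_search(objective, dim, bounds, hm_size=10, hmcr=0.9, par=0.3,
                   pb=0.05, iterations=1000):
    """Minimal Harmony Search: minimizes `objective` over `bounds`^dim."""
    lo, hi = bounds
    # Initialize the harmony memory (HM) with random solutions
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    scores = [objective(h) for h in hm]

    for _ in range(iterations):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # harmony memory consideration
                value = random.choice(hm)[d]
                if random.random() < par:              # pitch adjustment
                    value += pb * random.uniform(-1, 1)
            else:                                      # random selection
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        new_score = objective(new)
        worst = max(range(hm_size), key=lambda i: scores[i])
        if new_score < scores[worst]:                  # replace the worst harmony
            hm[worst], scores[worst] = new, new_score

    best = min(range(hm_size), key=lambda i: scores[i])
    return hm[best], scores[best]

# Example: minimize a simple sphere function in 5 dimensions
best_x, best_f = harmony_search(lambda x: sum(v * v for v in x), dim=5, bounds=(-5, 5))
print(best_x, best_f)
```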
Adaptive Neuro-Fuzzy Inference System (ANFIS)
The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a hybrid structure combining a neural framework and fuzzy logic [51]. The rule base contains fuzzy if-then rules of the Takagi-Sugeno type, as follows: if x is A and y is B then z = f(x, y), where A and B are fuzzy sets in the antecedent and z = f(x, y) is a crisp function in the consequent. In most cases f(x, y) is a polynomial in the input variables x and y; however, it can be any other function that describes the output of the system within the fuzzy region. When f(x, y) is constant, a zero-order Sugeno fuzzy model is formed, which may be viewed as a special case of the Mamdani fuzzy inference system in which each rule consequent is specified by a fuzzy singleton. If f(x, y) is taken to be a first-order polynomial, a first-order Sugeno fuzzy model is formed. For a first-order, two-rule Sugeno fuzzy inference system (Figure 2), the two rules may be expressed as: Rule 1: If x is A1 and y is B1 then f1 = p1x + q1y + r1 (4); Rule 2: If x is A2 and y is B2 then f2 = p2x + q2y + r2 (5). In this inference system, the output of each rule is a linear combination of the input variables plus a constant term. The final output is the weighted average of each rule's output.
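As a concrete illustration of the two-rule first-order Sugeno system in Eqs. (4)-(5), the sketch below evaluates the weighted-average output using Gaussian membership functions. The membership parameters and rule coefficients (p, q, r) are arbitrary placeholder values for demonstration, not the trained ANFIS parameters of this study.

```python
import math

def gauss(x, mean, sigma):
    """Gaussian membership function."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def sugeno_two_rules(x, y):
    # Rule firing strengths (product T-norm of the antecedent memberships)
    w1 = gauss(x, mean=0.0, sigma=1.0) * gauss(y, mean=0.0, sigma=1.0)  # A1, B1
    w2 = gauss(x, mean=2.0, sigma=1.0) * gauss(y, mean=2.0, sigma=1.0)  # A2, B2
    # First-order consequents: f_i = p_i*x + q_i*y + r_i
    f1 = 1.0 * x + 0.5 * y + 0.1
    f2 = -0.3 * x + 1.2 * y + 0.7
    # Layer 5: the overall output is the weighted average of the rule outputs
    return (w1 * f1 + w2 * f2) / (w1 + w2)

print(sugeno_two_rules(1.0, 1.5))
```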
Layer 5: Overall Output
where n is the dimension of the problem. Stage 5: Improvise a new candidate from the HM by using three principles: harmony memory consideration, pitch adjustment, and random selection.
Dataset Construction:
Reliable dataset construction is an essential part of developing a disease diagnosis model using artificial intelligence techniques. At present, there is no existing dataset available for Rheumatoid Arthritis, so data collection is the first step of the proposed study. In this work, the clinical data were collected from various outpatient units in Coimbatore, India. The dataset is summarized in Table 2; it has 20 features, of which the proposed strategy optimally selected six.
Normalisation
Normalisation techniques are used to map the data to a common scale. In this work, Z-score normalisation, also called zero-mean normalisation, is used: the RA data are normalised based on the mean and standard deviation of each feature.
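A minimal sketch of the Z-score (zero-mean) normalisation step, assuming the RA data are held in a NumPy array with one row per patient and one column per feature; the numbers shown are illustrative only, not values from the collected dataset.

```python
import numpy as np

def z_score_normalise(X):
    """Column-wise zero-mean, unit-variance scaling: z = (x - mean) / std."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0               # guard against constant features
    return (X - mean) / std

# Toy 3-patient, 2-feature matrix (values are illustrative only)
X = np.array([[20.0, 5.0],
              [35.0, 12.0],
              [50.0, 30.0]])
print(z_score_normalise(X))
```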
Data Segregation
After completing the normalisation process, the dataset is divided into two portions by percentage. Dataset splitting is the procedure of dividing the entire dataset into two portions; the first portion is the training data and contains 80% of the N samples.
Training and Testing phase
The pre-processed data are fed into the Harmony Search algorithm, which selects the optimal features. The selected features should satisfy the constraint that the number of selected features is minimal while the accuracy is maximised; the cost function is presented as follows: F = α · Acc + β · (1 − L/N), where 'Acc' is the accuracy of the classification model, 'L' is the length of the selected attribute subset, N represents the total number of features, α indicates the weight given to classification accuracy relative to feature selection quality, α ∈ [0, 1], and β = 1 − α.
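A sketch of this wrapper-style cost function, assuming the common form F = α·Acc + β·(1 − L/N) with β = 1 − α. The accuracy value would normally come from an ANFIS model trained on the selected subset; here it is passed in as a plain number, so this is an illustrative helper rather than the paper's implementation.

```python
def fitness(selected_mask, accuracy, alpha=0.9):
    """Combine classification accuracy with feature-subset compactness.

    selected_mask : list of 0/1 flags, one per feature (length N)
    accuracy      : accuracy of the classifier trained on the selected features
    alpha         : weight of accuracy; beta = 1 - alpha weights subset size
    """
    n_total = len(selected_mask)
    n_selected = sum(selected_mask)
    beta = 1.0 - alpha
    return alpha * accuracy + beta * (1.0 - n_selected / n_total)

# Example: 6 of 20 features selected, hypothetical accuracy of 0.95
print(fitness([1] * 6 + [0] * 14, accuracy=0.95))
```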
In the testing phase, each new RA record is analysed, and its principal features are located and compared with the principal features of the trained RA data. If matches are found, the data is classified by the HS-ANFIS according to the previously defined rules. Initially, the test query RA data is received from the user, and then feature extraction is done. Next, the proposed HS-ANFIS classifier is applied to the given query data to flag it as 'Normal' or 'RA'. The parameters of the proposed HS-ANFIS technique for RA disease classification are presented in Table 3.
PERFORMANCE METRICS
The PPV and NPV describe the performance of a diagnostic test. The comparison results are shown in Figure 5. The accuracy of the HS-ANFIS is 4.47% higher than that of GWO-ID3, 7.47% higher than that of PSO-ID3, and 32.84% higher than that of GWO-SVM. The sensitivity of the HS-ANFIS is 2.18% higher than that of GWO-ID3, 3.38% better than that of PSO-ID3, and 5.2% better than that of GWO-SVM. The specificity of HS-ANFIS is 22.18% better than that of GWO-ID3, about 32.79% higher than that of PSO-ID3, and 44.39% better than that of GWO-SVM.
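For reference, the sketch below computes the evaluation metrics used in this comparison (accuracy, sensitivity/recall, specificity, precision/PPV, NPV, F-measure) from confusion-matrix counts. The counts shown are made-up numbers for demonstration, not results from this study.

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)          # equals PPV
    npv         = tn / (tn + fn)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                ppv=precision, npv=npv, f_measure=f_measure)

# Hypothetical counts: 45 true positives, 40 true negatives, 5 FP, 10 FN
print(diagnostic_metrics(tp=45, tn=40, fp=5, fn=10))
```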
The recall is improved by 2.18%, 3.38%, and 5.214% compared with GWO-ID3, PSO-ID3, and GWO-SVM, respectively. The precision is 2.082% higher than that of GWO-ID3, 4.89% higher than that of PSO-ID3, and 33.34% higher than that of GWO-SVM. The F-measure is 2.43% better than that of GWO-ID3, 4.14% better than that of PSO-ID3, and 21.84% higher than that of the GWO-SVM strategy.
Similarly, the RMSE value is significantly decreased compared with the other models, with a better kappa score. The performance of the HS-ANFIS is compared with existing works in the literature, and the metric results obtained for the given training and testing datasets are reported in Tables 6 and 7. The training accuracy of the proposed HS-ANFIS is 17.1% better than that of C4.5, 8.8% higher than that of PSO-C4.5, 9.3% higher than that of GWO-C4.5, 3.7% better than that of HGWO-C4.5 for the same datasets employed [25], and 16.9% higher than that of CS-Boost [27]. The sensitivity of the proposed HS-ANFIS is 20% better than that of C4.5, 11.8% higher than that of PSO-C4.5, 9.9% better than that of GWO-C4.5, 3.9% higher than that of HGWO-C4.5, 10.1% higher than that of REACT, and 19% higher than that of CS-Boost. The specificity is 6.6% better than that of C4.5, 7.4% better than that of PSO-C4.5, 5.6% better than that of GWO-C4.5, 5.7% higher than that of HGWO-C4.5, and 4.6% higher than that of the CS-Boost strategy. The computational time of the proposed HS-ANFIS is significantly reduced compared with the other models employed in the study.
The training response of the proposed HS-ANFIS is presented in Table 6. The testing accuracy is 24% better than that of C4.5, 14% higher than that of PSO-C4.5, 13.3% better than that of GWO-C4.5, 14.4% better than that of HGWO-C4.5, 20.2% higher than that of REACT, and 23.3% higher than that of the CS-Boost strategy employed for comparison. The sensitivity is 24.1% higher than that of C4.5, 16.9% better than that of PSO-C4.5, 15.2% better than that of GWO-C4.5, 6% higher than that of HGWO-C4.5, 15% higher than that of REACT, and 23.7% better than that of CS-Boost. The specificity of the proposed HS-ANFIS is 4.06% better than that of C4.5, 1.36% higher than that of PSO-C4.5, 1.1% better than that of GWO-C4.5, 1.46% higher than that of HGWO-C4.5, 4.06% higher than that of REACT, and 2.36% better than that of CS-Boost. The testing time is considerably improved for the proposed strategy. Moreover, comparing the training and testing performance, the proposed HS-ANFIS has better generalisation ability than the other models in the existing literature.
AUTHOR CONTRIBUTIONS
The authors contributed to each part of this paper equally. The authors read and approved the final manuscript.
COMPLIANCE WITH ETHICAL STANDARDS
Funding: No funds, grants, or other support was received. | 3,987.2 | 2021-03-02T00:00:00.000 | [
"Medicine",
"Computer Science",
"Engineering"
] |
Removal of BTX Contaminants with O3 and O3/UV Processes
The legal basis for the monitoring of priority and priority hazardous substances in water, sediment, and biota follows from Directive 2008/105/EC, which defines the good chemical status to be achieved by all Member States together with the Water Framework Directive 2000/60/EC. The BTX compounds are considered to be the most toxic components of gasoline. Thus, organic petroleum components can pose a serious problem to public health and the aquatic environment. The effect of ozone and ozone/UV on the degradation of BTX in a model water was studied. The results indicate that the highest BTX removal rates were observed during the first 5 min of the process for all investigated pollutants. Treatment efficiencies above 90% were observed for all investigated pollutants after 40 min of ozonation. The results show a significant contribution of stripping to the removal of BTX components. Higher overall efficiency was observed for the O3/UV process after subtracting the contribution of the stripping process. The application of the investigated processes appears to be a promising procedure for the removal of petroleum aromatic hydrocarbons from the aquatic environment. However, for practical application, an improvement of the process removal efficiency and an investigation of the impact of ozonation intermediates and products on aquatic microorganisms are required.
Introduction
The adoption of the Framework Directive on water [1] provides a policy tool that enables sustainable protection of water resources. The Decision No. 2455/2001/EC of the European Parliament and the Council of November 2001 [2] established the list of 33 priority substances or group of substances, including the priority hazardous substances, presenting a significant risk to water pollution or via the aquatic environment including such risks to water used for the abstraction of drinking water.
Hazardous substances are defined as substances or groups of substances that are toxic, persistent, and liable to bioaccumulation, and other substances or groups of substances which give rise to an equivalent level of concern. The EC member countries have extended this list with pollutants relevant for individual countries. Thus, in the supplement to the Water Act [3], altogether 59 substances relevant for the Slovak Republic (SR) have been identified.
The BTEX contaminants consist of benzene, toluene, ethylbenzene, and three isomers of xylene. These compounds are volatile organic compounds (VOCs) found in petroleum derivatives such as petrol (gasoline). They represent one of the main groups of soluble organic compounds present in refinery wastewater. They are the most toxic components of gasoline. These substances can lead to serious health problems, ranging from irritation of the eyes, skin, and mucous membranes to a weakened nervous system, decreased bone marrow function, and cancer. Benzene in particular is highly toxic. The World Health Organization classifies the substance as carcinogenic. It is also on the list of priority substances [4].
Many oil substances have acute toxic effect on aquatic microorganisms with possible chronic consequences [5].
Commonly used wastewater treatment processes usually apply physical and physicochemical processes. Thus, the discharge of organic pollutants may create some environmental problems, particularly at the micro level. The aromatic oil fraction consists mainly of polyaromatic hydrocarbons (PAHs) and is more toxic and persistent than the aliphatic hydrocarbons [6]. Leakages, including the release of petroleum products, e.g., gasoline, diesel fuel, lubricating oil, and heating oil, from leaking oil tanks are the most frequent sources of soil and groundwater contamination with BTEX substances. They are polar and readily soluble, and thus they are able to penetrate into soil and groundwater and cause serious environmental problems. These compounds have acute and long-term toxic effects [4].
BTEX are among the most frequently detected contaminants in US public drinking-water systems that rely on groundwater sources [7]. These organic compounds make up a significant percentage of petroleum. The most contaminated locality of hazardous BTEX substances in the Slovak Republic is the airport at Sliač, Sliač-Vlkanová territory, contaminated by the Soviet Army, and the gas station in Rajecké Teplice where BTEX contaminants were identified as dominant in the groundwater. BTEX were also found in groundwater in Bratislava due to poor technical conditions of technological equipment (old stocks of aviation fuel) and the subsequent uncontrolled release of oil into the rock mass at the Airport of M. R. Štefanik [8].
Ozone is a very powerful oxidizing agent (E° = 2.07 V). Ozone may react with organic compounds in two ways: by direct reaction as molecular ozone or by indirect reaction through formation of secondary oxidants like free radical species [9][10][11]. In practice, both mechanisms may occur depending on the type of chemical wastewater pollution.
At low pH, the predominant reaction mechanism is the direct electrophilic attack by molecular ozone [12], i.e., ozonolysis.
Under such conditions, ozone is a selective oxidant and reacts with multiple bonds (C=C, C=N, N=N, etc.), but only at low rates with single bonds (C-C, C-O, O-H). At high pH, the indirect reaction occurs, i.e., organics are degraded by secondary oxidants in chain reactions involving powerful radicals, including OH, which are produced by ozone decomposition. These radicals are very strong and nonselective oxidants. Hydroxyl radicals can be formed by increasing the pH or by decomposition of O 3 with homogeneous and heterogeneous catalysts.
The main goal of our research was to study the feasibility of ozone and combination of O 3 /UV processes for removal of selected benzene, toluene, and xylenes (BTX) from water/ wastewater. Investigation of process kinetics and stripping of volatile substances were also performed.
Experimental equipment
Ozonation trials were performed in a laboratory ozonation reactor. A schematic of the ozonation apparatus is illustrated in Figure 1 [13]. Ozonation jet-loop reactor was operated in batch mode with regard to wastewater and in continuous mode with regard to gas. Active volume of the reactor was 3.5 L. The treated wastewater was transferred into ozonation reactor before starting operation of the reactor. A membrane pump was used to maintain external circulation of liquid reaction mixture. Pulsation of recirculated external flow was minimized with diaphragm pulsation damper (SERA 721.1, Seybert & Rahier, Immenhausen, Germany). Ozone was generated using a Lifetech generator with maximum production of 5 g h -1 . Ozone was generated at 50% of the maximum ozone generator's power and maintaining continuous oxygen flow of 60 L h -1 . A mixture of O 3 and O 2 was injected into a wastewater sample through a Venturi ejector. At the same time, the ejector sucked the mixture of O 3 and O 2 from the reactor headspace.
This, together with the external circulation, should improve the efficiency of ozone utilization in the ozonation reactor. A Pen-Ray UV lamp with a wavelength of 254 nm was used to generate hydroxyl radicals in the reactor. The off-gas from the reaction mixture was transported into a glass destruction column through a fine-bubble porous distribution device. The destruction column contained a potassium iodide solution; its active volume was 1.0 dm 3 . Excess O 3 was destroyed in this device [13].
Analytical methods
For the determination of BTEX compounds in model wastewater, gas chromatography with an MS detector coupled to a headspace autosampler was used.
Gas chromatographic method after static headspace extraction was used for quantification of the BTX in water. Headspace part was analyzed by gas chromatography with MS detector (Agilent Technologies 7890A GC Systems). All substances used for preparation of model wastewater and standard stock solutions were purchased from Dr. Ehrenstorfer.
Processing of experimental data
Desorption of a volatile pollutant from the water solution is proportional to its concentration in the solution, -dS/dt = k · S, where k (h -1 ) is the desorption rate constant.
Experimental data of BTX degradation were fitted by the zero- (Eq. (2)), first- (Eq. (3)), and second- (Eq. (4)) order reaction kinetic models. For a batch reaction system, under the assumption of a constant reaction volume, the following relationships were obtained: S t = S 0 - k 0 · t (2), S t = S 0 · exp(-k 1 · t) (3), and 1/S t = 1/S 0 + k 2 · t (4), where S t (g m -3 ) stands for the content of BTX substances in model wastewater at time t, S 0 (g m -3 ) is the initial content of BTX substances in model wastewater, and k 0 (g m -3 h -1 ), k 1 (h -1 ), and k 2 (g -1 m 3 h -1 ) are the rate constants for the kinetics of the zero, first, and second reaction order, respectively [13].
The grid search optimization method was applied to calculate the values of the parameters of the mathematical models used. The objective function was defined as the sum of squares of differences between the measured and calculated values of the BTX components, divided by the number of measurements reduced by the number of estimated parameters [13,14].
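The sketch below illustrates this grid-search fitting for the first-order model S_t = S_0·exp(−k1·t). The concentration data are made-up numbers, not the measured BTX profiles, and the objective follows the description above: the residual sum of squares divided by (number of measurements − number of estimated parameters), here with a single fitted parameter k1.

```python
import math

def first_order(s0, k1, t):
    """First-order decay model S_t = S_0 * exp(-k1 * t)."""
    return s0 * math.exp(-k1 * t)

def objective(k1, times, measured, s0):
    """Sum of squared residuals divided by (n measurements - n parameters)."""
    resid = sum((m - first_order(s0, k1, t)) ** 2 for t, m in zip(times, measured))
    return resid / (len(measured) - 1)

# Illustrative data: time in minutes, concentration in micrograms per liter
times = [0, 5, 10, 20, 40]
measured = [800, 480, 330, 180, 80]

# Simple grid search over candidate values of the rate constant k1
candidates = [i * 0.001 for i in range(1, 200)]
best_k1 = min(candidates, key=lambda k: objective(k, times, measured, s0=800))
print("best k1 [1/min]:", best_k1)
```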
Results of the work
The removals of studied compounds with the ozonation time are presented in Figure 2. The initial concentrations of benzene, toluene, o-xylene, and p-xylene were 800, 1600, 800, and 600 µg l -1 , respectively. From Figure 2, it is obvious that for all studied pollutants measured, the highest removal rates were observed within the first 5 min of the process. The highest affinity of ozone was measured toward p-xylene (59.6% removal efficiency). On the other hand, the lowest treatment efficiency was measured for benzene (20%) within the same ozonation time. The treatment efficiencies of BTX components increased with the increase of ozonation time [15]. The highest treatment efficiency was observed for p-xylene (81.3% during the first 20 min of ozonation). The second highest treatment efficiency (72.2%) was measured for o-xylene.
Final removal efficiencies of BTX constituents were observed in the range from 86.4 to 90%. The values of removal efficiency during O 3 process are summarized in Table 1.
The best fit of experimental degradation data of all studied pollutants was obtained by the first-order kinetic model.
The removals of studied BTX by O 3 /UV treatment time are presented in Figure 3. It is obvious that the removal rates of BTX by O 3 /UV process are higher in comparison with the removal rates observed with ozone alone. This is confirmed also by the values of removal efficiency given in Table 2.
Comparisons of p-xylene and o-xylene removals using O 3 only for oxidation and O 3 /UV treatment of model wastewater are presented in Figures 4 and 5, respectively. Slightly higher removal rates of these two pollutants were measured when treated with ozone alone. However, insignificant differences in the removal rates and treatment efficiencies follow from the treatment of the investigated BTX compounds using O 3 and O 3 /UV. The data presented above represent overall removal of pollutants during O 3 , or O 3 /UV treatment, i.e., an effect of stripping of pollutants content is also included in the data.
The effect of gas stripping of the investigated BTX compounds under the conditions of the ozonation and O 3 /UV trials was also studied. The volatility of substances depends on the size of the molecules as well as on the vapor pressure [16]. With increasing molecular weight, the solubility of a substance in water decreases. An important factor influencing solubility in water is the hydrophobicity of the substance; solubility in water decreases with increasing hydrophobicity [17]. Information on hydrophobicity is given by the value of the octanol-water partitioning coefficient. Evaporation of substances correlates with the vapor pressure [17] and is strongly influenced by the temperature and pressure of the system [18].
The volatility of substances can be quantified by the value of the Henry's law constant. With increasing Henry's constant, the solubility of a substance in water decreases. The values of the basic physical-chemical properties of the BTX components are given in Table 3 [19].
Comparison of o-xylene, p-xylene, benzene, and toluene concentration profiles measured during the stripping only and ozonation treatment of model water are presented in Figures 6-9, respectively. As it was already mentioned, 10 min of ozonation corresponds to input of 45 mg O 3 per liter of active volume of the jet-loop ozonation reactor.
It is obvious from the presented results that stripping can significantly contribute to the removal of the investigated compounds during the ozonation and O 3 /UV treatments. The highest contribution of stripping to the overall removal of a component during ozonation was observed for benzene. This observation correlates very well with the physical properties of the different components ( Table 3). Similar results were obtained for toluene (36.8 μg L -1 min -1 ) in comparison to benzene (32.0 µg L -1 min -1 ) [20]. (Table footnotes: a EQS of total xylenes of 10 μg L -1 ; b Regulation of the Slovak Government [21].) Concentration profiles of the studied BTX substances during ozonation, i.e., after excluding removal of the compounds by the stripping process during ozonation of the model wastewater, are presented in Figure 10. In other words, the values plotted in Figure 10 were obtained by subtraction of the concentrations of individual components removed by stripping with the oxygen flow from the total concentrations obtained during ozonation of the model wastewater. Ozonation and stripping experimental trials were performed at the same operational conditions, except for the presence of ozone in the system for the stripping tests. The removal efficiencies for BTX components due to ozone oxidation of the model wastewater sample are given in Table 4.
Experimental data were processed using kinetic models to evaluate the order of the reaction.
The calculated concentration profiles obtained by the kinetic models corresponded to the best fit of the experimental data ( Table 4) for ozone oxidation, i.e., after excluding the contribution of stripping to the overall BTX concentrations during ozonation. The highest removal rates were observed during the first 5 minutes of ozonation for all investigated pollutants. However, there is approximately a 10% difference between the overall removal efficiency ( Table 1) and that caused by the ozonation reaction only ( Table 4).
The best fit of the experimental degradation data of benzene and toluene was obtained by the first-order kinetic model. On the other hand, the second-order kinetic model was more appropriate for describing the degradation of the xylenes. The rate constant values and the values of the correlation coefficient r XY corresponding to the kinetic data treated after subtracting the volatilized portions due to oxygen stripping are summarized for both the O 3 and O 3 /UV treatments in Table 6. Similar concentration profiles for the studied BTX compounds during the O 3 /UV treatment of the model wastewater are presented in Figure 11. The values presented in Figure 11 were also obtained by subtraction of the concentrations of individual components removed by gas stripping.
The calculated concentration profiles were also obtained by kinetic models corresponding to the best fit of experimental values ( Table 6) for O 3 /UV treatment, i.e., after excluding contributions of stripping of individual compounds. The removal efficiencies values are given in Table 5.
The removal efficiencies are very close to those given in Table 2. In the case of o-xylene and p-xylene, the best fit was obtained by the second-order kinetic model ( Table 6). The first-order kinetic models for benzene and toluene may indicate a significant influence of gas stripping on the total removal of these compounds from solution, and the process is probably determined by physical phenomena rather than chemical ones. Table 6. Kinetic parameters and statistical characteristics-stripping excluded. A lower ozone concentration in water, or higher ozone consumption in the system, is obvious for the O 3 /UV treatment. The lower experimental ozone concentration for the O 3 /UV reaction system can be explained by ozone decomposition and hydroxyl radical formation. However, only an insignificant increase of removal rates was observed as a result of the radical reaction mechanism.
On the other hand, the values of kinetic constant for all compounds ( Table 6) are slightly higher when ozone alone was applied in comparison to O 3 /UV treatment. Thus, the higher removal rates for ozone treatment of BTX were observed in comparison to the O 3 /UV treatment process.
Conclusion
The effects of ozone and O 3 /UV treatments on BTX components were investigated in this study. An investigation of the stripping of volatile substances was also performed.
The highest removal rates for all investigated BTX components were observed during the first 5 min of processing for both ozonation and O 3 /UV treatment processes.
Ozone showed the highest affinity to p-xylene. The lowest removal efficiency was measured for benzene. Treatment efficiencies above 90% were observed for all investigated pollutants after 40 min of ozonation. Longer ozonation time resulted in very low enhancements of removal efficiencies of both ozonation and O 3 /UV treatment processes.
Application of the O 3 /UV treatment had no significant effect in comparison with ozonation only, particularly for benzene. In the case of o-xylene and p-xylene, removal efficiencies over 90% were observed after 20 min of the process. Forty minutes of the process were needed for more than 90% removal efficiency of ethylbenzene.
Due to high volatility of BTX components their removal from liquid phase can be significantly influenced by stripping. According to physical characteristics, the highest stripping can be expected for benzene and toluene [15].
From the processing of the experimental data by simple kinetic models [13,14], the removal of o-xylene and p-xylene was best described by the second-order kinetic model. On the other hand, the best fit of the experimental data for benzene and toluene was obtained using the first-order kinetic model.
From the result of the study one can conclude that ozonation is a prospective process and a promising procedure for the removal of BTX components from aquatic environment. However, further research should be performed to enhance process efficiency and to study the impact of reaction intermediates and products on aquatic ecosystem. | 4,015.2 | 2017-05-03T00:00:00.000 | [
"Chemistry",
"Environmental Science"
] |
ETALON IMAGES: UNDERSTANDING CONVOLUTIONAL NEURAL NETWORKS
In this paper we propose a new technique called etalons, which allows us to interpret the way a convolutional network makes its predictions. This mechanism is very similar to voting among different experts. Thereby a CNN can be interpreted as a collection of experts, but it acts not like a sum or product of them, but rather represents a complicated hierarchy. We implement an algorithm for etalon acquisition based on well-known properties of affine maps. We show that the neural net has two high-level mechanisms of voting: first, based on attention to input image regions, specific to the current input, and second, based on ignoring specific input regions. We also make an assumption that there is a connection between the complexity of the underlying data manifold and the number of etalon images and their quality.
INTRODUCTION
Over the last few years, the computer vision community has introduced a tremendous variety of neural network architectures (Redmon et al., 2016), (Girshick, 2015), (He et al., 2016). There are different ideas behind these nets, explained by intuition and good guesses rather than strict theory, but the core blocks of all of them are the same: they all utilize convolution and pooling layers. The first CNNs, such as AlexNet and VGG, using only simple convolution and pooling blocks without any additional connections (which exist in ResNet and DenseNet), have not shown as good quality as the modern CNN architectures. But they have demonstrated the ability to fit data and to outperform previous state-of-the-art algorithms. Given this, it is rather obvious that a good starting point for deep neural network understanding is the first, simple CNNs.
To analyze CNN behavior, we select the MNIST challenge and one of the common nets with good results on it (LeNet (LeCun et al., 1988), but we use a slight modification of it, replacing sigmoid activation units with ReLU). This net has simple consecutive convolution and pooling layers with ReLU activation functions. It also has 10 output neurons for the final predictions (each neuron predicts the class-specific probability for the input digit). The schematic figure of the network is presented in Fig. 1.
To explain the notion of an etalon, it is useful to consider the CNN from a functional point of view. According to this interpretation, a CNN represents a parametric family of functions which depends on the net architecture and the neurons' activation functions. During training, the network parameters are tuned, and as a result we get one instance from this family. For our particular network with ReLU activations, if we do not take into account the last SoftMax layer, the whole network represents a piecewise linear function in the high-dimensional input space. Each input image lies in some flat region where the network behaves like an affine transformation. This means that there is a small neighborhood around any input image where the CNN behaves like an affine transformation. Its size and topology depend on the complexity and form of the data manifold. As is known, a CNN gives an effective implicit representation of this manifold, and etalons give a way to look at it.
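The local-affine behavior described here can be checked numerically: within the small neighborhood where no ReLU changes sign, the network output is exactly affine in the input. The sketch below uses a tiny random fully connected ReLU network (a stand-in, not LeNet) purely to illustrate the property; it assumes the perturbation is small enough that the active ReLU set does not change.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def net(x):
    """Tiny two-layer ReLU network (stand-in for a CNN without SoftMax)."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.normal(size=4)
eps = 1e-4 * rng.normal(size=4)          # assumed small enough that no ReLU flips sign

mask = (W1 @ x + b1 > 0).astype(float)   # active ReLU units at x
A_x = W2 @ (mask[:, None] * W1)          # local affine matrix of the flat region
print(np.allclose(net(x + eps) - net(x), A_x @ eps))
```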
RELATED WORKS
Nowadays, deep learning methods, especially those based on deep neural networks, play an important role in the processing of visual data. At the heart of these models lies a hypothesis that deep models can be exponentially more efficient at representing some functions than their shallow counterparts (Bengio, 2009).
There is no strong theoretical justification, but there are a lot of practical experiments and promising results. It is based on the assumption that higher layers in a deep model can use features constructed by the previous layers in order to build more complex functions, rather than learning low-level features again. For example, in CNNs for image classification (object detection and other vision tasks), the first layer can learn Gabor filters that are capable of detecting edges of different orientations. These edges are then put together at the second layer to form part-of-object shapes. At higher layers, these part-of-object shapes are combined further to obtain detectors for more complex part-of-object shapes or objects. Such behavior is empirically illustrated, for instance, in (Zeiler and Fergus, 2013), (Lee et al., 2009). On the other hand, a shallow model has to construct detectors of target objects based only on the detectors learnt by the first layer.
There are also some theoretical justifications which prove the possibility for deep neural nets to represent functions with an exponential number of regions. One example of such work is (Montufar et al., 2014). In this work the authors investigate deep feed-forward neural nets with piecewise linear activation units. The intermediary layers of these models are able to map several pieces of their inputs into the same output. The layer-wise composition of the functions computed in this way re-uses low-level computations exponentially often as the number of layers increases. As a result, deep networks are able to identify an exponential number of input neighborhoods by mapping them to a common output of some intermediary hidden layer. The computations of the activations of this intermediary layer are replicated many times, once in each of the identified neighborhoods. This allows the networks to compute very complex-looking functions even when they are defined with relatively few parameters. There are also some works dedicated to understanding how neural networks perceive images.
One well-known method, described in (Zeiler and Fergus, 2013), shows a way of getting regions in the input image which are responsible for the activation of one or another neuron. In our work we try to investigate the other side of the problem. We concentrate on the mechanism which the neural net obtains during training to perform its tasks (in this work we consider the classification task). We do not consider the problem of representation capability; however, we suppose that there is a connection between the complexity of the underlying data manifold and the number of etalon images and their quality. We guess that analyzing etalons can give some insight into the problem of under- and overfitting and into model comparison based on them. Our work is similar to (Zeiler and Fergus, 2013) in the sense of visualizing the regions which the neural network concentrates on.
However, etalon images are not regions of the input image. They are affine maps defined on the input space that represent the behavior of the network on a particular image.
Image etalon definition
As was mentioned above, we consider the CNN without the last SoftMax layer. The output of such a net is a 10-dimensional vector with unnormalized class scores (one for each digit class). Let X be any input example, a vector of size m (for our consideration we flatten its dimensions, but one can keep in mind that it is a grayscale picture with spatial sizes), and let F be the function represented by the CNN described earlier. According to the introduction, if we use ReLU as the activation function, for the activated neurons we can write the network transformation for the vector X as: F(X) = A_X * X + B_X (1), where A_X is the transformation matrix of the affine transformation specific to X, B_X is the bias of this affine transformation, and * denotes the matrix product. This transformation takes place between the input space of images (m-dimensional) and the output 10-dimensional space (class-specific unnormalized scores), so A_X is a 10 × m matrix. Next we emphasize a very simple fact which is important for further study: if we consider the whole net as an affine transformation, then we can treat any neuron as an affine map from the input space to the real line ℝ. We show this for the output neurons from the last layer, but, as we will see further, this is also valid for all other neurons in any layer.
Let us look at one neuron p from the 10 output neurons and denote its output as F^(p)(X). For this neuron we can do all the steps above and write its affine map: F^(p)(X) = 〈E^(p)_X, X〉 + b^(p)_X (2), where E^(p)_X is the transformation (etalon) vector specific to neuron p, b^(p)_X is the bias term specific to neuron p as in (1), and 〈⋅〉 denotes the scalar product.
For the output neurons there is an obvious relationship between the matrices and bias terms in equations (2) and (1). It is easy to see that E^(p)_X is just the p-th row of the matrix A_X, and b^(p)_X is the p-th component of the vector B_X. Then we can write: E^(p)_X = (A_X)_p and b^(p)_X = (B_X)_p. As was mentioned above, such an affine function can be defined for any neuron in the CNN.
We call E^(p)_X an etalon image for the input image X and the neuron p, because it has the same dimensions as the input vector X. The work of the whole CNN can thus be presented as the scalar product between the image and its etalon. Despite the fact that each neuron has its own etalon for every image, the effective number of etalons depends on the data manifold complexity and also on the training algorithm and network architecture. But if we suppose that the network architecture is not too overabundant and the training procedure is effective, then the first factor is dominant. Consequently, for a more complicated dataset we shall have more etalons, and CNNs should have the capacity to keep all of them; that is exactly what they are good at, according to many practical and some theoretical results (Montufar, 2014). In the next section we will give a more constructive view on etalons, and it will be clear that a CNN can keep a large number of them. That is why etalons are another argument for why CNNs outperform previous methods.
Etalon subgraph
In this paper we develop a method for obtaining the etalon from an input image. This method is based on another interpretation of an etalon as a subgraph in the CNN. We can consider a neural net as a graph, where neurons are vertices and interlayer connections are edges. When a new image is passed to the network input, it goes successively through all layers. One can interpret this as some kind of flow spreading through the network graph. But in contrast to a classical flow, where there are sources and sinks, we do not have sinks here; however, this interpretation is helpful. There is also one important point connected with ReLU activations: the flow spreads through only some subgraph with non-zero activations. When a ReLU gives a zero activation, it does not affect any further calculations, so we can freeze the edges between this neuron and the successive layer. Summarizing all the above, each etalon can be associated with its own subgraph.
For a better understanding of how it works, let us consider the following example. The net structure is shown in the picture below (Fig. 2). For simplicity we consider an example with a two-dimensional input and two fully connected layers (the activation functions are ReLU).
Figure 2. Vector X = (X 1 , X 2 ) is passed as an input to the CNN. Connections that are active for the current input are marked in red. One can see that the connections from Y 2 are inactive, because Y 2 = 0 (ReLU(⋅) = 0) and it does not affect the final outcome.
Let us take the vector X = (X 1 , X 2 ) as an input and calculate the network output on it. Suppose that the argument of the activation Y 2 is less than 0, so Y 2 = ReLU(⋅) = 0 and does not affect the result. We also assume that the argument of the second neuron in the next layer is less than 0. Then we can write out the network output explicitly. As a result, we get the explicit view of the affine map of neuron 1 for the given input, according to equation (2). One can see the relation between the affine map and the subgraph obtained as a result of the input vector passing through the network. This graph only includes those neurons that have non-zero activations, and it is highlighted with red connections in Fig. 2. It is rather obvious that all the results are valid for CNNs, because they are just a specific type of fully connected network with most of the connections equal to zero. We can conclude that one possible way of getting neuron etalons is through reconstructing the graph of its affine map. In the next section we describe how to do this.
Etalon acquisition
The above results prove that if the subgraph and the input vector are known, then it is possible to reconstruct the etalon image. However, in an arbitrary CNN it is rather difficult to get all components of an etalon image simultaneously. But it is possible to get these components separately. We know that the network processes any image as the scalar product between its etalon and the image itself, plus a bias term. Using the linearity of the scalar product, we can write down the following statement for the input image X and any neuron p: 〈E^(p)_X, e_i〉 = E_i (6), where e_i is the basis coordinate vector (coordinate image) with component i equal to 1 and all other components equal to 0, and E^(p)_X = (E_1, E_2, …, E_m) is the etalon image for the neuron p and the input image X (we flatten the etalon like the input image). From (6) we can conclude that if we feed the coordinate vector e_i as input, then we get the i-th component of the etalon vector E^(p)_X. However, we note that simply feeding this vector does not make sense, because there is no guarantee that the etalon subgraphs for the image X and the coordinate image coincide. But if we freeze all connections in the network except those which belong to the subgraph of the E^(p)_X etalon and pass e_i through the network, we get a reliable result.
It is obvious that for freezing we need to know the etalon subgraph for the image X. In principle this is not a problem, because we can track active connections in the layers to form our graph structure; knowing the graph, we can then perform calculations for each coordinate image using a simple traversal of the graph. However, we decided to go the other way and freeze the unused connections (those where the ReLU equals zero) on the fly. It is very easy: we can set the activation of one particular neuron to zero to freeze its connections. Then they will not affect further computations, and it is equivalent to removing these connections.
Etalon reconstruction algorithm
In this section we propose an algorithm for etalon reconstruction, which outputs the etalon image E^(p)_X for the input image X and the neuron p. The steps of Algorithm 1 are shown in Fig. 3.
In step 2 we nullify all biases in the network, because they influence the neuron activations and, as a result, we would get incorrect values for the components of E^(p)_X (each would be equal to the sum of the i-th component and a bias term). However, from equations (2) and (5) we see that an affine transformation has two independent parts: one for the linear transformation and the other for the translation (bias term). Consequently, we can zero the biases when reconstructing the etalon image. In step 3.1 we pass each coordinate image, and according to our implementation of the ReLU and Pooling layers, it flows through the required subgraph. In step 3.2 we take the activation of the given neuron, which equals the i-th component of the etalon image. For better insight, we visualize the work of Algorithm 1 on a simple example of etalon reconstruction for neuron 1. For simplicity, the net from Fig. 2 is taken again, and the same assumptions are made: it is a simple fully connected net with two layers and a two-dimensional input X = (X 1 , X 2 ). Suppose that the argument of the activation Y 2 is less than 0, so Y 2 = ReLU(⋅) = 0 and does not affect the result; we also assume that the argument of the second neuron in the next layer is less than 0. According to step 1, the input image X = (X 1 , X 2 ) is passed to the net, and we remember the non-zero activations (see Fig. 4). Figure 4. Vector X = (X 1 , X 2 ) is passed as an input to the CNN. Connections that are active for the current input are marked in red. Since the activations of the second neurons in both layers are zero, their connections are frozen.
Summarizing equations (9) and (10): in this algorithm we do not reconstruct the bias term in (2); however, it is straightforward to do so. Between steps 1 and 2 of Algorithm 1 we can feed the zero vector to the net, which gives us the bias as the activation value of the neuron p. Indeed, according to system (4), if the vector (0,0) is passed to the net, we obtain equation (11). Comparing (5) and (11), we see that the activation of neuron 1 contains the bias term of the affine map of its etalon image. But for etalon images this does not make sense, so we ignore this step. Figure: Feeding the second coordinate vector (0,1) to get the second component of the etalon. Components that were frozen during step 1 of Algorithm 1 are not shown.
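A sketch of Algorithm 1 for a small fully connected ReLU network, assuming PyTorch: the first pass records which ReLU units are active for the input image, then the biases are switched off and each coordinate image is passed through the frozen subgraph (by re-applying the recorded masks) to read off one etalon component per pass. This is an illustrative reimplementation on a toy two-layer net, not the authors' code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = [nn.Linear(4, 8), nn.Linear(8, 3)]   # toy stand-in for LeNet (flattened input)

@torch.no_grad()
def forward(x, masks=None, record=None, use_bias=True):
    """Forward pass; either record the ReLU masks or reuse frozen ones."""
    for i, layer in enumerate(layers):
        b = layer.bias if use_bias else torch.zeros_like(layer.bias)
        x = x @ layer.weight.t() + b
        if i < len(layers) - 1:                       # hidden layers use ReLU
            mask = (x > 0).float() if masks is None else masks[i]
            if record is not None and masks is None:
                record.append(mask)
            x = x * mask                              # freezes inactive connections
    return x

x = torch.randn(4)
masks = []
forward(x, record=masks)                              # step 1: remember active neurons

neuron_p, dim = 1, 4
etalon = torch.zeros(dim)
for i in range(dim):                                  # steps 2-3: biases off, coordinate images in
    e_i = torch.zeros(dim)
    e_i[i] = 1.0
    etalon[i] = forward(e_i, masks=masks, use_bias=False)[neuron_p]

# Sanity check: <etalon, x> + bias reproduces the neuron's (pre-SoftMax) activation
bias_term = forward(torch.zeros(dim), masks=masks)[neuron_p]
print(torch.allclose(etalon @ x + bias_term, forward(x, masks=masks)[neuron_p]))
```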
Etalon reconstruction algorithm limitations
This algorithm is time consuming: for the LeNet network with an input image resolution of n × n, we need to make n 2 forward passes.
Cutting the subgraph from the network and performing calculations with it could reduce the computational cost by reducing the number of active connections, but neural nets use an effective implementation of their operations in CUDA. In addition, we would still need to perform n 2 forward passes. One possible solution is to try to reconstruct the etalon with one forward pass. It could be done by noting that the convolution operation can be represented as a linear mapping. The ReLU activations after each mapping just nullify some rows of the transformation matrix of that mapping (the concrete rows depend on the input image). As a result, we can consider the network as a product of affine mappings (different for each image). Multiplying all the affine transformations, we get another one whose rows represent the etalon images. However, representing the convolution operation as a matrix product requires a lot of memory. We did not investigate this way in practice.
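A related one-pass shortcut, not used by the authors: for a piecewise-linear (ReLU) network, the row of the local affine map for a given neuron equals the gradient of that neuron's pre-SoftMax output with respect to the input, so a single backward pass recovers the etalon. The sketch below, assuming PyTorch and a toy ReLU network, is an alternative reconstruction offered only for comparison, not the authors' algorithm.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))  # toy ReLU network

x = torch.randn(4, requires_grad=True)
neuron_p = 1

score = net(x)[neuron_p]        # unnormalized class score (SoftMax excluded)
score.backward()                # one backward pass

# For a piecewise-linear net the input gradient equals the local affine map's row,
# i.e. the etalon image (valid away from boundaries between linear regions).
etalon = x.grad.detach()
bias_term = score.item() - float(etalon @ x.detach())   # recover the affine bias
print(etalon, bias_term)
```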
LeNet etalon reconstruction
As was mentioned above, we choose LeNet for our experiments.
We experimented with different images and show our results in Fig. 11. For the experiments we concentrate on the penultimate layer and visualize the etalons of each neuron from this layer for different input images. As a result, for each image there are ten etalon images (one per neuron). For better interpretation we consider several examples.
From example 1 we can see that the neural net tries to give attention to definite regions of the input image and does so depending on the input. In the next example we show another mechanism, which is the reverse of the one mentioned above.
In example 2 we demonstrate another mechanism, different from example 1. Here, our net is strongly sure that a particular image does not belong to a given class, so it simply votes against it, and again we see that it concentrates its attention on concrete regions.
Example 1: Input image of class 0 (Fig. 7). For this image we visualize the etalon images from several neurons (Fig. 8). Figure 8. Etalon images for neurons 0, 2, 5, 6. The numbers in the captions are the neurons responsible for the corresponding image class. White spots in etalon images can be interpreted as the most important regions of the input image on which the neural net concentrates its attention. And, vice versa, black spots are regions which the net prefers to ignore. We highlighted in red those regions that the neural net pays attention to. From the etalon image of neuron 0 one can conclude that the net tries to concentrate its attention on places which are responsible for the digit zero. It should also be emphasized that this is not a simple tracking of the most probable regions; it is specific to the concrete input picture. There are many possible zeros in the train set which are located near the boundaries, but the net puts some weights on positions that are specific to the concrete image. On the other hand, if we look at the etalon image of neuron 5, one can conclude that the net has some imagination. From this picture we can see how our net tries to propose a possible look of the digit 5 in this image: it tries to use the bottom part of the zero in the picture to draw the digit five. For neurons 2 and 6 we see similar behavior. However, when these etalon images are multiplied with the input image, it is easy to see that the biggest response is for the zero etalon image.
Example 2: here we highlighted in red the regions the neural net tries to ignore. From the etalon image of neuron 0 one can see that the net votes against this class: it puts very low weights on the places where the digit four is located and, as a result, this class receives a very small score. The same situation can be seen in the etalon image of neuron 1 when a six is processed: the neural net is completely sure that this picture cannot belong to class 1, so it votes against it.
CONCLUSIONS
In this work we present a new notion, etalon images, which are defined as affine maps for any neuron and a given input image.
From the construction point of view, etalons can be considered as subgraphs of the whole neural network graph. We implement an algorithm for their acquisition based on well-known properties of affine maps. Their analysis has shown that the neural net has two high-level voting mechanisms: the first based on attention to input image regions, specific to the current input, and the second based on ignoring specific input regions. We also suppose that there is a connection between the complexity of the underlying data manifold and the number and quality of the etalon images. We expect that analyzing etalons can give some insight into the problems of under- and overfitting and model comparison. We will concentrate on these problems in future work.
Figure 1 .
Figure 1. Schematic view of LeNet. We exclude the SoftMax layer from further consideration.
Figure 4 .
Figure 4. The vector X = (X_1, X_2) is passed as an input to the CNN. Connections that are active for the current input are marked in red. Since some neuron activations are zero for this input, their connections are frozen.
Figure 5 .
Figure 5. Feeding the first coordinate vector (1,0) to get the first component of the etalon of neuron 1. Components that were frozen during step 1 of Algorithm 1 are not shown.
Figure 11 .
Figure 11. The first column contains ten input images of different classes from the MNIST dataset. Rows 1 to 10 contain their etalon images; each row contains the etalon image of the corresponding neuron (each neuron is responsible for its own class). For example, neuron 8 for the digit 8 has a very clear etalon: one can see the specific crossed shape of the digit 8.
"Computer Science"
] |
In-Memory Computing with Resistive Memory Circuits: Status and Outlook
Abstract: In-memory computing (IMC) refers to non-von Neumann architectures where data are processed in situ within the memory by taking advantage of physical laws. Among the memory devices that have been considered for IMC, the resistive switching memory (RRAM), also known as memristor, is one of the most promising technologies due to its relatively easy integration and scaling. RRAM devices have been explored for both memory and IMC applications, such as neural network accelerators and neuromorphic processors. This work presents the status and outlook on the RRAM for analog computing, where the precision of the encoded coefficients, such as the synaptic weights of a neural network, is one of the key requirements. We show the experimental study of the cycle-to-cycle variation of set and reset processes for HfO2-based RRAM, which indicates that gate-controlled pulses present the least variation in conductance. Assuming a constant variation of conductance σ_G, we then evaluate and compare various mapping schemes, including multilevel, binary, unary, redundant and slicing techniques. We present analytical formulas for the standard deviation of the conductance and the maximum number of bits that still satisfies a given maximum error. Finally, we discuss RRAM performance for various analog computing tasks compared to other computational memory devices. RRAM appears as one of the most promising devices in terms of scaling, accuracy and low-current operation.
Introduction
As Moore's law approaches its limits, novel computing concepts are being researched to mitigate the memory bottleneck typical of von Neumann architectures. Among these new concepts, in-memory computing (IMC) has attracted increasing interest due to the ability to execute computing tasks directly within the memory array [1][2][3]. Various IMC circuits have been proposed so far, including digital gates [4][5][6][7], physical unclonable functions (PUF) [8][9][10][11] and neuromorphic neurons and synapses [12][13][14][15][16]. In these circuits, the computational function stems from a physical property of the memory device and circuit, such as the set/reset dynamics of resistive switching memory (RRAM) for synaptic potentiation/depression and neuron integrate-and-fire behavior. As a result of the physics-based computation, most IMC circuits operate in the analog domain and on a continuous time scale. A typical example of an analog IMC primitive is the matrix vector multiplication (MVM), which can be executed in one step in a crosspoint memory array [17][18][19]. The parallel and analog IMC operation allows the MVM to be significantly sped up with respect to the conventional multiply-accumulate (MAC) algorithm of digital computers [20,21]. Crosspoint-based MVM has been adopted in a number of computing applications, ranging from deep neural networks (DNNs) [22][23][24][25] to linear algebra accelerators [26][27][28][29].
Figure 1 illustrates the hierarchical design approach for IMC accelerators. Computation is enabled at the device level by transport phenomena, e.g., Ohm's law enabling the multiplication of voltage and conductance, or threshold switching enabling the comparison between voltages [30]. Devices are connected within circuits, allowing the parallel flow of input and output signals, Kirchhoff's current summation and feedback connections, usually in the analog domain. Circuit primitives are organized within novel architectures to harness the full potential of the computing cores and accelerate data-intensive workloads such as neural networks. Based on such a hierarchical structure, it is clear that the proper optimization of analog accelerators requires a co-design approach from materials to applications, taking into account device characteristics, circuit/device non-idealities and possible architecture limitations. Algorithms such as the training of neural networks can also be used to optimize precision in view of device and circuit non-idealities. In this regard, a key point is the precision of the physical representation, or mapping, of the computational coefficients and of the overall computation, which might be affected by noise, instability and parasitic elements in the crosspoint array circuit [31]. This work presents an overview of analog IMC with RRAM devices. We first address the programming characteristics of typical HfO2 RRAM devices to highlight the intrinsic conductance variations for various programming algorithms. Then we focus on the various options for mapping computational parameters, such as the synaptic weights in a DNN, discussing the trade-off between precision and memory area. Finally, we extend our study to various memory parameters, including low-current operation, programming speed and cycling endurance, by discussing their importance for various computational applications and the memory devices that maximize these performance metrics.
RRAM Device Structure
Various nanoscale devices have been proposed as a new class of non-volatile memory, where information is stored as the physical configuration of an active material, resulting in different conductance values [1]. For example, information can be stored and retrieved via the device spin magnetization in a magnetic random access memory (MRAM) [32], as the phase structure of the material in phase change memories [33,34], or as the atomic configuration of conductive defects in resistive switching memories (RRAM) [35]. The latter has attracted particular interest thanks to its simple structure, compatibility with the CMOS process, fast operation and high density [36,37]. Figure 2a shows the typical structure and operation of a RRAM device, consisting of an insulating metal-oxide layer interposed between a metallic top electrode (TE) and a metallic bottom electrode (BE). The insulating layer results in a typically high resistance after fabrication. A forming process, consisting of the application of a relatively high positive voltage pulse on the TE, induces a local modification of the material composition with the growth of a metallic filament, resulting in the low resistance state (LRS). A high resistance state (HRS) can be recovered by means of a negative voltage applied to the TE, which creates a depletion region across the conductive filament and thus decreases the conductance. In addition to the LRS and the HRS, intermediate states can be programmed. This is possible, e.g., by controlling the filament size via the maximum current flowing across the device during the set operation, i.e., the compliance current I_C, which is relatively straightforward in the 1-transistor-1-resistor (1T1R) structure shown in the inset of Figure 2b, or by controlling the maximum voltage applied during the reset operation, which results in a different thickness of the depletion gap. For instance, Figure 2b shows typical current-voltage (I-V) curves of a RRAM device in 1T1R configuration for increasing I_C, controlled with the transistor gate voltage V_G [38], demonstrating the possibility of analog programming. The multilevel cell (MLC) capability of Figure 2b is interesting not only for increasing the bit density of the memory, but also for enabling computing applications [39]. In fact, by arranging RRAM devices in the crosspoint configuration shown in Figure 3, it is possible to directly encode an arbitrary matrix entry A_ij into the conductance value G_ij [1,31]. Then, by applying a voltage vector V as input on the columns and collecting the current flowing in each row, I_j, it is possible to compute the matrix vector multiplication (MVM) according to I_j = ∑_{i=1}^{N} G_ij V_i (1), where N is the dimension of the input vector and G is a N × N matrix. MVM is at the backbone of a variety of data-intensive applications that can be accelerated by IMC, such as neural network training and inference [40][41][42][43], neuromorphic computing [15,16,44,45], image processing [18,46], optimization problems [47][48][49][50] and the iterative solution of linear equations [26,27]. Circuits capable of solving matrix equations without iteration, thanks to analog feedback, have also been demonstrated [28,29,38]. All these applications have in common the need for a reliable analog memory, which still appears to be a challenge due to the inherent stochasticity of the switching behavior in RRAM devices.
Figure 3 (caption excerpt): by programming the RRAM conductance to the matrix entries of A and applying a voltage vector V on the columns, the resulting current flowing in each row j tied to ground, according to Kirchhoff's law, is I_j = ∑_{i=1}^{N} G_ij V_i. Adapted from [29].
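As a minimal numerical illustration of Equation (1), the snippet below maps a small matrix onto conductances and performs the row-wise current summation in one step; the conductance range and the added programming noise are assumptions used only to mimic the analog nature of the array and are not taken from a specific measurement.

```python
import numpy as np

rng = np.random.default_rng(1)
N, G_max, sigma_G = 4, 225e-6, 3.8e-6          # conductance scale and error, in siemens
A = rng.random((N, N))                         # matrix to encode, entries in [0, 1]
G = A * G_max                                  # target conductances G_ij
G_prog = G + rng.normal(0, sigma_G, G.shape)   # programmed array with cycle-to-cycle error

V = rng.random(N) * 0.1                        # input voltage vector (read voltages ~0.1 V)
I = G_prog.T @ V                               # I_j = sum_i G_ij * V_i, one step per MVM
I_ideal = G.T @ V
print(np.abs(I - I_ideal) / np.abs(I_ideal))   # relative error caused by sigma_G
```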
Analog Memory Programming Techniques and Variations
Programming the same device several times, starting from the same initial state and using the same programming pulse-width and amplitude, results in different conductance values due to the stochastic process of ionic migration during the set/reset operation; this is usually referred to as cycle-to-cycle variability [35]. Figure 4a shows an example of a programming algorithm during the set operation, namely incremental gate pulse programming (IGPP), where the TE voltage V_TE is kept constant while the gate voltage V_G is increased at each time step. This process was repeated 1200 times and the conductance was read with a relatively low voltage after each cycle [51]. Figure 4b shows that the single traces (grey) suffer from relatively large variations, while the median value (blue) grows approximately linearly with the pulse number (i.e., with V_G). The cumulative distribution of the conductance for each cycle is reported in Figure 4c for increasing V_G. The standard deviation of the conductance σ_G as a function of the median conductance is reported in Figure 4d, indicating a fairly constant value σ_G = 3.8 µS. This implies a linear increase of the relative resistance variation σ_R/R with resistance R, since R = 1/G and, to a first approximation, variations obey the same relationship between differential quantities, hence σ_R/R = σ_G/G [35]. This linear increase of σ_R/R with R is generally observed in variability measurements of RRAM [52,53] and has been attributed to variation in the shape of the conductive filament. Other variability data have been reported indicating an increase of σ_R/R as R^0.5, which can be interpreted in terms of a Poissonian variation of the number of defects in the conductive filament [54][55][56]. Analog programming of RRAM is also possible in the opposite polarity, namely by increasing the negative reset voltage V_TE applied on the TE for a fixed V_G, an algorithm referred to as incremental reset pulse programming (IRPP), as shown in Figure 5a. In this case, by first initializing the conductance into the LRS, it is possible to characterize the variability of analog programming with reset voltage by applying IRPP several times (i.e., 1200 times as in Figure 5) [51]. Figure 5b again evidences that the single traces (grey), corresponding to the conductance read after each pulse of a single reset ramp, show high variability and fluctuation, while the median value (blue) shows a gradual decrease with the pulse number. The cumulative distributions of the conductance are reported in Figure 5c for increasing applied TE voltage |V_STOP|. The resulting standard deviation of conductance σ_G as a function of the median conductance G is shown in Figure 5d (red): from the comparison with the IGPP results (blue), it is possible to see that σ_G is generally larger in the reset process. We can conclude that gradual set programming is more convenient than gradual reset for accurate tuning of the resistance; however, accurate program/verify (PV) algorithms are needed to reduce the error to acceptable levels (i.e., to obtain σ_G < 1 µS).
Program-Verify Algorithms and Device-to-Device Variations
So far, only the cycle-to-cycle variation of a single device has been considered. To address device-to-device variation, the conductance is typically characterized on a relatively large array (e.g., >1 kB), which allows one to study the main variability features with a relatively simple circuit and a short experimental time. Figure 6a illustrates a conceptual schematic of a PV algorithm for 1T1R RRAM, namely the incremental-step program-verify algorithm (ISPVA) [43], applied to a 4 kB array. For a given V_G (or I_C), the TE voltage is incremented until the value of G read after programming reaches the target value G = L_i. Figure 6b shows conductance traces as a function of V_TE for increasing V_G, obtained with ISPVA for target levels L_2 to L_5. The experiment was repeated on all the devices in the array, and the resulting read-current probability distributions are shown in Figure 6c [43]. The final average standard deviation is σ_G = 7.5 µS, which is slightly larger than in the case of Figure 4, despite the PV algorithm adapting the number of pulses to reach a given conductance. The larger σ_G can be understood as the superposition of cycle-to-cycle and device-to-device variation, the latter giving the larger contribution to variability within the array.
Conductance Drift and Fluctuations
Unfortunately, once the device is programmed with a given precision, the conductance might still change due to time-dependent drift and fluctuations that affect the reliability of IMC. Figure 7a shows measured conductance traces programmed to 4 levels on a 1 Mb RRAM array as a function of time during annealing at 150 °C [58]. The change of conductance can be explained by thermally-activated atomic diffusion in the conductive filament. Note that, in addition to the conductance drift with time, random fluctuations around the median value are present, as shown in Figure 7a. This is better illustrated in Figure 7b [59], where the resistance of a RRAM device in the HRS is plotted as a function of time. The results show that the resistance can experience abrupt variations, called random walk, in addition to the typical random telegraph noise (RTN). As a consequence, the accuracy of a neural network can decrease with time due to the bias introduced in the conductance by the drift [43,58,60]. Figure 7c shows the cumulative distributions of the initial and drifted conductance of Figure 6c after annealing at T = 125 °C [43], which confirms the time-dependent drift of the weights of a two-layer neural network. The network schematic is represented in Figure 7d: it is composed of a 14 × 14 input layer (corresponding to a downsized version of the MNIST dataset), a hidden layer of 20 neurons and 10 output neurons corresponding to the 10 handwritten digits, resulting in a total of 14 × 14 × 20 + 20 × 10 weights, which can be mapped in a 4 kB array. Figure 7e shows the confusion map for testing on the MNIST dataset before annealing, showing a relatively good average accuracy of 83%. However, this accuracy drops to 72% after annealing, as shown in Figure 7f, demonstrating the need for stable states for reliable neural network inference.
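Purely as a qualitative illustration of the effects described above, the toy trace below superimposes a slow drift term, a two-level random telegraph signal and occasional random-walk jumps on a programmed conductance; every magnitude is invented for illustration and is not fitted to the measurements in Figure 7.

```python
import numpy as np

rng = np.random.default_rng(6)
steps, G0 = 2000, 50e-6                              # illustrative starting conductance (S)
drift = -np.cumsum(rng.exponential(2e-9, steps))     # slow, thermally-activated drift
toggles = rng.random(steps) < 0.01                   # rare switching events
rtn = 1e-6 * (np.cumsum(toggles) % 2)                # two-level random telegraph noise
walk = np.cumsum(rng.choice([0.0, 0.0, 0.0, -0.5e-6, 0.5e-6], steps))  # abrupt jumps
G_t = G0 + drift + rtn + walk                        # toy conductance trace over time
```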
RRAM Conductance Mapping Techniques
While Figures 2-5 focus on multilevel conductance mapping, aiming at an increase in the number of programmable levels and a reduction of the programming error, other techniques can be adopted to map a given computing coefficient, such as a synaptic weight, into one or multiple memory devices.Figure 8 summarizes the main programming methodologies for IMC [51], including multilevel (a) [43,61], binary (b) [62], unary (c) [63], multilevel with redundancy (d) [64] and slicing (e) [23].In the following, we compare the various techniques in terms of mapping accuracy, maximum number of bits and number of devices.The mapping accuracy is evaluated by the standard deviation of the error σ in programming a certain coefficient with a given number of bits N. The accuracy is evaluated by analytical formulas assuming a constant σ G in programming an individual memory device with a maximum conductance G max [51].
Multilevel
Analog memories can naturally map discretized levels, which is referred to as multilevel mapping. To store N bits, 2^N equally spaced conductance levels between 0 and G_max are needed. As a result, each level is separated from the adjacent ones by a conductance step ΔG = G_max/(2^N − 1). Given a standard deviation of the programming error σ_G, such as the value σ_G = 3.8 µS in Figure 4d, the resulting standard deviation σ of the error in programming N bits, normalized to one level spacing, is σ = σ_G/ΔG = σ_G (2^N − 1)/G_max (2). Equation (2) allows one to estimate the maximum number of bits N_max that can be mapped in a RRAM device with a given acceptable error σ << 1, which yields N_max = log2(1 + σ G_max/σ_G). For example, by considering σ_G = 2.2 µS, G_max = 225 µS [57] and targeting a maximum error of σ = 10%, the resulting maximum number of bits is N_max = 3.35, corresponding to 10 levels that can be written in the memory while satisfying the precision requirement.
Binary
Binary storage is the typical mapping of conventional digital memories, where a value x is converted into its binary representation with N bits written in N memory cells, each containing two states for 0 and 1. For instance, x = 14 can be written in binary representation as x_bin = 1110, with 4 RRAM cells programmed to [G_max, G_max, G_max, 0], respectively. A weighted summation of the conductance values is possible by multiplying the current flowing in each cell by the corresponding power of 2, namely 2^3, 2^2, 2^1 and 2^0, respectively, thus allowing one to reconstruct the correct number. To estimate σ, we consider the average imprecision divided by the least significant bit (LSB), namely σ = (σ_G/G_max) √(∑_{k=0}^{N−1} 2^{2k}) = (σ_G/G_max) √((4^N − 1)/3) (3), where the square-root term combines the independent variation of each memory cell multiplied by its weight. The maximum number of bits can thus be obtained as N_max = (1/2) log2(1 + 3 (σ G_max/σ_G)^2) (4). Assuming the same σ_G, G_max and σ as in the estimation for multilevel mapping, we obtain N_max = 4.15 with binary RRAM.
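A short sketch of the binary mapping and weighted current read-out described above; the conductance and noise values are the ones quoted in the text, while the helper names are illustrative.

```python
import numpy as np

G_max, sigma_G = 225e-6, 2.2e-6                      # values used in the text, in siemens
rng = np.random.default_rng(2)

def encode_binary(x, n_bits):
    """Map an integer onto n_bits cells programmed to 0 or G_max (MSB first)."""
    bits = [(x >> k) & 1 for k in range(n_bits - 1, -1, -1)]
    return np.array(bits) * G_max

def decode_binary(G_cells):
    """Weighted current summation: each cell is multiplied by its power-of-two weight."""
    weights = 2 ** np.arange(len(G_cells) - 1, -1, -1)
    return np.sum(weights * G_cells) / G_max

G_cells = encode_binary(14, 4)                       # 1110 -> [G_max, G_max, G_max, 0]
G_read = G_cells + rng.normal(0, sigma_G, 4)         # add programming variability
print(decode_binary(G_cells), decode_binary(G_read)) # exactly 14.0, and a value close to 14
```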
Unary
To increase the precision beyond binary mapping, unary (or thermometric) coding uses 2^N − 1 devices to represent the information, each one having equal weight, which requires no bit-specific gain in the current summation. In unary coding, the error combines the variations of the 2^N − 1 equally weighted cells and therefore grows only with the square root of their number (see Table 1), which leads to N_max = 7.7 with the same σ_G, G_max and σ used in the previous estimation. However, note that the higher precision is achieved at the cost of a larger number of RRAM devices, namely 2^{N_max} − 1 = 207 memory cells.
Multilevel with Redundancy
To reduce the impact of σ G on multilevel coding, M memory devices having the same nominal conductance can be programmed and operated in parallel.As a result, the error is reduced by a factor √ M thanks to the averaging among the M redundant cells (see Table 1).The maximum number of bits is equivalently enhanced.Assuming M = 4 and the same σ G , G max and σ used in the previous estimation, we obtain N max = 4.36 bits, i.e., one additional bit compared to the pure multilevel case with no redundancy.
Slicing
By encoding a given number in base l, with l = 2^N being the number of levels stored in a single memory element with multilevel mapping, and using M different memory elements to represent the data, it is possible to significantly increase the precision in a compact implementation. For example, x = 14 encoded in base l = 4 yields x_4 = 32, such that by using two memory elements with weights 4^1 and 4^0, the current summation yields x = 4^1 × 3 + 4^0 × 2. Slicing thus raises the number of addressable levels to l^M by using M cells. The error can be evaluated in the same way as in the binary scheme, by summing the weighted error contribution of each cell (see Table 1). Assuming M = 4 and the same values of σ_G, G_max and σ as before, we obtain N_max = 13.41 bits for slicing.
Simulation Results
Table 1 summarizes the formulas for calculating the error σ and the maximum number of bits N max for different programming techniques.To validate the analytical formulas, we performed Monte Carlo simulations of the various programming conditions and compared the results to the analytical calculations [51].
Figure 9 shows the results of the analytical formulas (top) compared with the results of MC simulations (bottom) for the standard deviation of the error σ as a function of the standard deviation of conductance σ G and the number of bits to encode N, for multilevel (a,f), binary (b,g), unary (c,h), multilevel with redundancy M = 4 (d,i) and slicing with M = 2 (e,j) [51].The analytical formulas and the MC simulations show a good agreement, thus confirming the correctness of the models in Table 1. Figure 10a shows the calculated σ as a function of σ G for a target number N = 7 of bits.The results indicate that slicing and unary programming have a significant advantage over the other techniques by approximately one order of magnitude.This result is confirmed in Figure 10b showing the maximum number of bits N max as a function of σ G for σ = 1%.However, unary and slicing techniques require several memory elements to encode the coefficients, which thus increase the energy consumption and chip area per bit.To address the precision/area trade-off, Figure 10c shows the bit density, evaluated as N max divided by the number of memory cells as a function of σ G .The results demonstrate the good trade-off for the case of the slicing technique.
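A minimal Monte Carlo check of the multilevel and binary error expressions discussed above can be written in a few lines; it assumes the same constant per-device error sigma_G and full-range conductance G_max as in the text and normalizes the error to one least significant bit.

```python
import numpy as np

rng = np.random.default_rng(3)
G_max, sigma_G, N, trials = 225e-6, 2.2e-6, 4, 100_000

# Multilevel: one device, 2^N levels, error normalized to the level spacing.
dG = G_max / (2**N - 1)
analytic_ml = sigma_G / dG
mc_ml = np.std(rng.normal(0, sigma_G, trials)) / dG

# Binary: N devices programmed to 0 or G_max, weighted by powers of two;
# the independent cell errors combine in quadrature with their weights.
weights = 2 ** np.arange(N)
analytic_bin = sigma_G * np.sqrt(np.sum(weights**2)) / G_max
mc_bin = np.std(rng.normal(0, sigma_G, (trials, N)) @ weights) / G_max

print(analytic_ml, mc_ml)      # the analytical and simulated values should agree closely
print(analytic_bin, mc_bin)
```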
Array-Level Reliability Issues
Device variations and errors are not the only origin of accuracy degradation in IMC circuits. In large memory arrays, the interconnect lines are generally affected by non-ideal behavior due to parasitic resistance and capacitance, which can deteriorate the analog signal integrity. In particular, the parasitic wire resistance introduces a significant error, due to the current-resistance (IR) drop on the array rows/columns, which results in a difference between the applied voltage and the voltage across each memory cell. Figure 11a shows a sketch of a 4 × 4 memory array highlighting the wire resistances, namely the input resistance R_in, the output resistance R_out and the row/column wire resistance r between adjacent memory cells [65]. Assuming, for instance, an inference operation with a typical read voltage V_read = 0.1 V and an average RRAM resistance R = 100 kΩ, corresponding to a current I = 1 µA for each cell, within a 100 × 100 crosspoint array with wire resistance r = 1 Ω, the overall IR drop along a line can be computed as rI + 2rI + 3rI + ... + NrI = rI N(N + 1)/2 ≈ rI N²/2 = 5 mV, corresponding to a 5% error in the summation current [66]. Note that this error does not include any contribution from variations in the RRAM conductance. To mitigate this effect, it is possible to increase the device resistance so as to decrease the current in the array wires. Unfortunately, large device resistances are usually more prone to variations, drift, fluctuations and noise [52,55]. Furthermore, a high cell resistance requires a longer readout time because of the CMOS noise in the sensing circuit, thus resulting in longer program/verify algorithms. Finally, a higher cell resistance increases the RC delay time for charging the bit line. IR drop can also be mitigated by changing the memory device topology, for example, by inserting a current-controlled synaptic element [67], such as a Flash memory, an ionic transistor or a FeFET, as shown in Figure 11b. A three-terminal memory transistor can be programmed to various saturation currents, each representing a synaptic weight. By encoding the input in the gate pulse-width and integrating the current flowing in each column, it is possible to perform a current-based computation where the resulting charge corresponds to an MVM of a current matrix times a pulse-width vector. Figure 11c shows a comparison of the impact of IR drop for ohmic and saturated characteristics, demonstrating a much smaller impact for the current-controlled devices [67]. At the algorithmic level, IR drop and other non-idealities can be taken into account during training/inference or during the programming of the computational memory devices. For instance, parasitic-aware program-and-verify algorithms have been presented to minimize the impact of IR drop [65,68]. By iteratively programming the target conductance matrix G (resulting in an actual matrix G'), evaluating the current i' = VG' and comparing it to the ideal current i = VG, a new target matrix can be computed and programmed. The algorithm is repeated until the error is reduced below a certain tolerance, e.g., ε = |i − i'| < 1%.
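The worst-case IR drop estimate above can be reproduced directly; the array size, wire resistance, read voltage and cell resistance are the example values from the text.

```python
N, r, V_read, R_cell = 100, 1.0, 0.1, 100e3    # array size, wire resistance (ohm),
I_cell = V_read / R_cell                        # read voltage (V) and cell resistance (ohm)

# Current accumulates along the line: the k-th wire segment carries k cell currents,
# so the worst-case drop is r*I*(1 + 2 + ... + N) ~ r*I*N^2/2.
drop = r * I_cell * N * (N + 1) / 2
print(drop, drop / V_read)                      # ~5 mV, i.e. ~5% of the read voltage
```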
At the circuit and architecture level, various techniques have been proposed to mitigate the impact of IR drop. Typically, the synaptic weights in a neural network have a differential structure, so two separate 1T1R memory devices are used to represent a single weight. This is shown in Figure 12a, where two contiguous columns are used to represent the positive and negative weights, which are summed up in the digital domain [69]. This configuration results in a relatively large impact of the IR drop, since two array locations are used for each matrix entry. To mitigate this issue, Figure 12b shows a signed-weighted 2-transistor/2-resistor (SW-2T2R) structure to represent the positive and negative part of each weight, where the current summation is performed directly within the memory cell, thus effectively reducing the impact of IR drop by roughly a factor of 2 [69]. Another approach is to use small-size crosspoint arrays and/or to organize the IMC architecture in computing tiles [70,71], where the overall problem is divided into smaller operations with smaller summation currents, hence a lower IR drop.
MVM Accelerator
Figure 13a shows a circuit schematic implementing the MVM accelerator of Equation (1). The conductance matrix G is stored in the RRAM devices of the crosspoint array, and the input voltage vector V is applied with a digital-to-analog converter (DAC) connected to the rows of the array. The columns are connected to the sensing circuit, consisting of a trans-impedance amplifier (TIA) that converts currents into voltages, and an analog-to-digital converter (ADC), which encodes the analog signals into digital words. MVM is performed in a single step, irrespective of the matrix size, although the sensing operation is typically multiplexed among the columns to reduce the peripheral overhead [60]. Since forward propagation in a neural network basically consists of extensive MVMs of activation signals multiplied by synaptic weights [74], MVM crosspoint circuits have been heavily used for accelerating both the inference [22,43] and the training [41,42] stages of neural networks. Since the neuron activation function is typically performed in the digital domain, various training algorithms have been developed, including supervised training [42], unsupervised learning [75] and reinforcement learning [76]. MVM can serve as the core operation of various neural networks, including fully-connected neural networks [43], convolutional neural networks (CNNs) [77] and recurrent neural networks, such as long short-term memory (LSTM) networks [78]. Integrated circuits comprising the crosspoint memory array for MVM and the routing/sensing units have been reported [24,79,80], demonstrating strong improvements in throughput and energy efficiency compared to conventional digital accelerators.
Among recurrent neural networks, the Hopfield neural network (HNN) [81] provides a brain-inspired structure that is capable of storing and recalling attractors, thus allowing an associative memory to be realized in hardware [50,82,83]. Interestingly, the HNN can also naturally perform gradient descent, thus accelerating the solution of optimization problems such as constraint satisfaction problems (CSPs) [84]. By performing inference on an appropriate Hamiltonian representing the problem [85], the HNN converges to a stable state representing a minimum of the energy function of the problem, given by E = −(1/2) ∑_{i,j} G_ij v_i v_j (8), where G is the encoded matrix and v_i, v_j are the states of neurons i and j, respectively. However, the most difficult problems are usually not described by a convex energy landscape; thus even the HNN cannot solve them, as the steady state remains trapped in a local minimum of the energy function. To prevent the HNN from locking into a local minimum, noise can be added to the system to perform simulated annealing [86], thus allowing the state to escape from the local minimum and converge to the global minimum corresponding to the solution of the optimization problem. Since memristive devices are inherently stochastic, various implementations combining MVM acceleration with noise generation have been proposed for accelerating the solution of CSPs [47][48][49][50][87,88]. Within this type of IMC accelerator, a major challenge is the extension of the circuit size to the scale of real-world intractable problems.
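A toy version of the noisy Hopfield dynamics described above, with a small random symmetric matrix and bipolar neuron states; all sizes, the noise schedule and the update-rule details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 8
G = rng.normal(0, 1, (n, n))
G = (G + G.T) / 2                                # symmetric couplings
np.fill_diagonal(G, 0)
v = rng.choice([-1, 1], n)                       # bipolar neuron states

def energy(G, v):
    return -0.5 * v @ G @ v                      # energy function of Equation (8)

for step in range(200):
    j = rng.integers(n)
    h = G[j] @ v                                 # local field, i.e. one row of the MVM
    noise = rng.normal(0, max(0.0, 1 - step / 100))  # annealed noise amplitude
    v[j] = 1 if h + noise > 0 else -1            # threshold update of one neuron
```

Without the noise term the asynchronous threshold update never increases the energy of Equation (8); the decaying noise lets the state escape shallow local minima early on, in the spirit of simulated annealing.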
The MVM circuit can also be used to accelerate the training of a neural network according to the outer-product concept [91]. According to the back-propagation technique of stochastic gradient descent (SGD) training, the error between the true output and the actual output at a certain stage of training is used to update each synaptic weight according to the formula Δw_ij = η x_i δ_j (9) [74], where Δw_ij is the synaptic weight increase/decrease, η is the learning rate, x_i is the input or activation value and δ_j is the back-propagated error. Equation (9), which represents the outer product between the input vector x and the error vector δ, can be executed in hardware, e.g., by applying pulses of fixed width and amplitude proportional to x at the array rows, and pulses of fixed amplitude and width proportional to δ at the array columns [91]. This concept, which is the main idea behind the hardware acceleration of time- and energy-consuming DNN training, relies on the linearity of the weight update in both time and voltage amplitude, which is rarely demonstrated in practical memory devices [3,92].
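The outer-product update of Equation (9) is a one-line operation in software; the sketch below uses arbitrary sizes and values and keeps the weights as plain numbers rather than conductances.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out, eta = 8, 4, 0.1
W = rng.normal(0, 0.1, (n_out, n_in))    # synaptic weight matrix stored in the array

x = rng.random(n_in)                     # input / activation vector x_i
delta = rng.normal(0, 1, n_out)          # back-propagated error vector delta_j

dW = eta * np.outer(delta, x)            # Delta_w_ij = eta * x_i * delta_j (rank-1 update)
W += dW                                  # in hardware: row pulses ~ x, column pulses ~ delta
```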
Analog Closed-Loop Accelerators
Analog in-memory circuits can solve algebraic problems without the need for digital iterations [28,29,38]. Figure 13b shows a circuit for the non-iterative solution of a system of linear equations [28]. Given a matrix problem Ax = b, it is possible to encode the coefficients of A in the conductance matrix G, while the input vector b is applied as currents i at the crosspoint rows, which are connected to the negative inputs of the operational amplifiers. Since the outputs of the operational amplifiers are connected to the array columns, Kirchhoff's law for the crosspoint array reads i + Gv = 0, where v is the output voltage vector on the columns and where we have neglected the input currents entering the high-impedance input nodes of the operational amplifiers. This leads to v = −G⁻¹ i (10), which corresponds to the solution of the system of linear equations Ax = b. Note that the solution is obtained in one step, without iterations. It has been shown that, to a first approximation, the time complexity for solving linear systems in this circuit does not depend explicitly on the matrix size [93]; i.e., it displays an O(1) complexity, since the speed of solution solely depends on the configuration of poles, which is limited by the smallest eigenvalue of the conductance matrix G. The circuit can implement both positive and negative coefficients of the matrix A by adding an inverting buffer along each array column and connecting a second crosspoint array with the negative coefficients [28]. In addition to linear systems Ax = b, closed-loop crosspoint arrays allow one to solve eigenvector problems of the form (A − λI)x = 0, where λ is the principal eigenvalue and I is the identity matrix [38]. For instance, the PageRank algorithm [94], which is at the backbone of many computing tasks for searching, ranking and recommendation, relies on the calculation of the principal eigenvector of a given link matrix. Similar to the linear system solution, the IMC-accelerated computation of eigenvectors has a time complexity that does not depend explicitly on the matrix size, thus displaying O(1) time complexity [95]. Figure 13c illustrates the IMC circuit for analog closed-loop linear-regression problems [29]. Assuming a set of N data points, where the values of M independent variables are stored in a matrix X of dimension N × M and y contains the values of the dependent variable, a regression consists of the calculation of the M coefficients α that minimize the square error of the matrix equation Xα = y. This problem is non-iteratively solved by the circuit in Figure 13c, where the input current vector i is applied to the rows of a rectangular crosspoint array that maps the matrix X; the rows are in turn connected to the negative input terminals of operational amplifiers A1 in TIA configuration with gain G_T. The TIA output terminals are connected to the rows of a second rectangular crosspoint array, also mapping X. The columns of the second array are connected to the positive input terminals of operational amplifiers A2, whose output terminals are connected to the columns of the first array. Applying Kirchhoff's laws at the first array, the output voltage of the set of TIAs is given by u = G_T(i + Xv), where v is the output voltage of the operational amplifiers A2. Since no current can flow into the high-impedance input terminals of the operational amplifiers A2, we can assume Xᵀu = 0, which can be rearranged as v = −(XᵀX)⁻¹ Xᵀ i (11). Here, the output voltage v is given by the product of the Moore-Penrose pseudo-inverse of X and the
input vector y mapped by i, which provides the least-squares solution α by minimizing the norm ||Xα − y|| [29]. Linear and logistic regression accelerators with the circuit configuration of Figure 13c have been demonstrated [29], with applications ranging from predicting the price of a house based on a set of descriptive features to training the output layer of an extreme learning machine, a particular neural network with a wide, random input layer and an output layer that can be trained with logistic regression [29].
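The input-output relations of the two closed-loop circuits, Equations (10) and (11), can be checked numerically; the sketch below only verifies the algebra of the ideal circuits on random matrices and does not model amplifier dynamics or device non-idealities.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-system solver: Kirchhoff's law at the column amplifiers gives i + G v = 0,
# so the steady-state output voltage is v = -inv(G) i, the solution of the linear system.
N = 5
G = rng.random((N, N)) + N * np.eye(N)          # well-conditioned conductance matrix
i = rng.random(N)
v = -np.linalg.inv(G) @ i
assert np.allclose(G @ v, -i)

# Regression solver: the two-array feedback loop settles at v = -inv(X^T X) X^T i,
# the least-squares solution of X w = -i (the sign follows the circuit convention).
n_pts, n_coef = 20, 3
X = rng.random((n_pts, n_coef))
i = rng.random(n_pts)                            # input currents mapping the targets
v = -np.linalg.inv(X.T @ X) @ X.T @ i
w_lstsq, *_ = np.linalg.lstsq(X, -i, rcond=None)
assert np.allclose(v, w_lstsq)
```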
Analog CAM
In a conventional random access memory (RAM), an address is given as input and the stored word is returned as output. CAMs work in the opposite direction; i.e., the data content is provided as the input word, while its location in the memory (the address) is returned as output, thus serving as a data search and data matching circuit [96]. CAM is generally implemented with SRAM-based circuits, which can be relatively bulky and power-hungry; thus, RRAM-based CAMs have been proposed [97][98][99]. A distinctive advantage of RRAM with respect to SRAM is the possibility of multilevel CAM [72,100] and analog CAM [73], which allows the memory density to be increased significantly.
Figure 13d shows the schematic of an analog CAM cell based on RRAM devices [73]. By storing two different values in the conductances of RRAMs M1 and M2, pre-charging the match line (ML) and applying an analog search input to the data line (DL), the ML will remain charged only if the voltage on DL is such that f(M1) < V_DL < f(M2). This property can be used to implement a multilevel CAM; e.g., if the stored number to be searched is x = 15, M1 could be set to the level corresponding to 14.5 and M2 to the level corresponding to 15.5, where the 0.5 range represents the acceptance tolerance within an error of LSB/2. Interestingly, analog CAMs have been used to accelerate machine learning tasks such as one-shot learning in memory-augmented neural networks [72] and tree-based models [101]. In the latter case, each threshold of a root-to-leaf path in a decision tree is mapped to an analog CAM row, thus allowing inference to be accelerated in parallel over a large number of trees.
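The analog CAM matching rule can be captured with a tiny helper; the stored bounds below correspond to the x = 15 example in the text, and everything else is assumed for illustration.

```python
import numpy as np

def acam_row_match(v_dl, bounds):
    """A row matches if every analog input lies inside its stored [low, high] range."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    return bool(np.all((v_dl > lo) & (v_dl < hi)))

bounds = np.array([[14.5, 15.5]])                  # one cell storing 15 with LSB/2 tolerance
print(acam_row_match(np.array([15.0]), bounds))    # True: the match line stays charged
print(acam_row_match(np.array([16.0]), bounds))    # False: the match line discharges
```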
Outlook on Memory Technologies and Computing Applications
For each specific application of analog IMC, such as the training of a neural network or the accelerated solution of algebra problems, different properties are required from the device and circuit perspective. To illustrate the device- and application-dependent requirements, Figure 14a shows a radar/spider chart summarizing the device properties in terms of cycling endurance, low-current operation, scaling and possibility of 3D integration, ohmic/linear conduction behavior, programming speed, analog precision and linear conductance update. Each property is shown on a relative scale for various IMC tasks, including algebra accelerators, DNN training/inference accelerators and spiking neural networks (SNNs). Among these computing applications, DNN training accelerators place the highest demand on memory device performance. This is because training acceleration relies on the synaptic weights being updated online, typically in parallel via the outer-product operation, which requires linearity in both time and voltage for accurate and fast convergence [41,102]. This property is extremely difficult to achieve with resistive memory technologies due to the tendency for an abrupt increase/decrease of conductivity, followed by a saturating change of conductance [3,92]. A typical figure of merit for linearity is the exponent n_pot in the empirical expression of Equation (12), which describes the increase in conductance G as a function of the number p of potentiation pulses, where G_min and G_0 are fitting parameters. A similar exponent n_dep describes the linearity under depression. Parameters n_pot and n_dep should generally have similar values to allow for symmetric potentiation/depression, which is another key requirement for online training. Previous simulation results have shown that the recognition accuracy can range between about 82% for PCMO-based RRAM and a highest possible accuracy of about 94% for perfectly linear and symmetric characteristics, in the case of a 3-layer neural network for MNIST recognition [103]. Online training accelerators also generally require high programming speed and cycling endurance, due to the programming-intensive update of the synaptic elements. On the one hand, gradual set/reset operation reduces the stress on the memory device compared with the full set/reset of binary memory applications; on the other hand, the ability for analog-type programming generally degrades with cycling [104,105]. Analog closed-loop algebra accelerators also place high demands on device properties, including highly linear conduction characteristics to prevent unstable and oscillatory behavior of the system. DNN inference accelerators show less stringent requirements, thanks to the mostly-read operation of the memory array for accelerating the MVM, while a relatively small number of program/verify operations are needed to reconfigure the system for a new AI task, which considerably reduces the requirements in terms of endurance, programming speed and linear weight update. In the case of non-ohmic conduction, the array can be operated in a shift-and-add fashion, such that the input is applied as a digital word and the output is reconstructed with post-processing [23]. Non-linear characteristics are generally observed in RRAM devices with low conductance, which is itself essential for all computing schemes. When a device is programmed in the low conductance range, close to the HRS, transport typically takes place via Poole-Frenkel phenomena, which have a non-linear dependence on voltage [106]. The desired conductance range for parallel
MVM is generally below 10 µS, which would allow for an overall error due to IR drop around 5% for a 100 × 100 array (see Section 5).Achieving a lower conductance in the sub-µS range would enable the scaling-up of the computational array, with advantages of throughput, energy efficiency and area efficiency due to the smaller peripheral circuits.
Another key general requirement is the precision of conductance, i.e., ensuring a low σ G .For the case of inference accelerators, it has been recently shown that the network accuracy decreases only from 91.6% to 91.2% for a σ G between 0 and 10 µS for a 2-layer perceptron for MNIST recognition [57].Studies on deeper networks have indicated that the sensitivity to conductance variation can vary widely depending on the specific neural network [107].For instance, a ResNet model shows an increasing sensitivity to conductance for increasing number of layers, which can be understood by error accumulation in the forward propagation.On the other hand, AlexNet CNN shows a decreasing sensitivity at increasing size of the convolutional filter, due to error averaging within larger filters.
SNNs show the most relaxed requirements, thanks to neuro-biological frequencies in the 100 Hz range, which significantly relax the demand in terms of programming speed. Furthermore, update/conduction non-linearity and stochasticity are generally well tolerated, or even potentially harnessed to perform brain-inspired adaptation and computation [50,108]. All applications generally require high scalability and 3D integration of the memory elements to take advantage of a high density of information for data-intensive computing. For instance, a recent neural network model for natural language processing (NLP), called generative pre-trained transformer 3 (GPT-3), includes 175 billion parameters, which approximately corresponds to 175 GB of memory devices [109].
Figure 14b shows the figures of merit for two-terminal devices, namely RRAM, phase change memory (PCM) [33,110,111] and spin-transfer torque (STT) magnetic random access memory (MRAM) [112]. RRAM and PCM show comparable properties, the main difference being the analog precision, which is typically lower in PCM devices because of drift phenomena [113]. STT-MRAM offers high programming speed, good endurance and highly ohmic conduction [11]; however, the conductance is generally limited to two states, corresponding to the parallel and antiparallel magnetic polarization of the magnetic tunnel junction. As a result, the use of the STT-MRAM device is limited to digital computing, such as binarized neural networks (BNNs) [114,115]. Figure 14c shows the relative performance of three-terminal devices for accelerating analog IMC, including both CMOS-based memory technologies and memristive technologies [116]. Static random access memory (SRAM) is typically limited to digital operation, whereas Flash, the ferroelectric field-effect transistor (FEFET) [117] and electrochemical random access memory (ECRAM) [118] show well-tunable analog conductance and low-current operation. Compared to 2-terminal devices, transistor-type memory devices can display a lower conductance thanks to the sub-threshold operation regime [119]. The relatively low conductance of ECRAM transistors can be explained by the use of low-mobility channels, usually consisting of metal oxides such as WO 3 [120] and TiO 2 [121]. ECRAM devices have also shown well-tunable conductance levels, which translates into a large number of multilevel states [120]. The precision of conductance can be attributed to the bulk-type conduction process within the switchable metal-oxide channel, as opposed to the filamentary conduction of typical 2-terminal RRAM [121]. The weight of ECRAM can be updated with extremely high linearity, thus offering an ideal device for online training accelerators [118]. For instance, the non-linearity exponents of Equation (12) are n_pot = 0.347 and n_dep = 0.268 in Li-based ECRAM, compared to a minimum n_pot of about 2 for most RRAM and PCM devices [118]. While CMOS technology has limited capability for 3D integration, the memristive FEFET and ECRAM seem most suitable for high-density, back-end integrated memory arrays for the analog computation of large-scale problems.
Conclusions
This work reviews the status and outlook of in-memory computing with RRAM devices. The RRAM device concept and the programming techniques aimed at high-precision analog conductance are presented. The possible encodings of computational coefficients in RRAM, including binary, unary and various multilevel approaches, are compared in terms of precision and circuit area. The typical challenges for the analog precision of conductance are discussed, in terms of both device reliability (programming variations, drift and time-dependent fluctuations) and circuit-level parasitic resistance leading to IR-drop errors. The most relevant analog IMC circuit primitives, including MVM, linear algebra accelerators and CAM, are presented. Finally, RRAM is compared to other computational memory devices in terms of reliability, precision, low-current operation and scaling. From this overview, RRAM appears as one of the most mature and most promising technologies, although significant challenges remain in reducing the operating current and controlling time-dependent fluctuations and programming variations.
Figure 1 .
Figure 1.A conceptual illustration of the different scales of in-memory computing.Computing relies on fundamental physical laws that are implemented in various types of memory devices and circuit designs.To perform large-scale computation, new architectures have to be developed for accelerating real world applications.New applications can also arise given the possibility of performing highly-parallel computation.The design and optimization of each different level should be performed by considering all the hierarchical stack.
Figure 2 .
Figure 2. RRAM structure and operation.(a) RRAM device made of a metallic TE and BE, with an interposed dielectric layer.After a forming procedure it is possible to set/reset the device and switch from LRS to HRS and vice versa.(b) Typical I-V curve of a RRAM device with 1T1R configuration for increasing gate voltage V G during set operation demonstrating analog programmability.Adapted from [31,38].
Figure 3 .
Figure 3. Crosspoint memory structure to perform analog MVM. At the intersection of each TE line (orange) with each BE line (grey), a RRAM device is placed (blue). By programming the RRAM conductance to the matrix entries of A and applying a voltage vector V on the columns, the resulting current flowing in each row j tied to ground, according to Kirchhoff's law, is I_j = ∑_{i=1}^{N} G_ij V_i. Adapted from [29].
Figure 4 .
Figure 4. Analog programming with set pulses at increasing I C according to the IGPP algorithm.(a) Conceptual schematic of the IGPP algorithm.(b) Conductance as a function of pulse number for multiple iterations (grey lines) and the average behavior (blue).(c) Cumulative distribution function (CDF) of the conductance for increasing gate voltage V G .(d) Standard deviation of the conductance σ G as a function of the average conductance G. Adapted from [51].
Figure 5 .
Figure 5. Analog programming with reset pulses.(a) Conceptual schematic of the IRPP algorithm.(b) Conductance as a function of pulse number for multiple iterations (grey lines) and the average behavior (blue).(c) Cumulative distribution function (CDF) of the conductance for increasing stop voltage V STOP .(d) Standard deviation of the conductance σ G as a function of the average conductance G for IRPP and IGPP algorithms.Adapted from [51].
Figure 6 .
Figure 6.Program and verify algorithm.(a) Conceptual schematic of ISPVA program and verify algorithm.(b) Mean conductance as a function of set voltage V TE for multiple values of the gate voltage V G .(c) Probability density function (PDF) of programmed conductance levels.Reprinted from [43,57].
Figure 7 .
Figure 7. Drift and fluctuations in RRAM devices.(a) Conductance as function of time for 4 different analog levels after heating at 150 °C.(b) Different fluctuations' effect as a function of time.(c) Cumulative distributions of 5 programmed levels before and after annealing at T = 125 °C.(d) Conceptual schematic of the neural network used to evaluate the effect of drift and its accuracy in classifying the MNIST dataset before (e) and after (f) annealing.Adapted from [43,58,59].
Figure 9 .
Figure 9.Comparison between analytical formula (top) and MC simulations (bottom) of standard deviation of the programming error σ as a function of the number of bits N and the standard deviation of the conductance σ G for multilevel (a,f), binary (b,g), unary (c,h), multilevel with redundancy factor M = 4 (d,i) and slicing in M = 2 units (e,j).Adapted from [51].
Figure 10 .
Figure 10.Figures of merit of various programming strategies.(a) Standard deviation of the overall programming error σ G as a function of the conductance error σ G assuming N = 7 bits.(b) Maximum number of bits N max as a function of the conductance error σ G considering a programming error σ = 1%.(c) Bit density as a function of the conductance error σ G .Adapted from [51].
Figure 11 .
Figure 11.IR drop in crosspoint arrays.(a) Explicit representation of the parasitic resistance in a crosspoint array, namely the input resistance R in , output resistance R out and wire resistance r.(b) Memory array with current controlled memory element programmed to various saturation currents.(c) Comparison of the impact of IR drop for ohmic and saturated characteristics.Reprinted from [67].
Figure 12 .
Figure 12.Bipolar conductance mapping and IR drop.(a) Typical bipolar conductance mapping in two adjacent 1T1R columns with the positive one (red) encoding the positive part of the weights and the negative one (blue) encoding the negative part of the weights.Currents are then subtracted in the digital domain after conversion.(b) To reduce the impact of IR drop, conductance representing the positive and negative weights can be summed up in the analog domain with a dedicated ST-2T2R circuit structure.Reprinted from [69].
Figure 13 .
Figure13.Schematic of IMC circuits for various applications.(a) MVM accelerator performing I = AV, where V is the input voltage vector, A is the conductance stored in the crosspoint array and I is the vector of the current in each row.(b) Linear system solver performing V = A −1 I where I is the vector of the row input currents and V is the output vector of column voltages.(c) Regression problem solver performing v = −(X T X) −1 X T i where i is the vector of the input row currents at the left crosspoint array, X is the matrix of conductances in the two crosspoint arrays and v is the output voltage of the second stage of amplifiers.(d) Analog CAM cell, a range is stored in the memory conductance M1 and M2, the ML is pre-charged and an analog input is applied to the DL line.If the analog input is in the range of acceptance given by M1 and M2 the ML remains charged otherwise it discharges.Reprinted from[31,73].
Figure 14 .
Figure 14. Application requirements (a) and figures of merit for various memory technologies with 2-terminal (b) and 3-terminal (c) structures.
Table 1 .
Comparison of various mapping schemes, in terms of conductance range, variability-induced error and resulting maximum number of programmable bits.
"Engineering",
"Computer Science"
] |
What Is the Brain Basis of Intelligence?
Developing civilization arguably would be impossible without human intelligence, as would be our proudest achievements: Shakespeare's plays, Mozart's piano concertos, Darwin's theory of evolution, and Einstein's relativity. What is it about the human brain that endows us with the intelligence that provides the basis of such achievements? John Duncan sets out to answer this question in his new book How Intelligence Happens.

Duncan, Assistant Director of the MRC Cognition and Brain Sciences Unit in Cambridge (United Kingdom), has brought together material from psychology, neuropsychology, cognitive science, artificial intelligence, brain imaging, and single-cell recording studies to develop a theory of human intelligence. The book is intended for a lay audience, and thus is written in nontechnical language with relaxed, anecdotal sections and personal stories interspersed with descriptions of experimental results, but it will be enjoyed by professionals who want to learn about frontal lobe function and modern ideas of human intelligence.

Briefly, Duncan's main argument is that our frontal lobes contain circuits that break complex problems down into manageable subproblems, and then oversee the solution of these subproblems in the proper order. This argument is developed in five chapters, the first of which introduces the idea of general (or fluid) intelligence in a historical context by focusing on Spearman's research, which gave rise to his notion of g (general) and s (special) factors. Each kind of task, such as remembering the words in a list or figuring out a puzzle, depends on special factors (s) needed to make you good at that task and a general factor (g) that helps you to be good at any task.

After presenting basic neuroanatomy and describing some methods for studying brain function (for example, functional magnetic resonance imaging, fMRI), Duncan recounts how he had the insight that fluid intelligence resides mostly in the frontal lobes. When he was a postdoctoral fellow, one of Duncan's jobs was to do psychological testing on candidate bus drivers to try to predict which ones were more likely to have accidents. During this testing, he noticed that drivers with low g scores knew the rules they were supposed to follow in performing the tests, but somehow could not manage to apply these rules effectively. Their behavior reminded Duncan of Luria's descriptions of disorganized behavior exhibited by patients with frontal lobe lesions, an observation that set Duncan's future career path.

Drawing on observations from artificial intelligence, a field whose goal is to give computers human-like intelligence, Duncan argues that, although brains and digital computers have quite different architectures and use different methods to carry out their computations, problem solving by computers offers a good model for how people do it: you have to break down the entire problem into small subproblems, and solve them in sequence. Being able to execute this strategy is at the heart of fluid intelligence, and its neuronal substrate should therefore be found in the frontal lobes. But how do the neurons in the frontal lobes do it? Duncan reviews recent research showing that circuits in the frontal lobes have the flexibility to encode, in the firing rate of resident neurons, just about anything being held in mind. The firing of a particular neuron might record a location that is to be held in short-term memory at one time, or be a cat-versus-dog classifier at another time.
One of the things that makes it hard for artificial intelligence to mimic human intelligence is that we can be, but often are not, rational beings. Capturing our special forms of irrationality poses a special problem for computer programmers. In the final chapter, Duncan describes some ongoing and incomplete research to give an impression of the direction intelligence studies is heading. For example, he outlines some experiments that permit us to ‘‘read a person’s mind’’ by examining the pattern of brain activity revealed by fMRI. The experimental findings relating to human intelligence often are unexpected and arresting—for example, people prefer a medical treatment when they are told it saves the lives of 90% of the patients over the same treatment when told that 10% of the patients die. The opportunity to learn about these discoveries will make this book
Developing civilization arguably would be impossible without human intelligence, as would be our proudest achievements: Shakespeare's plays, Mozart's piano concertos, Darwin's theory of evolution, and Einstein's relativity. What is it about the human brain that endows us with the intelligence that provides the basis of such achievements? John Duncan sets out to answer this question in his new book How Intelligence Happens.
Duncan, Assistant Director of the MRC Cognition and Brain Sciences Unit in Cambridge (United Kingdom), has brought together material from psychology, neuropsychology, cognitive science, artificial intelligence, brain imaging, and single-cell recording studies to develop a theory of human intelligence. The book is intended for a lay audience-and thus is written in nontechnical language with relaxed, anecdotal sections and personal stories interspersed with descriptions of experimental results-but it will be enjoyed by professionals who want to learn about frontal lobe function and modern ideas of human intelligence.
Briefly, Duncan's main argument is that our frontal lobes contain circuits that break complex problems down into manageable subproblems, and then oversee the solution of these subproblems in the proper order. This argument is developed in five chapters, the first of which introduces the idea of general (or fluid) intelligence in a historical context by focusing on Spearman's research, which gave rise to his notion of g (general) and s (special) factors. Each kind of task, such as remembering the words in a list or figuring out a puzzle, depends on special factors (s) needed to make you good at that task and a general factor (g) that helps you to be good at any task.
After presenting basic neuroanatomy and describing some methods for studying brain function (for example, functional magnetic resonance imaging, fMRI), Duncan recounts how he had the insight that fluid intelligence resides mostly in the frontal lobes. When he was a postdoctoral fellow, one of Duncan's jobs was to do psychological testing on candidate bus drivers to try to predict which ones were more likely to have accidents. During this testing, he noticed that drivers with low g scores knew the rules they were supposed to follow in performing the tests, but somehow could not manage to apply these rules effectively. Their behavior reminded Duncan of Luria's descriptions of disorganized behavior exhibited by patients with frontal lobe lesions, an observation that set Duncan's future career path.
Drawing on observations from artificial intelligence, a field whose goal is to give computers human-like intelligence, Duncan argues that, although brains and digital computers have quite different architectures and use different methods to carry out their computations, problem solving by computers offers a good model for how people do it: you have to break down the entire problem into small subproblems, and solve them in sequence. Being able to execute this strategy is at the heart of fluid intelligence, and its neuronal substrate should therefore be found in the frontal lobes. But how do the neurons in the frontal lobes do it? Duncan reviews recent research showing that circuits in the frontal lobes have the flexibility to encode, in the firing rate of resident neurons, just about anything being held in mind. The firing of a particular neuron might record a location that is to be held in short-term memory at one time, or be a cat-versus-dog classifier at another time.
One of the things that makes it hard for artificial intelligence to mimic human intelligence is that we can be, but often are not, rational beings. Capturing our special forms of irrationality poses a special problem for computer programmers. In the final chapter, Duncan describes some ongoing and incomplete research to give an impression of the direction intelligence studies is heading. For example, he outlines some experiments that permit us to ''read a person's mind'' by examining the pattern of brain activity revealed by fMRI.
The experimental findings relating to human intelligence often are unexpected and arresting-for example, people prefer a medical treatment when they are told it saves the lives of 90% of the patients over the same treatment when told that 10% of the patients die. The opportunity to learn about these discoveries will make this book rewarding for the lay reader. At the same time, the broad range of disciplines represented will provide many professional neurobiologists with welcome new facts and ideas.
About the Author
Charles F. Stevens, Professor at the Salk Institute, is mainly known for work in the biophysics of ion channels and for studies of the molecular and cell biological mechanisms underlying synaptic transmission. In recent years, he has switched mainly to theoretical neurobiology and is working to identify the design principles that brains use to endow their neural circuits with a scalable architecture-that is, circuits that can be made computationally more powerful by simply making them larger. | 2,083.2 | 2011-06-01T00:00:00.000 | [
"Psychology",
"Philosophy"
] |
Using Mobile Phones for Activity Recognition in Parkinson’s Patients
Mobile phones with built-in accelerometers promise a convenient, objective way to quantify everyday movements and classify those movements into activities. Using accelerometer data we estimate the following activities of 18 healthy subjects and eight patients with Parkinson’s disease: walking, standing, sitting, holding, or not wearing the phone. We use standard machine learning classifiers (support vector machines, regularized logistic regression) to automatically select, weigh, and combine a large set of standard features for time series analysis. Using cross validation across all samples we are able to correctly identify 96.1% of the activities of healthy subjects and 92.2% of the activities of Parkinson’s patients. However, when applying the classification parameters derived from the set of healthy subjects to Parkinson’s patients, the percent correct lowers to 60.3%, due to different characteristics of movement. For a fairer comparison across populations we also applied subject-wise cross validation, identifying healthy subject activities with 86.0% accuracy and 75.1% accuracy for patients. We discuss the key differences between these populations, and why algorithms designed for and trained with healthy subject data are not reliable for activity recognition in populations with motor disabilities.
INTRODUCTION
Accurately tracking the activities of patients with motor disabilities has the potential to better inform patient care. With more precise, objective measures treatment alternatives can be evaluated more definitively. This is particularly important in motor disabilities such as Parkinson's disease (PD) that respond to an increasing variety of treatment options, including drugs and various exercise therapies (Palmer et al., 1986;Schenkman et al., 1998;Goodwin et al., 2008;Dibble et al., 2009). Quantifying symptoms both in the clinic and at home has the potential to provide functional measures better associated with quality of life (Ellis et al., 2011).
Current means of evaluating patient mobility are limited. Clinical evaluations require a patient to travel to the location of a health care provider, and testing is expensive in terms of money and clinician time. This limits the frequency of clinical evaluations to only a few times per year for personal needs and only a few times per week during research studies. For better temporal resolution, studies often turn to patient journaling. Asking patients to periodically indicate their activities suffers from a number of problems. First, it is subjective, leading to changes based on the mental state of the subject. Also, because of the inconvenience to patients, it is difficult to achieve high compliance, with one study indicating only an 11% compliance rate using journaling (Stone et al., 2003). It would be helpful to develop a measure of evaluating patient mobility that is both frequent and convenient for the subject.
Dedicated accelerometers are an inexpensive, standardized way of measuring movement (Mathie et al., 2004b). These components can cost as little as one dollar, and can measure movement in any direction as well as orientation relative to gravity. For example, by attaching an accelerometer to a shoe, one can estimate the amount of time running and walking based on the periodic movements. For activity recognition, there have been many studies which place these accelerometers at specific locations on the body -including the head, chest, arm, foot, and thigh (reviewed in Kavanagh and Menz, 2008). The advantage of placing accelerometers at such locations is more consistent signals across individuals. Although such accelerometers are inexpensive, they are often dedicated equipment that has to remain attached at a particular location on the body to be effective. This is often an impractical inconvenience for long-term use.
Modern mobile phones have built-in accelerometers that can be used to track movements without the need for an additional device (Brezmes et al., 2009;Gyorbiro et al., 2009;Ryder et al., 2009;Fernandes et al., 2011). They have their own power sources, memory storage capabilities, and can transmit data wirelessly. Patients could simply download an app onto their smartphone enabling data collection and analysis. Mobile phones allow automatic, convenient, real-time monitoring, and recording, which can be invaluable to large-scale studies and personal patient health monitoring.
Unfortunately, applying current mobile phone strategies to populations with motor disabilities is challenging. For example, PD symptoms include tremor, slowed motion (bradykinesia), rigid muscles, loss of common automatic movements, and impaired posture (Jankovic, 2008). These symptoms all can adversely affect activity recognition. Activity recognition strategies have been tailored specifically for the elderly (Najafi et al., 2003), individuals with muscular dystrophy (Jeannet et al., 2011), and even PD (Salarian et al., 2007), but each of these studies was done with accelerometers at standardized locations or using multiple sensors throughout the body. Performing activity recognition for a population with a motor disability when the phone is carried naturally in pockets or belt clips poses additional challenges.
Here we show how modern machine learning techniques can quantify the movements of PD patients that carry mobile phones. We first collect data from both healthy subjects and PD patients performing a standard set of activities. From this data, we analyze the precision of activity recognition both within and across the two groups. Ultimately, we demonstrate an approach to activity recognition using mobile phone devices for patient populations, allowing us to better monitor patient responses to treatment in future studies.
MATERIALS AND METHODS
Eighteen healthy subjects and eight PD patients were recruited for this study. The eighteen healthy subjects, having had no previous history of a movement disorder, came from two groups: 13 younger subjects (6M/7F, 25.1 ± 3.0 years) and five older subjects (5F, 53.4 ± 7.4 years). The PD patients were mildly to moderately affected, in Hoehn and Yahr stages 1-3 (7F/1M, 67.0 ± 8.1 years, median ± range). Patients were recorded while taking their usual medications (ON-med condition). Many patients presented mild dyskinesias during the course of the experiment. Written, informed consent was obtained for all subjects. The Northwestern University Institutional Review Board approved this study.
All subjects were instructed to carry T-mobile G1 phones running Android OS version 1.6 in their front pockets. These phones have a standard built-in tri-axial accelerometer with a range of ±2.8 g. The sampling rate was variable between 15 and 25 Hz depending upon the amount of movement.
Subjects were instructed to perform a number of different activities, each for at least 1 min. Before each activity, the subject would select the activity on a specially designed phone app (Figure 1) with the experimenter present to minimize errors. The accelerations were labeled according to the activity they were performing. The activities were performed in a laboratory setting in the order shown below. For a few initial subjects the activities began and ended with an additional lying down activity, but the protocol was simplified by removing this activity for all subjects later on. We also repeated activities to get more recordings with the phone in slightly different positions on the subjects.
DATA PROCESSING AND CLASSIFICATION
The accelerometer signals were preprocessed using the following procedure: the recordings were segmented into 10 s clips in which a given activity was performed for the entire duration. The first and last samples were removed from the ends of the recording to allow time for the phones to enter and be removed from the pockets of subjects. A total of 3388 samples were recorded from healthy subjects, while 2184 samples were recorded from PD patients. The subject and researcher both observed labeling during recording, ensuring the validity of the training and test data. Misclassified samples were only checked for artifacts. The phone accelerometer values were linearly interpolated from a variable rate between 15 and 25 Hz to a uniform 20 Hz.
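The preprocessing just described (resampling to a uniform 20 Hz and cutting the recording into 10 s clips) can be sketched as follows. This is an illustrative reading of the text rather than the authors' code; the function names, the per-clip trimming at the ends, and the array layout are assumptions.

```python
# A minimal preprocessing sketch: resample a variable-rate recording to 20 Hz
# and split it into non-overlapping 10 s clips.
import numpy as np

TARGET_HZ = 20
CLIP_SECONDS = 10

def resample_to_20hz(timestamps, xyz):
    """Linearly interpolate a variable-rate (15-25 Hz) recording onto a uniform 20 Hz grid."""
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / TARGET_HZ)
    resampled = np.column_stack(
        [np.interp(t_uniform, timestamps, xyz[:, axis]) for axis in range(3)]
    )
    return t_uniform, resampled

def segment_into_clips(xyz_20hz, trim_clips=1):
    """Split into 10 s clips (200 samples each); drop the first and last clip
    to allow time for the phone to enter and leave the pocket (an assumption
    about how the trimming was applied)."""
    samples_per_clip = TARGET_HZ * CLIP_SECONDS
    n_clips = len(xyz_20hz) // samples_per_clip
    clips = [xyz_20hz[i * samples_per_clip:(i + 1) * samples_per_clip]
             for i in range(n_clips)]
    return clips[trim_clips:-trim_clips] if len(clips) > 2 * trim_clips else []
```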
The subjects naturally placed the phone in their pockets in the following possible orientations, due to the elongated rectangular shape of the phones.
1. screen in/right side up
2. screen in/upside down
3. screen out/right side up
4. screen out/upside down

In the accelerometer readings, each of these orientations differs only in the signs of the axes. For example, flipping the phone from right side up to upside down (e.g., orientation #1 to #2) changes the sign of the x and y axes, while flipping the screen between facing in and facing out (e.g., #1 vs. #3) changes the sign of the x and z axes. To correct for these different phone orientations, we generated three additional samples for each recorded sample by a coordinate transform that effectively flips the phone 180° along each of its three axes.
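The orientation correction can be illustrated with a small sketch that generates the three sign-flipped copies of each clip; it assumes clips are stored as (samples × 3) arrays and is not taken from the original implementation.

```python
# Generate the original clip plus three copies corresponding to 180° flips,
# which in the accelerometer data amount to sign changes on two of the axes.
import numpy as np

def augment_orientations(clip):
    """clip: (n_samples, 3) array of x, y, z accelerations."""
    flips = [
        np.array([ 1,  1,  1]),   # original orientation
        np.array([-1, -1,  1]),   # right side up <-> upside down (x, y change sign)
        np.array([-1,  1, -1]),   # screen in <-> screen out (x, z change sign)
        np.array([ 1, -1, -1]),   # both flips combined (y, z change sign)
    ]
    return [clip * f for f in flips]
```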
From these 10 s clips, features were extracted, as summarized in Table 1.

Table 1. Features extracted from each 10 s clip (number of features in parentheses):
- For the change in acceleration: mean, standard deviation, skew, kurtosis (12)
- Root mean square (3)
- Smoothed root mean square (5 pt kernel, 10 pt kernel) (6)
- Extremes: min, max, abs min, abs max (12)
- Histogram: counts for −4 to 4 z-score bins (27)
- Fourier components: 32 samples for each axis (96)
- Overall mean acceleration (1)
- Cross product means: xy, xz, yz (3)
- Abs mean of the cross products (3)

Two popular algorithms were used for classification: support vector machines (SVM; Chang and Lin, 2011) and sparse (regularized) multinomial logistic regression (SMLR; Krishnapuram et al., 2005). Both techniques have been applied successfully to a large number of machine learning classification problems with large feature sets. The hyperparameters for both classifiers were found by a grid search over values of the form 10^x, where x is an integer between −5 and 5, selecting the values with the best cross-validated performance in predicting the labeled activities of the healthy subjects. Given the size of the data set used for cross validation, this procedure was not expected to lead to noticeable over-fitting of the hyperparameters. For SMLR, the coefficient of the regularization term during optimization, λ, was 0.0001. For SVM, we normalized each feature to have zero mean and unit variance. We applied radial basis functions, giving us two hyperparameters: the soft slack variable, C, and the size of the Gaussian kernel, γ. The values found by cross validation were C = 1 and γ = 0.1 for the across-subjects validation and C = 10 and γ = 1 for the 10-fold validation.
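A partial sketch of the Table 1 feature extraction is shown below. The per-axis statistics, RMS, extremes, z-score histogram, and Fourier magnitudes follow the table; the exact histogram bin edges and FFT handling are assumptions, and the smoothed-RMS features are omitted for brevity.

```python
# Partial feature extraction for one 10 s clip (200 samples at 20 Hz).
import numpy as np
from scipy.stats import skew, kurtosis

def extract_features(clip):
    """clip: (200, 3) array for one 10 s clip; returns a 1-D feature vector."""
    feats = []
    diff = np.diff(clip, axis=0)                       # change in acceleration
    for axis in range(3):
        d, a = diff[:, axis], clip[:, axis]
        feats += [d.mean(), d.std(), skew(d), kurtosis(d)]   # change-in-acceleration stats
        feats += [np.sqrt(np.mean(a ** 2))]                  # root mean square
        feats += [a.min(), a.max(), np.abs(a).min(), np.abs(a).max()]  # extremes
        z = (a - a.mean()) / (a.std() + 1e-12)
        feats += list(np.histogram(z, bins=9, range=(-4, 4))[0])       # 9 z-score bins
        feats += list(np.abs(np.fft.rfft(a))[:32])                     # 32 Fourier magnitudes
    feats += [clip.mean()]                              # overall mean acceleration
    x, y, z_ax = clip[:, 0], clip[:, 1], clip[:, 2]
    cross = [np.mean(x * y), np.mean(x * z_ax), np.mean(y * z_ax)]
    feats += cross + [abs(c) for c in cross]            # cross-product means and abs means
    return np.asarray(feats)
```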
RESULTS
To examine our ability to classify activities in a patient population, we collected data on both PD patients and healthy subjects, as detailed in the Section "Materials and Methods." Subjects were instructed to carry mobile phones in their front pants pocket while performing a series of activities ( Figure 1C). We applied two different classifiers, SVM and SMLR, to classify the activities. Our intention is to demonstrate the importance of using classifiers trained with data specifically from patient populations.
The activities subjects performed have characteristic patterns in the accelerometer signals. Recordings were made from the three-axis accelerometers in the phones. The orientation of the phone determined the orientation of the accelerometer axes (Figure 1B). There are also visible differences in the data of PD patients compared to healthy subjects. Example clips from these activities (Figure 2) show the presence of dyskinesias in one PD patient. It can also be observed in the accelerations that walking is often less periodic for PD patients than healthy subjects. Such differences can lead to errors in classification if features such as periodicity or vibrations are used for prediction. Although the movements are related between groups, the exact characteristics of those movements have enough differences between the populations to examine the effect of these differences on classification accuracy. Unlike many studies which classify signals based on only a few, specific features, we did not seek individual features that could be used for classification. For example, knowing if someone is standing, sitting, or holding the phone can depend not only on the orientation of the phone, but also on the amount of vibration in the movement. Instead of searching for particular features with clear, independent differences between activities, we chose to apply the standard, state-of-the-art machine learning approach: we constructed a large feature set and had the algorithms select how to combine and weigh the features appropriately.
First, we wanted to compute a classification accuracy measure that can be compared across studies. To do this, we used 10-fold cross validation, selecting every 10th sample for the test set. This accuracy is expected to be fairly high considering movement patterns specific to individual subjects were in both the training and test sets. For SVM classification, this led to a 96.1% accuracy for healthy subjects (Table 2) and a 92.2% accuracy for PD patients (Table 3). Similar results were found for SMLR (89.7% for healthy subjects and 84.7% for PD patients), so only the SVM results are shown for clarity.
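A minimal sketch of this 10-fold evaluation, using an RBF-kernel SVM on standardized features with the hyperparameters quoted in the Methods for the 10-fold setting (C = 10, γ = 1). scikit-learn is used purely for illustration and is not named in the paper; plain unshuffled K-fold is only an approximation of the "every 10th sample" split described above.

```python
# 10-fold cross-validated accuracy with an RBF-kernel SVM on standardized features.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, KFold

def tenfold_accuracy(X, y):
    """X: (n_clips, n_features) feature matrix; y: activity label per clip."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma=1))
    scores = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=False))
    return scores.mean()
```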
We sought to quantify the effect of differences between movements made by Parkinson's patients and healthy subjects on the classification algorithm. Unlike previous studies that have analyzed the difference from healthy subjects for particular features (Salarian et al., 2007; Jeannet et al., 2011), we directly trained the classifiers using healthy subject data and applied the classifiers to patient recordings to observe the effect of those differences. From this, we achieved a much lower accuracy of 60.3% using SVMs (Table 4) and 63.5% using SMLR. The lower accuracy is indicative of the difference between the populations and the need to find another, more accurate way to identify patient activities.
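The cross-population test amounts to fitting on all healthy-subject clips and scoring on the PD clips, sketched below with the across-subjects hyperparameters quoted in the Methods (C = 1, γ = 0.1); again this is an illustration, not the original code.

```python
# Train on healthy-subject clips, evaluate on PD-patient clips.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def healthy_to_pd_accuracy(X_healthy, y_healthy, X_pd, y_pd):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1, gamma=0.1))
    clf.fit(X_healthy, y_healthy)
    return clf.score(X_pd, y_pd)   # fraction of PD clips classified correctly
```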
One entry in the table is made for every 10 s sample. The rows represent the activity being performed and the columns represent the prediction according to the algorithm. Overall accuracy is the sum of the correct classification (diagonal in bold) compared to the sum of all entries in the entire table.
[Figure 2 caption: The patient shown here exhibited dyskinesia in the arm that is clearly visible while holding the phone and somewhat visible during standing and sitting. The patient also had an irregular gait cycle during walking.]

We sought to better understand the large difference in accuracy for the within- vs. across-population results. Ten-fold or leave-one-out cross validation techniques do not remove the effect of the same individual that may have a significantly distinct movement pattern from others. Using 10-fold cross validation would still allow such idiosyncratic movements of individuals to be part of the training and test sets. We wanted a measure that would indicate the accuracy of the algorithm if it were applied to subjects after training. For this we performed subject-wise cross validation for the 18 healthy subjects and found an accuracy of 86.0% for SVMs and 85.2% for SMLR. Note that because much of the variation in movements is across subjects, this accuracy is much lower than that of the 10-fold cross validation. Table 5 presents a breakdown of the classification for SVMs, with SMLR appearing similar.
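Subject-wise cross validation can be expressed as leave-one-subject-out evaluation, where a per-clip subject identifier keeps any one person's clips out of the training folds used to test on that person; the sketch below assumes such an identifier array and is illustrative only.

```python
# Leave-one-subject-out (subject-wise) cross validation.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

def subjectwise_accuracy(X, y, groups):
    """groups: one subject ID per clip, so each fold holds out one subject entirely."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1, gamma=0.1))
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    return scores.mean()
```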
To consider the ability of this approach to be adapted to PD patients by using patient data, we also analyzed the PD patients separately. Similar to the healthy subjects, we applied subject-wise cross validation on the PD patient data alone. Using SVMs, the accuracy was 75.1% (Table 6) and using SMLR it was 76.0%. Although this is lower than the previous percentage for healthy subjects, this is expected as PD patients' movements vary more significantly across subjects. Most importantly, when considering predictions across subjects, training using patient data led to a significantly better prediction than training using healthy data alone.
DISCUSSION
We applied machine learning to signals from mobile phones to classify the activities of people with PD. Instead of handpicking the most relevant features and comparing them, we used a large feature set and had the relevant features selected by the machine learning algorithms. Because many algorithms depend on the training data, these methods were not expected to test well for populations with unique movement patterns. This was done using mobile phones carried naturally in pants pockets. Though this natural way of carrying was expected to lower accuracy values, it is more indicative of expected accuracy if this research is to be applied to studies with patient groups. The most accurate methods of data collection for activity recognition rely on multiple sensors. Often this involves accelerometers, as they are small, relatively inexpensive, and register both movement and orientation to gravity. Some systems have integrated temperature, compass, light, and sound sensors on the waist (Choudhury et al., 2008) or a similar collection of multimodal sensors on the wrist (Maurer et al., 2006; Gyorbiro et al., 2009). For accelerometer-only arrays, multiple sensors may be placed throughout the body - anywhere from three to five locations (Bao and Intille, 2004; Tapia et al., 2007; Krishnan and Panchanathan, 2008) or more. Mannini and Sabatini (2010) provide a review of these approaches.
There are simpler alternatives to using multiple sensors, improving the convenience, cost, and compliance rates. The most common approach is to use a single, waist-mounted accelerometer. This approach has been analyzed on very specific sets of instructed activities with over 98% accuracy (Mathie et al., 2004a; Mathie et al., 2004b; Ravi et al., 2005; Lee et al., 2009). High accuracy ratings were possible in part due to the fixed location of the accelerometers on the body, the use of within-subject vs. across-subject cross validation, and the artificial nature of instructed movements. Signals from walking, sitting, and standing are necessarily more repeatable when in a consistent lab setting following instruction. For comparison, when subjects simply wore such a device for 24 h, with more natural activities, accuracy was closer to 80% (Long et al., 2009). Single, waist-worn accelerometers have been well-studied in the domain of activity recognition, but may need consistent placement for high accuracy.
Unlike dedicated accelerometers, some people already consistently carry mobile phones, making them a convenient platform for recording movements. Most smartphones have built-in accelerometers and are often worn on the person, similar in principle to previous work on accelerometry. Mobile phones have built-in communication protocols that allow simple data logging to the device and wireless transmission. This permits real-time response, or in an experimental setting, compliance verification. Because mobile phones are widely adopted, compliance without verification is already high, as people are used to carrying them. Due to these advantages, mobile phones have the promise to provide a convenient, inexpensive, and objective means to detect the activities of people.
Mobile phones have been used to classify activities of healthy subjects (Bieber et al., 2009; Brezmes et al., 2009; Gyorbiro et al., 2009; Ryder et al., 2009; Wang et al., 2009; Yang, 2009; Kwapisz et al., 2011). Common activities include walking, jogging/running, standing, sitting, and using stairs. The choice of activities influences accuracy rates, and because most rates in these previous studies were not subject-wise cross-validated, their applicability across subjects is difficult to interpret. In Kwapisz et al. (2011), healthy subjects were instructed to carry the phone in their left pocket and perform a specific set of activities; all activities except stair climbing were classified with at least 90% accuracy. Other studies found similarly high accuracy but with different classification techniques (Brezmes et al., 2009; Ryder et al., 2009; Yang, 2009). In Wang et al. (2009), classes were divided as still, walking, running, or in a vehicle, which simplified classification; classification was done using microphones and GPS as well as accelerometer readings. In Yang (2009), a preprocessing technique was used which converted the axes from phone-specific to phone-independent coordinates based on the orientation of gravity, providing 88-90% accuracy. While our results on healthy subjects are in line with previous studies, the central contribution of our paper is the careful analysis of the precision of activity recognition in the context of PD.
PATIENT POPULATIONS
We chose to analyze the PD population for various reasons. Millions of people throughout the world are suffering from diseases that affect mobility. Many diseases, such as stroke, heart disease, or depression, affect large populations but have a wide variety of causes, types, and symptoms. PD, on the other hand, is characterized by a number of common characteristics, which makes analysis easier across subjects (Gelb et al., 1999). Common symptoms such as tremor are visible in movements and lend themselves well to analyses using accelerometers in mobile phones (Joundi et al., 2011; Surangsrirat and Thanawattano, 2012). The PD population is also an important subgroup to consider as it affects a relatively large population - approximately four million people globally (Dorsey et al., 2007).
There is another study that automatically classified and characterized postures and activities for a population of PD patients (Salarian et al., 2007). However, their results used within-subject cross validation and thus cannot speak to the across-subject generalization issue we are discussing here. Moreover, they used a set of accelerometers and gyroscopes instead of mobile phones. Our paper demonstrates the ability to use mobile phone recordings of acceleration to enable quality activity recognition with PD patients.
There are a few limitations to the interpretation of our results to address. For our healthy subjects, we used a population of both younger and older subjects, instead of age-matched controls. Some of the difference between the groups can be age-related; however, we believe this effect was minor compared to the effect of PD on patient movements. Also, the PD group was relatively small (eight subjects) and heterogeneous (Hoehn and Yahr stage 1-3); however, even from this heterogeneous group we note a significant improvement by using PD training data. Lastly, because we had both the researcher and subject observing, we relied on the recording procedure for accurate activity labeling. Subjects did not always perform the instructed actions in a typical fashion (e.g., moving feet while standing, stopping briefly while walking, etc.). Instead of removing possible inconsistencies by hand, and thus affecting the validity of this approach, we retained all samples in the data set. Despite these limitations, the main conclusions of this study are supported.
There were two main goals for this study. First, we demonstrate how machine learning can be used to infer the activities of PD populations; the focus is not on particular, hand-picked features of movement, but on automated methods of weighing and combining those features. The second major goal was to highlight and quantify the effect of applying classifiers designed for healthy subjects on a PD patient population. A demonstrable drop in classification accuracy from 92.2 to 60.3% makes this point clear; it is important to use tools and analyses designed for specific patient populations. Although this study is not thorough enough to validate this classification method for clinical practice, it does demonstrate a strong benefit of machine learning, and a caution for clinicians who may want to use any activity recognition methods designed for healthy subjects.
The ultimate objective of therapies is to improve patient quality of life and activity tracking is an additional way of quantifying this. Such quantitative evaluation techniques could help clinicians test and optimize aspects of many therapies for motor disorders.
By only downloading an application, mobile phones can record a person's movements, greatly simplifying the study design and improving compliance. This information can be of personal or community medical use, improving evaluation of patient outcomes in therapeutic interventions. It is clear that populations with motor impairments require special consideration in approaches that analyze movement patterns. Mobile phones provide a means of tracking movements in an objective, convenient, and inexpensive way. The extent to which leveraging these qualities can improve and enable new therapeutic approaches is an area of further research. | 5,367.6 | 2012-11-07T00:00:00.000 | [
"Computer Science"
] |
LncRNA AL161431.1 predicts prognosis and drug response in head and neck squamous cell carcinoma
Background Long non-coding RNAs (lncRNAs) are increasingly recognized as essential players in various biological processes due to their interactions with DNA, RNA, and protein. Emerging studies have demonstrated lncRNAs as prognostic biomarkers in multiple cancers. However, the prognostic effect of lncRNA AL161431.1 in head and neck squamous cell carcinoma (HNSCC) patients has not been reported. Methods In the present study, we conducted a series of analyses to identify and validate the prognostic value of lncRNA AL161431.1 in HNSCC, which included differential lncRNAs screening, survival analysis, Cox regression analysis, time ROC analysis, nomogram prediction, enrichment analysis, tumor infiltration of immune cells, drug sensitivity analysis, and quantitative real-time polymerase chain reaction (qRT-PCR). Results In this study, we performed a comprehensive survival and predictive analysis and demonstrated that AL161431.1 was an independent prognostic factor of HNSCC, for which a high AL161431.1 level indicated poor survival in HNSCC. Functional enrichment analyses found that cell growth and immune-related pathways were significantly enriched in HNSCC, suggesting that AL161431.1 may play a role in tumor development and the tumor microenvironment (TME). AL161431.1-related immune cell infiltration analysis demonstrated that AL161431.1 expression is significantly positively associated with M0 macrophages in HNSCC (P<0.001). Using "OncoPredict", we recognized chemotherapy drugs sensitive to the high expression group. Quantitative real-time polymerase chain reaction (qRT-PCR) was performed to identify the expression level of AL161431.1 in HNSCC, and the results further validated our findings. Conclusions Our findings suggest that AL161431.1 is a reliable prognostic marker for HNSCC and can potentially be an effective therapeutic target.
Introduction
Head and neck squamous cell carcinoma (HNSCC), which arises from the mucosal epithelium of the oral cavity, nasopharynx, oropharynx, hypopharynx, and larynx, is the eighth most commonly occurring form of cancer in the world, responsible for approximately 745,000 new cases and 364,340 deaths in 2020 (1). The available clinical treatment modalities for HNSCC include surgery, radiotherapy, chemotherapy, and the latest emerging immunotherapy (2). However, despite a modest improvement in HNSCC survival over the past three decades, the 5-year survival rate still hovers at 60% (3). Therefore, there is an urgent need to find a new molecular biomarker capable of predicting survival, identifying new intervention targets, and predicting response to therapeutic agents.
In this study, we conducted a series of analyses, which included differential lncRNAs screening, survival analysis, Cox regression analysis, time ROC analysis, nomogram prediction, enrichment analysis, tumor infiltration of immune cells, drug sensitivity analysis, and quantitative real-time polymerase chain reaction (qRT-PCR). First, we screened the prognosis-related lncRNA AL161431.1 of HNSCC. Then, we confirmed that AL161431.1 was an independent prognostic factor of HNSCC. High expression of AL161431.1 indicated poor survival in HNSCC. Our findings suggest that AL161431.1 has a high potential in the diagnosis, prognosis, and targeted therapy of HNSCC.
Data collection and processing
A total of 271 RNA-sequence (17 normal and 254 tumor tissues) data were acquired from the TCGA-HNSCC database. After removing mRNA data, lncRNAs were used for further analysis. In addition, clinical information of 313 HNSCC patients, including age, gender, grade, stage, T-N-M stage, survival time, and survival status, was extracted for subsequent analyses (https://portal.gdc.cancer.gov).
Analysis of differentially expressed lncRNAs
The differential lncRNA expression between normal and HNSCC tissues was assessed using the "Limma" package of R software (version 4.0.5) (20). The filtering criteria for lncRNA differential expression were false discovery rate (FDR) < 0.05 and |log fold change| > 1. In addition, the R package "heatmap" was used to display the 100 differentially expressed lncRNAs.
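The differential-expression step itself was run with limma in R; purely as an illustration of the stated filtering criterion, the sketch below applies FDR < 0.05 and |log fold change| > 1 to a generic result table whose column names ('logFC', 'FDR') are assumptions.

```python
# Apply the paper's filtering criterion to a generic DE result table.
import pandas as pd

def filter_de_lncrnas(results: pd.DataFrame) -> pd.DataFrame:
    """results: one row per lncRNA with 'logFC' and 'FDR' columns (assumed names)."""
    mask = (results["FDR"] < 0.05) & (results["logFC"].abs() > 1)
    return results.loc[mask].sort_values("FDR")
```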
Identification of prognosis-related lncRNAs
Based on the previously obtained differentially expressed lncRNAs, the Kaplan-Meier analysis (KM) and the Cox regression analysis screened the prognosis-related lncRNAs. By merging expression and clinical data, the lncRNAs with P-values < 0.05 were regarded as independent prognostic lncRNAs and were analyzed further. Subsequently, time-dependent receiver operating characteristics (ROC) analysis was conducted to screen out lncRNAs with high accuracy in predicting overall survival (OS) using the "survival", "survminer", and "timeROC" R packages.
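A rough Python analogue of this screening step is sketched below using the lifelines package: each lncRNA is kept only if a median-split log-rank (KM) test and a univariate Cox model are both significant. The original analysis used the R "survival"/"survminer" packages, and the clinical column names here ('time', 'event') are assumptions.

```python
# Screen prognosis-related lncRNAs with a log-rank test and univariate Cox regression.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def screen_prognostic_lncrnas(expr: pd.DataFrame, clinical: pd.DataFrame, alpha=0.05):
    """expr: samples x lncRNAs; clinical: per-sample 'time' and 'event' columns."""
    keep = []
    for lnc in expr.columns:
        high = expr[lnc] >= expr[lnc].median()
        km_p = logrank_test(
            clinical.loc[high, "time"], clinical.loc[~high, "time"],
            clinical.loc[high, "event"], clinical.loc[~high, "event"],
        ).p_value
        df = pd.concat([clinical[["time", "event"]], expr[[lnc]]], axis=1)
        cox_p = CoxPHFitter().fit(df, "time", "event").summary.loc[lnc, "p"]
        if km_p < alpha and cox_p < alpha:
            keep.append(lnc)
    return keep
```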
Differential expression and survival analysis
LncRNA AL161431.1 was selected from the above prognostic lncRNAs for subsequent analyses. The differential analysis used the R package "limma" to investigate the differential expression of AL161431.1 between normal and tumor subgroups. The pan-cancer analysis compared expression values between cancer tissue and adjacent normal tissue by the Wilcoxon rank-sum test (21). More importantly, the expression values of AL161431.1 in HNSCC samples and normal samples were compared. The above results are presented as boxplots. According to the median level, HNSCC tissues were divided into high- and low-AL161431.1 expression groups. Finally, KM analysis compared the OS between the two sets using the "survival" and "survminer" packages in R.
Univariate, multivariate Cox regression and time ROC analysis
The univariate Cox analysis was used to identify the prognostic indicators. At the same time, the multivariate Cox analysis was used to determine whether AL161431.1 is an independent risk factor for OS in HNSCC. First, the ROC curve assessed the diagnostic value of AL161431.1 in HNSCC by the "pROC" R package. Then, the "timeROC" R package was employed to score the prediction accuracy by AL161431.1 using time-dependent ROC curve analysis.
Establishment and evaluation of nomograms
A nomogram encompassing the expression value of AL161431.1 attributes was created using the R packages "rms" and "regplot" to predict HNSCC patient OS (1 year, 2 years, and 3 years). The nomogram's predictive power was evaluated using calibration curves.
Co-expression and enrichment analyses
The correlation analysis was used to identify the co-expressed genes of AL161431.1 in the TCGA-HNSCC cohort to better understand the important biological functions that AL161431.1 plays in HNSCC. Then, all co-expressed genes that positively and negatively correlated with AL161431.1 were selected for subsequent enrichment analysis. For correlation analysis, the Pearson correlation test was employed. The corresponding correlation coefficient threshold was set at >0.4, and the adjusted P-value was <0.001. The above results were presented as scatter plots using the "ggplot2" package and R software (version 4.0.5).
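The co-expression screen can be sketched as a per-gene Pearson correlation against AL161431.1 with the stated cutoffs (|r| > 0.4, adjusted P < 0.001); the multiple-testing adjustment method (Benjamini-Hochberg) is an assumption, since the text does not specify it.

```python
# Per-gene Pearson correlation with AL161431.1, with an assumed BH adjustment.
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

def coexpressed_genes(expr: pd.DataFrame, target="AL161431.1",
                      r_cut=0.4, p_cut=0.001) -> pd.DataFrame:
    """expr: samples x genes expression matrix containing the target lncRNA."""
    rows = []
    for gene in expr.columns.drop(target):
        r, p = pearsonr(expr[target], expr[gene])
        rows.append((gene, r, p))
    out = pd.DataFrame(rows, columns=["gene", "r", "p"])
    out["p_adj"] = multipletests(out["p"], method="fdr_bh")[1]
    return out[(out["r"].abs() > r_cut) & (out["p_adj"] < p_cut)]
```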
Gene Ontology (GO) analyses were carried out to identify the biological effects of AL161431.1 co-expressed genes. In addition, Gene Set enrichment analyses (GSEA) were conducted to identify significant enrichment signaling pathways in the high-and low-AL161431.1 expression groups. The R package "clusterProfiler" was used to evaluate GO analyses and GSEA (22).
Immune cell infiltration
The Cell-type Identification by Estimating Relative Subsets of RNA Transcripts (CIBERSORT) algorithm (23) evaluated immune infiltration in HNSCC tissue in 22 subsets of immune cells, including M0 macrophages, activated dendritic cells, activated mast cells, eosinophils, resting NK cells, CD4 memory resting T cells, neutrophils, M2 macrophages, memory B cells, CD4 naïve T cells, plasma cells, gamma delta T cells, activated NK cells, M1 macrophages, monocytes, CD4 memory activated T cells, naïve B cells, resting dendritic cells, T follicular helper cells (Tfhs), and regulatory T cells (Tregs). Spearman correlation analysis investigated the relationship between AL161431.1 expression and the infiltrated immune cells. Lollipop plots were used to visualize the correlation coefficients of the results (P-value<0.05).
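Given per-sample immune cell fractions already estimated by CIBERSORT, the correlation step reduces to a Spearman test per cell type, as in the sketch below; data layout and names are illustrative.

```python
# Spearman correlation between AL161431.1 expression and CIBERSORT cell fractions.
import pandas as pd
from scipy.stats import spearmanr

def immune_correlations(lnc_expr: pd.Series, fractions: pd.DataFrame) -> pd.DataFrame:
    """lnc_expr: AL161431.1 expression per sample; fractions: samples x 22 cell types."""
    rows = []
    for cell in fractions.columns:
        rho, p = spearmanr(lnc_expr, fractions[cell])
        rows.append((cell, rho, p))
    out = pd.DataFrame(rows, columns=["cell_type", "rho", "p"])
    return out[out["p"] < 0.05].sort_values("rho")
```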
OncoPredict for drug sensitivity analysis
To predict the chemotherapeutic drug sensitivity of the high- and low-AL161431.1 expression groups, we used the "OncoPredict" R package and the Genomics of Drug Sensitivity in Cancer (GDSC; https://www.cancerrxgene.org/) database to predict drug responses in cancer patients (24). First, we analyzed the difference in the activity of drugs between the two groups using the Wilcoxon test. A total of 198 drugs were calculated, and the significance level was set at P < 0.001. The "ggplot2" and "ggpubr" functions of R were used to create the box plots.
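The group comparison of predicted drug sensitivity can be sketched as a per-drug rank-sum (Wilcoxon/Mann-Whitney) test between the high- and low-expression groups at the stated P < 0.001 threshold; the oncoPredict scoring itself (run in R) is assumed to have been done already.

```python
# Per-drug rank-sum test of predicted sensitivity between expression groups.
import pandas as pd
from scipy.stats import mannwhitneyu

def differential_drug_response(scores: pd.DataFrame, high_group) -> pd.DataFrame:
    """scores: samples x drugs predicted sensitivity; high_group: boolean index
    marking samples in the high-AL161431.1 group."""
    rows = []
    for drug in scores.columns:
        stat, p = mannwhitneyu(scores.loc[high_group, drug],
                               scores.loc[~high_group, drug],
                               alternative="two-sided")
        rows.append((drug, p))
    out = pd.DataFrame(rows, columns=["drug", "p"])
    return out[out["p"] < 0.001].sort_values("p")
```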
Validation of the expression level of AL161431.1 in HNSCC by qRT-PCR
A total of 8 HNSCC tissues and 8 adjacent normal mucosal tissues, diagnosed with primary HNSCC after surgical resection in the Wuhan Union Hospital, were collected for subsequent study. There was no history of other tumors, and no patients received radiotherapy, chemotherapy, or other treatments. All patients had signed informed consent before surgery. The collected tissue samples were frozen in liquid nitrogen and stored at -80°C for further RNA extraction. First, total RNA was extracted from samples using TRIzol reagents (Invitrogen, Waltham, MA). Then, reverse transcription and PCR reactions were performed using the PrimeScriptTM RT-PCR kit (Takara, Shiga, Japan). The primer sequence was as follows: AL161431.1 reverse: 5′-GAATTGGGAGGATCTAGGACATCTA-3′.
Statistical analysis
All analyses were performed using R version 4.0.5, and a P-value < 0.05 was considered statistically significant.
Differential analysis of lncRNA in HNSCC
To identify the expression difference of AL161431.1 in tumors and normal tissues, we analyzed data from 271 patients (17 normal and 254 tumors) extracted from the TCGA-HNSCC database. Following differential expression analysis, 1180 differentially expressed lncRNAs were identified. The study design of this project is shown in Figure 1. In addition, the differentially expressed lncRNAs have been presented as a heatmap, including only 100 differentially expressed lncRNAs (Figure 2A).
Screening and identification of prognosis-related lncRNAs in HNSCC
To identify lncRNAs associated with the prognosis of HNSCC, we used the Cox regression and KM survival analysis to preliminarily screen out 137 prognostic-related lncRNAs (P <0.05). Then, 81 independent prognostic lncRNAs were identified by independent prognostic analysis (P<0.05). Finally, we used the ROC curve to select four highly accurate lncRNAs in predicting patient survival, namely SNHG26, AL161431.1, LINC00460, and AL358334.2 (AUC>0.65, Table 1).
Expression of AL161431.1 was upregulated in HNSCC and validated by qRT-PCR
To explore the differential expression of AL161431.1 in the normal and tumor samples, we first analyzed its expression in pan-cancer, which included HNSCC. The results demonstrated that, compared with normal tissue, the AL161431.1 expression was significantly upregulated in 11 different types of cancer (Figure 2B). In contrast, the AL161431.1 expression was significantly downregulated in kidney chromophobe (KICH). Further investigation revealed that HNSCC tissues had significantly higher levels of AL161431.1 than normal tissues (P < 0.001, Figure 2C). At the same time, KM survival analysis found that high AL161431.1 expression was closely associated with worse OS in HNSCC (P < 0.001, Figure 2D). In addition, the expression level of AL161431.1 in HNSCC tissues was verified by qRT-PCR. qRT-PCR analysis showed that AL161431.1 was upregulated in HNSCC compared to normal tissues (Figure 2E). [Figure 2 legend: statistical significance is shown as ns, P > 0.05; *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001.]
AL161431.1 was an independent prognostic indicator for HNSCC
We conducted univariate and multivariate Cox regression analyses to examine the predictive value of AL161431.1 in HNSCC patients (Figures 3A, B). The results manifested that AL161431.1 is an independent prognostic factor for HNSCC. As shown in Figure 3C, the AUC of AL161431.1 was 0.749, indicating a high diagnostic value of AL161431.1 in HNSCC.
Construction and assessment of the prognosis nomogram
To precisely estimate the 1-, 3-, and 5-year survival rates, we constructed a nomogram by combining the expression value of AL161431.1 with multiple clinicopathological factors (age, grade, M stage, gender, AJCC stage, N stage, and T stage) (Figure 3E). The calibration curve analysis revealed that the predicted and actual 1-, 3- and 5-year survival times were comparable (Figure 3F). These findings suggested that the nomogram containing the expression level of AL161431.1 is accurate and reliable.
Co-expression and enrichment analyses reveal that AL161431.1 may be involved in the regulation of cell growth
We first identified 443 co-expressed genes significantly associated with AL161431.1 to predict the functions and pathways that AL161431.1 may affect (Figures 4A, B). Then, we selected these genes to perform GO enrichment analyses, and results revealed AL161431.1 to be mainly associated with cell growth in the biological process (BP) category, the chromosomal region in the cellular component (CC) category, and transcription coregulator activity in molecular function (MF) category ( Figures 5A-C). In addition, GSEA was conducted to search pathways AL161431.1 might affect. The GSEA results showed that allograft rejection, antigen procession and presentation, autoimmune thyroid disease, the intestinal immune network for IgA production, and type I diabetes mellitus were significantly enriched in the low-AL161431.1 expression group ( Figure 5D).
The relationship between AL161431.1 expression and the level of immune infiltration in HNSCC
Immune cells that play an essential role in resisting or accelerating tumor growth are tightly linked to the initiation, development, and progression of HNSCC (25). Therefore, we evaluated immune cells significantly associated with AL161431.1 expression and their immune infiltration level in HNSCC. Correlation analysis revealed that the infiltration levels of M0 macrophages, activated dendritic cells, activated mast cells, and eosinophils were significantly and positively correlated with AL161431.1 expression. However, AL161431.1 expression was significantly negatively correlated with the infiltration levels of regulatory T cells (Tregs), T follicular helper cells (Tfhs), resting dendritic cells, CD8 T cells, resting mast cells, and naive B cells (all P < 0.05, Figure 6).
Drug responses of high-and low-AL161431.1 expression groups in HNSCC
There were statistically significant response differences between the high- and low-AL161431.1 expression groups for 59 of these drugs (P < 0.001). Apart from BI−2536 and PD0325901, several chemotherapeutic drugs, including AZD1332, AZD3759, Dasatinib, Erlotinib, Gefitinib, Lapatinib, Obatoclax Mesylate, Sapitinib, SCH772984, and VSP34_8731, showed a lower score in the high-AL161431.1 expression group (all P < 0.001, Figure 7). This suggested that HNSCC patients in the high-AL161431.1 expression group were more likely to respond to these chemotherapeutic drugs.
Discussion
In the past, the major difficulties in treating HNSCC were late detection, poor response to treatment, and a high recurrence rate. In addition, the molecular heterogeneity of HNSCC has hindered the identification of specific targets and the development of targeted therapy. EGFR inhibitors, as the only approved targeted drugs, have limited efficacy and face the problem of tumor drug resistance (2,26,27). In recent years, immunotherapy has brought new hope to patients with HNSCC (28). However, HNSCC remains difficult to treat, with high mortality and poor survival. As a result, determining how to diagnose the disease early and predict the patient's prognosis is a main priority. With the deepening of research, more and more lncRNAs have been proven to be novel diagnostic and prognostic markers and molecularly targeted therapeutic targets (29)(30)(31). This prompted us to search for lncRNAs related to the prognosis of HNSCC to predict patients' survival and treatment better.
This study comprehensively elucidated the prognostic value of AL161431.1 in patients with HNSCC by bioinformatics methods. We confirmed that the AL161431.1 expression was upregulated in HNSCC neoplastic tissues compared to normal tissues. In addition, we elucidated that AL161431.1 is an independent prognostic indicator for deteriorative OS in HNSCC patients using KM survival analysis combined with Cox regression analysis. Besides, gender and stage were also independent prognostic factors for HNSCC.
We used differentially expressed lncRNA screening, Kaplan-Meier analysis, and Cox regression analysis to screen the prognosis-related lncRNAs, which differs from Cao W et al.'s methods, such as the orthogonal partial least squares discrimination analysis (OPLS-DA) and 1.5-fold expression change criterion methods (32). In our study, we first screened out lncRNAs with different expressions in normal and tumor samples. Then, the Kaplan-Meier and Cox regression analyses were performed to screen the prognosis-related lncRNAs. However, Cao W et al. first screened out lncRNAs associated with the OS of the patients. Then, the patients were divided into good and poor survival groups based on survival time. Next, the OPLS-DA analysis evaluated the lncRNA expression profile differences between the groups. Finally, the lncRNAs with significant differences between groups were further screened by the 1.5x expression variation criterion.
As a non-parametric estimation method, Kaplan-Meier analysis is currently the most commonly used method for survival analysis. It can analyze the impact of univariate and categorical variables on overall survival and provide a visual representation of the survival function (33). Cox regression analysis can handle multiple predictors and confounding variables and provide hazard ratios, which help quantify the effect of predictors on survival (33,34). Combining these two methods, we established a reliable method for screening the prognosis-related lncRNAs. Many studies have used these two methods to screen prognostic-related lncRNAs and verify the prognostic capability of models. For example, Zhang et al. used differentially expressed lncRNA screening, univariate survival analysis, and multivariate Cox regression analysis to identify a 4-lncRNA signature, which performed well in predicting the prognosis of laryngeal cancer (35). Using univariate and multivariate Cox regression analyses, Wu et al. screened prognosis-related lncRNAs, and an eight-immune-related lncRNA prognostic signature was acquired. Then, Kaplan-Meier analysis and ROC analysis verified the prognostic capability of the models (36). Similarly, OPLS-DA and 1.5-fold expression change criterion methods are innovative and effective methods for screening prognostic lncRNAs, which can remove irrelevant variables and effectively separate samples between groups. We used different methods to screen for prognosis-related lncRNAs, but both yielded reliable results.
In previous studies, numerous lncRNAs were aberrantly expressed in various tumors (37, 38). Studies have proved that AL161431.1 is implicated as an oncogene in various tumor types. For example, AL161431.1 was overexpressed in endometrial carcinoma and promoted endometrial carcinoma cell multiplication and migration by activating miR-1252-5p expression (17). Furthermore, LncRNA AL161431.1 could influence the invasion and metastasis of pancreatic cancer cells by promoting the epithelial-mesenchymal transition (EMT) process (16). Studies have also shown that AL161431.1 is an autophagy-related LncRNA, strongly correlated with the prognosis and tumor immune microenvironment of non-small cell lung cancer (19). Shao et al. found that hypoxia-related lncRNA AL161431.1 was highly expressed in LUAD. Inhibition of AL161431.1 can block the proliferation of LUAD cells. The expression level of AL161431.1 was upregulated under hypoxia (39). However, the precise role of AL161431.1 in HNSCC is unknown. In the present study, through GO enrichment analysis, we found that AL161431.1 is involved in a variety of biological functions and processes, such as cell growth, negative regulation of cell growth, and extrinsic apoptotic signaling pathway. In light of these data, AL161431.1 may affect the growth and development of HNSCC, which may be the reason for the poor prognosis of patients with high expression of AL161431.1. However, its specific role and molecular mechanism remain to be studied.
HNSCC is an immunosuppressive disease with high immune infiltration and poor antigen-presenting function (28,40). The GSEA results in this study showed that several immune-related pathways are enriched in the low-AL161431.1 expression group. Based on these results, we concluded that immunotherapy might be more effective for patients in the low-AL161431.1 expression group than in the high-AL161431.1 expression group.
Immune cells are the cellular basis of immunotherapy. Infiltrating immune cells in tumors are closely related to clinical outcomes and can be used as drug targets for cancer treatment (41). Our correlation analysis of AL161431.1 expression and immune cell infiltration revealed that AL161431.1 expression correlated significantly and positively with M0 macrophages, eosinophils, activated mast cells, and activated dendritic cells. In contrast, regulatory T cells (Tregs), T follicular helper cells (Tfhs), naive B cells, resting mast cells, CD8 T cells, and resting dendritic cells were significantly negatively correlated with it. Studies have shown that lncRNA can activate the expression of immune-cell-related genes, thus leading to tumor immune cell infiltration (42). Meanwhile, immune cells, such as macrophages, dendritic cells, regulatory T cells, and mast cells, are essential to the tumor microenvironment (TME) and can positively or negatively regulate cancer development (43). Macrophages within the TME are called tumor-associated macrophages (TAMs) (44). The latest research has shown that TAM infiltration often predicts a poor prognosis (45,46). On the contrary, T cell infiltration, especially of CD8 T cells, predicts a favorable prognosis (47). Our study results support these conclusions. Tregs are the most important type of cell in the TME. Typically, Treg cells are dedicated to regulating immune responses to prevent excessive reactivity of the immune system. However, Tregs' function within the TME is highly complex, as they can promote cancer progression by suppressing the immune response against cancer cells (48). Considerable evidence suggests that Tregs increase during the development of HNSCC, but there is no consensus on the prognostic value of Tregs in HNSCC (49). Several studies strongly support that high levels of Tregs are negatively correlated with the prognosis of HNSCC (50,51). However, other studies describe a completely different situation, where high Treg numbers are associated with improved overall survival (51,52). Difficulties in correctly identifying Treg cells may cause these conflicting results. Similarly, this also explains our result that AL161431.1 expression correlated significantly and negatively with Treg cells. Based on these findings, we can better evaluate patients' prognosis and immunotherapeutic efficacy based on the type and status of infiltrating immune cells.
Hypoxia is a common cause of cancer cell proliferation, metastasis, and recurrence. In addition, hypoxia is closely associated with increased drug resistance in tumors, severely limiting the therapeutic efficacy of HNSCC (53,54). Increasing data have shown that hypoxic-induced activation of hypoxiainducible factors (HIFs) can regulate the expression of lncRNA and participate in tumor proliferation, metastasis, and drug resistance (55-57). AL161431.1, a hypoxia-related lncRNA, is upregulated under hypoxia conditions, associated with poor tumor patient survival (39). To improve treatment outcomes, we identified potential chemotherapeutic-sensitive drugs for HNSCC patients with high expression of AL161431.1 by analyzing the differences in sensitivity to chemotherapy drugs between the two sets. We found that patients in the high-AL161431.1 expression group may respond better to chemotherapeutic agents, such as BI −2536, PD0325901, and AZD1332, than those in the low-AL161431.1 expression group. These results may guide the chemotherapy treatment of HNSCC. However, the specific role of AL161431.1 in the difference in chemotherapeutic drug sensitivity in HNSCC warranted further analysis. Finally, we validated the expression of AL161431.1 in clinical specimens by qRT-PCR. Consistent with our previous bioinformatics analysis, the expression of AL161431.1 was upregulated in HNSCC.
To our knowledge, this is the first study on AL161431.1 and HNSCC prognosis, including differential and survival analyses. Our findings suggest that AL161431.1 has a high potential in the diagnosis, prognosis, and targeted therapy of HNSCC. However, our study has several limitations. First, HNSCC has been defined as a highly heterogenous tumor (58). Therefore, the clinical impact of AL161431.1 in other specific subtypes of head and neck cancer remains unclear. Second, we only analyzed the relationship between the AL161431.1 expression and the level of immune infiltration in HNSCC. However, its relationship with the immunosuppressive microenvironment of HNSCC remains to be further studied. Finally, additional experimentation and clinical research are required to further validate the clinical value and potential mechanism of AL161431.1.
Conclusion
In conclusion, our findings demonstrated that AL161431.1 is highly expressed and associated with a poor prognosis in HNSCC, which might make it a new therapeutic target for HNSCC patients.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.
Author contributions
MZ, FY, and LZ conceived and performed bioinformatics analysis. MZ and LZ collected clinical samples. MZ, MM, and FY co-wrote the manuscript. LZ, TZ, and YL undertook a manuscript review. All authors contributed to the article and approved the submitted version. | 5,314.4 | 2023-06-16T00:00:00.000 | [
"Medicine",
"Biology"
] |
Reliability-Based View Synthesis for Free Viewpoint Video
View synthesis is a crucial technique for free viewpoint video and multi-view video coding because of its capability to render an unlimited number of virtual viewpoints from adjacent captured texture images and their corresponding depth maps. Since depth image-based rendering (DIBR) is the most widely used technology among synthesis algorithms, the accuracy of the depth maps is very important to the rendering quality; however, stereo depth estimation is error-prone, which introduces artifacts into the synthesized views. In addition, filling occlusions is another challenge in producing desirable synthesized images. In this paper, we propose a reliability-based view synthesis framework. A depth refinement method is used to check the reliability of depth values and refine some of the unreliable pixels, and an adaptive background modeling algorithm is utilized to construct a background image aimed at filling the remaining empty regions after a proposed weighted blending process. Finally, the proposed approach is implemented and tested on test video sequences, and the experimental results indicate objective and subjective improvements compared to previous view synthesis methods.
Introduction
In the past few decades, three-dimensional video has been widely adopted in various applications. Free viewpoint video (FVV) is a novel display format that has evolved from 3D video and enables viewers to watch a scene from any position [1]. This free navigation (FN) experience provides a rich and compelling immersive feeling that is much better than traditional 3D video [2]. However, FVV has significant requirements for video acquisition, compression, and transmission technology. Due to the limitations on camera volume and the bandwidth of the communication system, only a limited number of views can be transferred. View synthesis technology is proposed to support the FN capability of generating texture images that are not captured by a real camera. Depth image-based rendering (DIBR) [3] is a crucial technology for view synthesis. DIBR utilizes one or more reference texture images and their associated depth images to synthesize virtual view images, wherein every pixel in the original reference image plane is projected to the 3D world coordinate system according to its associated depth value; thereafter, the 3D world coordinates are projected onto the image plane of the virtual viewpoint [4].
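For readers who want the two projection steps in closed form, the generic pinhole-camera formulation of DIBR warping is sketched below; the symbols (intrinsic matrix K, rotation R, translation t) are the usual ones and are not tied to this paper's own equation numbering.

% Back-project a reference pixel (u, v) with depth z_ref into the 3D world,
% then re-project the world point X onto the virtual-view image plane.
\begin{align}
  \mathbf{X} &= R_{\mathrm{ref}}^{\top}\left( z_{\mathrm{ref}}\, K_{\mathrm{ref}}^{-1}
                \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - \mathbf{t}_{\mathrm{ref}} \right), \\
  z_{\mathrm{virt}} \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
             &= K_{\mathrm{virt}}\left( R_{\mathrm{virt}}\, \mathbf{X} + \mathbf{t}_{\mathrm{virt}} \right).
\end{align}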
Although a virtual view from an arbitrary viewpoint can be reconstructed by utilizing reference texture and depth information, DIBR still introduces some artifacts due to inaccurate depth images. Small cracks and holes can be repaired within the synthesized image, while texture synthesis can fill the large-scale holes [12][13][14]. Since large holes are observed when areas that are occluded by foreground objects in the reference view become exposed in the synthesized view, view-blending approaches can be used to alleviate this problem, as two adjacent cameras can cover a relatively wider viewing angle [15][16][17].
As to exploiting the temporal correlation, Schmeing and Jiang [18] tried to determine the background information using a background subtraction method, but this approach relies on good performance of the foreground segmentation method, so it cannot be adopted in complex circumstances. Chen explored the motion vectors of the H.264/AVC bit stream to render disocclusions in the virtual view [19]. In [20,21], a background sprite was generated from the original texture and synthesized images of temporally previous frames using disocclusion filling, but the temporal consistency of the synthesized images needs further investigation, as described in [22]. In [23], Yao proposed a disocclusion filling approach based on temporal correlation and depth information. Experimental results showed that this approach yields better subjective and objective performance than the above-mentioned spatial methods of filling disocclusions. However, the SVD format limits its wide usage because of the small baseline. Besides, some disocclusion regions that are not included in a single reference view may easily be spotted from another virtual viewpoint, and reverse mapping [4] may be more reliable. Luo and Zhu proposed the use of a constructed background video with a modified Gaussian mixture model (GMM) to eliminate the holes in synthesized video [24]. The foreground objects are detected and removed, then motion compensation and modified GMMs are applied to construct a stable background. Results indicated that a clean background without artifacts of foreground objects can be generated by using this background model, so that the blurry effect and artifacts in disoccluded regions can be eliminated and the sharp edges along the foreground boundaries can be preserved with a realistic appearance [24].
Although [17] indicates that large holes in a target virtual view can be greatly reduced by using additional neighboring complementary views besides the two (commonly used) most neighboring primary views, we still employ only two reference views to render virtual views in our proposed framework. The occlusions that appear in one warped image are filled from the other reference viewpoint in the weighted blending process.
In this paper, a multiview plus depth (MVD) format is employed for view synthesis. Two reference views are selected to interpolate virtual views located between them. Occlusions that appear in one warped image are filled from the other reference viewpoint. In addition, an adaptive background modeling method is proposed to construct the background intensity distribution. The stable constructed reference background image helps to fill the remaining unfilled regions that are left due to the unreliable depth map. Another novelty of the proposed algorithm relates to depth refinement, which has the advantage of eliminating some noise caused by the coarse depth map. We also present a weighted blending process to blend the two warped images from the reference views based on the reliability of each pixel. An adaptive median filter and a depth map processing method are utilized before generating the synthesized virtual image.
Proposed Framework
In this section, the proposed approach is presented in detail. The framework of the proposed synthesis algorithm is illustrated in Figure 1. There are mainly four techniques proposed in this framework: depth refinement, background modeling, reliability-based blending, and a depth map processing method. These approaches are discussed in the following subsections.
Depth Refinement
There are two steps in the depth refinement process, as illustrated in Figure 1a. In the first step, depth consistency cross-checking is used to check whether each pixel's depth value is reliable or unreliable. Second, depth refinement is employed to interpolate the depth values of unreliable pixels. The details of the first step are as follows. For depth consistency cross-checking of the left reference view, let (u, v) be the coordinate of one pixel in the left reference view; its corresponding pixel (u_w, v_w) in the right reference view is then obtained through the classical DIBR technology [2]. The texture value I and depth value D of these two pixels are verified; the subscripts L and R indicate the left view and the right view, respectively. I_th is a large preset threshold value for texture comparison and D_th is a small preset threshold value for depth comparison. The consistency checking produces five results, as follows: (1) If ||I_L(u, v) − I_R(u_w, v_w)||² ≤ I_th and |D_L(u, v) − D_R(u_w, v_w)| ≤ D_th are both satisfied, the two pixels are matched. The depth pixel in the left reference depth map is reliable only in this situation, and it is marked as black in its cross-checking mask. (2) If ||I_L(u, v) − I_R(u_w, v_w)||² > I_th and |D_L(u, v) − D_R(u_w, v_w)| > D_th are both satisfied, the two pixels fail to match. In this situation, there is a high probability that the pixel belongs to an occlusion area and the depth pixel in the left reference depth map cannot be checked for reliability, so it is marked as blue in its cross-checking mask. (3) If ||I_L(u, v) − I_R(u_w, v_w)||² > I_th and |D_L(u, v) − D_R(u_w, v_w)| ≤ D_th are both satisfied, the two pixels also fail to match. Either an erroneous texture pixel or an unreliable depth value causes this situation, and we check the surrounding depth distribution to find the real reason in the second step; the depth pixel in the left reference depth map is unreliable and it is marked as red in its cross-checking mask. (4) If ||I_L(u, v) − I_R(u_w, v_w)||² ≤ I_th and |D_L(u, v) − D_R(u_w, v_w)| > D_th are both satisfied, the depth pixel is also unreliable, and it is marked as green in its cross-checking mask. (5) Some pixels in the left reference view cannot be projected into the right reference view because their corresponding pixels are located outside the image boundary; these areas are marked as white.
Figure 2 shows a result of the depth consistency check. Because pixels in the white and blue regions never get a chance to verify their reliability, they are all treated as unreliable, and a specific weight is given when they are interpolated into the virtual view. Several measures are implemented to refine the other unreliable pixels, especially in the red and green regions. The main idea of the refinement is to find the most appropriate reliable pixel values to interpolate the depth value of an unreliable pixel. Neighboring pixels from four directions are utilized here, and both the inverse proportion of distance and the reliability of the depth value are considered in calculating the weighting factors. If a reliable depth pixel maps to a reliable pixel in the other view, this indicates that the depth pixel is highly reliable; on the contrary, if the corresponding pixel in the other view is unreliable, the reliability of the pixel is lower.
Let WD_t, WD_b, WD_l, and WD_r be the weighting factors calculated from the distance between the current unreliable depth pixel and the nearest reliable depth pixel in the top, bottom, left, and right directions, respectively, and let W_H and W_L be the weighting values for high and low reliability, respectively. The weighting factor for each direction can be formulated as W_direction = WD_direction × W_H if the nearest reliable pixel in this direction has high reliability, or W_direction = WD_direction × W_L if it has low reliability (Equation (1)), where the subscript direction can be t, b, l, or r. The four weighting factors are normalized as WN_direction; the unreliable depth value D_r can then be interpolated by Equation (2) as D_r = Σ_direction WN_direction × D_d,direction, where D_d,direction is the nearest reliable depth value in the corresponding direction.
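To make the classification above concrete, the following C++ sketch labels one left-view pixel according to the five cases. It is only an illustration under simplifying assumptions, not the authors' implementation: single-channel intensities stand in for the RGB texture, the warping helper uses an assumed focal length and baseline instead of the full camera matrices, and the threshold values are hypothetical.

#include <cmath>
#include <vector>

// Cross-checking labels corresponding to the mask colors described above.
enum class CheckLabel { Black, Blue, Red, Green, White };

struct Pixel { double i; double d; };   // texture intensity and depth value

// Minimal view container: width x height grid of texture/depth pairs.
struct View {
    int width = 0, height = 0;
    std::vector<Pixel> data;
    const Pixel& at(int u, int v) const { return data[v * width + u]; }
};

// Placeholder warping model: purely horizontal disparity proportional to the
// inverse depth; focal length and baseline are illustrative constants, not the
// calibration used in the paper. Returns false when the projection leaves the image.
bool warpToRightView(const View& left, const View& right, int u, int v, int& uw, int& vw) {
    const double focal = 1000.0, baseline = 0.2;
    const double depth = left.at(u, v).d;
    if (depth <= 0.0) return false;
    const int disparity = static_cast<int>(std::lround(focal * baseline / depth));
    uw = u - disparity;
    vw = v;
    return uw >= 0 && uw < right.width && vw >= 0 && vw < right.height;
}

// Classify one left-view pixel into one of the five cross-checking cases.
CheckLabel classifyPixel(const View& left, const View& right, int u, int v,
                         double textureTh /* I_th */, double depthTh /* D_th */) {
    int uw = 0, vw = 0;
    if (!warpToRightView(left, right, u, v, uw, vw))
        return CheckLabel::White;                          // case (5): outside the image

    const Pixel& l = left.at(u, v);
    const Pixel& r = right.at(uw, vw);
    const double texDiff = (l.i - r.i) * (l.i - r.i);      // squared texture difference
    const double depDiff = std::abs(l.d - r.d);            // absolute depth difference

    const bool texMatch = texDiff <= textureTh;
    const bool depMatch = depDiff <= depthTh;
    if (texMatch && depMatch)   return CheckLabel::Black;  // (1) reliable
    if (!texMatch && !depMatch) return CheckLabel::Blue;   // (2) likely occlusion
    if (!texMatch)              return CheckLabel::Red;    // (3) texture mismatch only
    return CheckLabel::Green;                              // (4) depth mismatch only
}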
Adaptive Background Modeling
In the previous step, a refined depth map was obtained. In this section, we propose to apply an adaptive background modeling method, evolved from the Gaussian mixture model (GMM), to generate a reference image. GMM is commonly used in video processing to detect moving objects because of its capacity to identify foreground and background pixels [7]. In previous research, GMM was utilized to construct a stable background image aiming to fill large empty regions. However, GMM is not suitable for scenes that contain periodic or reciprocating foreground objects; these moving foreground objects are easily detected as erroneous background pixels, thus generating an inaccurate background image. In addition, some background pixels might change slightly, for example, pixel intensities differ when shadows caused by foreground objects appear or move. Thus, the stable background images generated by previous approaches always had blurring effects and were not accurate. In our proposed adaptive background modeling method, both the texture images and their associated depth maps are utilized to explore the temporal correlation. In addition, we propose to apply a reliability-based view synthesis method using the background information to interpolate the intermediate image and fill the disocclusions.
The proposed method works at the pixel level, and every pixel is modeled independently by a mixture of K Gaussian distributions, where K is usually between 3 and 5. By using this distribution, pixel values that have a high probability of occurring are saved if their associated depth values show that they belong to the background. The Gaussian mixture distribution with K components can be written as [25]: p(x_{j,t}) = Σ_{i=1..K} ω_{i,j,t} · η(x_{j,t}, µ_{j,i,t}, σ²_{j,i,t}), where p(x_{j,t}) denotes the probability density of the value x_{j,t} of pixel j at time t; η is the Gaussian density function with three dependent variables x_{j,t}, µ_{j,i,t}, and σ²_{j,i,t}, where µ_{j,i,t} denotes the mean value of pixel x_j and σ²_{j,i,t} is the variance value of the pixel. Further, ω_{i,j,t} is the weight of the ith Gaussian distribution at time t of pixel j, with Σ_{i=1..K} ω_{i,j,t} = 1. The function η is given by η(x, µ, σ²) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)). Before the texture information is modeled by the Gaussian distributions, we propose to verify each new pixel to ensure that it is not from a foreground region. If the depth value is much bigger than the stored depth buffer (which means this pixel is nearer to the capture device), the pixel is considered a foreground pixel. Otherwise, if the depth value is much smaller than the stored buffer, the pixel is considered a background pixel, and the previously modeled distribution is no longer reliable and should be restarted. The detailed process to generate the reference background distribution is as follows:
1. Initialization. The model is initialized at the beginning of the generation (time t_0): the first Gaussian component is centered on the observed pixel value x_{j,t_0}, the variance value σ²_j is set to a certain large number, and the stored depth buffer d_j for pixel j is set to the depth value d_{j,t_0} of pixel j at time t_0.
2. Update. In the next frame, i.e., at time t_1, we first check the depth level of this pixel, and d_{j,t_1} is compared with the existing depth buffer d_j. There are three situations for the depth comparison results: (a) If the condition d_{j,t_1} − d_j > t_d is satisfied (t_d is a predefined depth threshold value), the new pixel x_{j,t_1} belongs to the foreground objects; it is discarded, and the background distribution is not updated. (b) If |d_{j,t_1} − d_j| ≤ t_d, x_{j,t_1} is matched against the K Gaussian models. For each model i from 1 to K, if the condition |x_{j,t_1} − µ_{j,i,t_0}| ≤ 2.5 σ_{j,i,t_0} is satisfied, the matching process stops, and the matched Gaussian model is updated as ω_{i,j,t_1} = (1 − α) ω_{i,j,t_0} + α, where α is the model learning rate (α = 0.01), and ρ = α/ω_{i,j,t_0}. The other parameters of the Gaussian models remain unchanged except µ_{j,i,t_1} = (1 − ρ) µ_{j,i,t_0} + ρ x_{j,t_1} and σ²_{j,i,t_1} = (1 − ρ) σ²_{j,i,t_0} + ρ (x_{j,t_1} − µ_{j,i,t_1})²; these two parameters reflect the rate of model convergence. If pixel x_{j,t_1} fails to match all the current Gaussian models, a new Gaussian model is introduced to evict the Gaussian model with the smallest ω/σ value. The mean and variance values of the other Gaussian models remain unchanged, while the new model is set with µ_{j,t_1} = x_{j,t_1}, σ_{j,t_1} = 30, ω_{j,t_1} = 0.01. Finally, the weights of the K Gaussian models are normalized so that they sum to one. (c) In the third situation, if the condition d_j − d_{j,t_1} > t_d is satisfied, the new input pixel x_{j,t_1} belongs to the background and the previous Gaussian distributions need to be abandoned; the first step is executed again for x_{j,t_1}.
3. Convergence. The remaining frames are processed by repeating step 2. The value of a background pixel is derived from µ, and the most stable pixels in the time domain are modeled as the background image; meanwhile, the number of Gaussian models of each pixel is obtained to determine whether the pixel experiences similar intensities over time or not.
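A compact C++ sketch of the per-pixel, depth-gated update described in steps 1-3 is given below. It is an illustration under simplifying assumptions (grayscale input, at most K = 3 components, an illustrative depth threshold t_d, an assumed large initial variance), not the authors' implementation.

#include <algorithm>
#include <cmath>
#include <vector>

struct Gaussian { double mean, var, weight; };

// Per-pixel background model: K Gaussian components plus a stored depth buffer.
struct PixelModel {
    std::vector<Gaussian> comps;
    double depthBuffer = 0.0;
};

constexpr double kAlpha   = 0.01;  // learning rate alpha from the text
constexpr double kDepthTh = 5.0;   // illustrative depth threshold t_d
constexpr int    kMaxComp = 3;     // K, usually between 3 and 5

// Step 1: (re)start the model with the current observation.
void initModel(PixelModel& m, double x, double depth) {
    m.comps = { Gaussian{ x, 900.0 /* large initial variance */, 1.0 } };
    m.depthBuffer = depth;
}

// Step 2: depth-gated update of one pixel for one frame.
void updateModel(PixelModel& m, double x, double depth) {
    if (m.comps.empty()) { initModel(m, x, depth); return; }

    if (depth - m.depthBuffer > kDepthTh) return;          // (a) foreground: discard
    if (m.depthBuffer - depth > kDepthTh) {                // (c) new background: restart
        initModel(m, x, depth);
        return;
    }

    // (b) similar depth: match the observation against the existing components.
    bool matched = false;
    for (Gaussian& g : m.comps) {
        if (std::abs(x - g.mean) <= 2.5 * std::sqrt(g.var)) {
            const double rho = kAlpha / g.weight;
            g.weight = (1.0 - kAlpha) * g.weight + kAlpha;
            g.mean   = (1.0 - rho) * g.mean + rho * x;
            g.var    = (1.0 - rho) * g.var  + rho * (x - g.mean) * (x - g.mean);
            matched = true;
            break;
        }
    }
    if (!matched) {
        Gaussian fresh{ x, 30.0 * 30.0, 0.01 };            // sigma = 30, omega = 0.01
        if (static_cast<int>(m.comps.size()) < kMaxComp) {
            m.comps.push_back(fresh);
        } else {
            // Evict the component with the smallest weight/sigma ratio.
            auto worst = std::min_element(m.comps.begin(), m.comps.end(),
                [](const Gaussian& a, const Gaussian& b) {
                    return a.weight / std::sqrt(a.var) < b.weight / std::sqrt(b.var);
                });
            *worst = fresh;
        }
    }
    // Normalize the weights so that they sum to one.
    double sum = 0.0;
    for (const Gaussian& g : m.comps) sum += g.weight;
    for (Gaussian& g : m.comps) g.weight /= sum;
}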
Figure 3 shows two examples of adaptive background modeling. Figure 3a presents the Ballet background image generated with a small baseline, where cam03 is chosen as the target virtual viewpoint and is interpolated from the reference viewpoints cam02 and cam04. Figure 3b presents the Breakdancers modeling result, where the background image at virtual viewpoint cam04 is projected from reference viewpoints cam02 and cam06. Although some foreground objects are stored in the stable temporal background reference owing to the mechanism of the proposed framework, these effects do not affect the quality of the final synthesized image, since the filling of remaining empty regions always occurs in background areas. Thus, the temporally stable background information can be obtained in both large- and small-baseline instances. This adaptive background modeling approach can be widely adopted in applications with unchanged scenes.
Reliability-Based Weighted Blending
As the background distribution for each reference view is obtained by the proposed background modeling method discussed in Section 3.2, two background images are projected into the virtual viewpoint and then blended into one background image in the virtual viewpoint (represented by I_B). Previous research shows that GMM has an inherent capacity to capture background and foreground pixel intensities; missing pixel intensities of an occluded area are successfully recovered by exploiting temporal correlation.
In our proposed method, weighting factors are also applied to blend the two reference views and one background image into a synthesized image. Two reference texture images are projected to the virtual view using their corresponding refined depth maps, and two intermediate texture images I_L, I_R and depth images D_L, D_R are obtained. The reliability-based weighted blending process to produce the virtual image I_V is as follows: (1) If a pixel is filled in both I_L and I_R, the two depth values are compared first. If the depth value of one pixel is much bigger than the other, this indicates that one pixel is obviously nearer to the capturing device, and I_V is filled with the pixel with the bigger associated depth value. If the two depth values are very close, weighting factors are utilized and I_V is formulated as I_V = W_L × I_L + W_R × I_R, with W_i = WD_i × WR_i for i ∈ {L, R}, where WD is the weighting factor for the inversely proportional distance between the reference view and the virtual view, and WR is the weighting factor for the previously defined reliability of the depth value. One of three values (r_H, r_M, or r_L) is assigned to WR when the pixel in the reference intermediate image is mapped by a reliable, refined, or unreliable depth value, respectively. It should be noted that W_L and W_R are normalized so that W_L + W_R = 1. (2) If only one pixel is filled from the two reference views, for example only I_L is filled, the reliability of I_L is taken into consideration. If I_L is mapped by a reliable depth value, I_V can simply be filled with I_L (I_V = I_L). Otherwise, background information is used to generate I_V: if D_L is close to the background depth value, then I_V = (I_L + I_B)/2; if D_L is much bigger than D_B, I_V = I_L. (3) If the pixels in both reference views are not filled, we use the constructed background image to deal with the hole-filling challenge. First, we check the surrounding depth values of I_V and use the filled depth values to determine a proper depth range. Then I_V is filled by the background pixel if its depth value lies in the obtained range. Otherwise, inverse warping and classical inpainting are applied to fill I_V.
We propose this hole-filling method to ensure that background pixels are appropriate for filling the remaining hole regions. Because the method adopts depth information, background pixels can be chosen to improve the rendered image quality even when the hole is surrounded by foreground objects.
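The decision logic of the blending step can be summarized in the short C++ sketch below. It follows the three cases described above but simplifies case (3) to a direct background lookup (the surrounding-depth check and the inpainting fallback are omitted); the reliability weights r_H, r_M, r_L and the depth-closeness threshold are illustrative values, not the parameters used in the paper.

#include <cmath>
#include <optional>

// One warped pixel: texture intensity, depth, and the reliability class of the
// depth value that produced it (reliable / refined / unreliable).
enum class Reliability { Reliable, Refined, Unreliable };
struct WarpedPixel { double tex; double depth; Reliability rel; };

// Illustrative constants (not taken from the paper).
constexpr double kDepthCloseTh = 3.0;               // "two depth values are very close"
constexpr double kRH = 1.0, kRM = 0.6, kRL = 0.3;   // reliability weights r_H, r_M, r_L

double reliabilityWeight(Reliability r) {
    switch (r) {
        case Reliability::Reliable: return kRH;
        case Reliability::Refined:  return kRM;
        default:                    return kRL;
    }
}

// Blend one virtual-view pixel from the two warped views and the background.
// wdL/wdR are the distance-based weights (inversely proportional to the baseline distance).
double blendPixel(std::optional<WarpedPixel> left, std::optional<WarpedPixel> right,
                  double wdL, double wdR, double bgTex, double bgDepth) {
    if (left && right) {                                  // case (1): both views filled
        if (left->depth - right->depth > kDepthCloseTh)  return left->tex;
        if (right->depth - left->depth > kDepthCloseTh)  return right->tex;
        double wL = wdL * reliabilityWeight(left->rel);
        double wR = wdR * reliabilityWeight(right->rel);
        const double sum = wL + wR;                       // normalize so that wL + wR = 1
        wL /= sum;  wR /= sum;
        return wL * left->tex + wR * right->tex;
    }
    if (left || right) {                                  // case (2): only one view filled
        const WarpedPixel& p = left ? *left : *right;
        if (p.rel == Reliability::Reliable) return p.tex;
        if (std::abs(p.depth - bgDepth) <= kDepthCloseTh) return 0.5 * (p.tex + bgTex);
        return p.tex;                                     // clearly in front of the background
    }
    return bgTex;   // case (3): hole; simplified here to the constructed background pixel
}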
Depth Map Processing Method
After weighted blending is completed, the warped texture image and depth map are entirely filled. However, cracks and pinholes introduced in the previous process can still be observed in the rendered image. In previous methods, a classical median filter was applied to smooth the texture image or remove these artifacts. In our framework, a depth map processing method (DMPM) is proposed instead. Not only the above-mentioned artifacts, but also the background pixels that appear in cracks of foreground regions (shown in Figure 4a,b) can be removed. This method has the advantage of preserving texture details, since it is only performed on the detected coordinates.
The main idea of DMPM is based on the fact that pixel values in depth maps always change smoothly within a large area, except at the sharp edges in the boundary area between foreground objects and the background. These features allow easy detection of noise in depth maps. In fact, most artifacts and noise caused by inaccurate depth values are already reduced by the previously introduced depth refinement, but some unreliable or undetected depth values remain in the reference depth map, most of them in out-of-boundary areas and occluded areas. Therefore, DMPM is still necessary. The details of the depth map processing method are as follows: (1) A conventional median filter is applied to the coarse depth map d_in(x, y) to obtain an improved depth map d(x, y). It is capable of removing the existing noise while preserving the sharp boundary information. (2) The texture image I_in(x, y) is refined according to the improvement of its associated depth map. If the condition |d(x, y) − d_in(x, y)| > ε is satisfied (ε is a threshold value for the depth difference), this indicates that the depth value of the pixel is unreliable, and it is renewed after the median filter. An inverse mapping process using the updated depth value is then employed to find an appropriate texture pixel. A depth range [d(x, y) − ε, d(x, y) + ε] is used as a candidate range to find the corresponding pixel in the two reference views. Through the projection equations, we can get a corresponding reference pixel location (u_r, v_r) from pixel (x, y) and the associated depth values z_v and z_r, where A and b denote the rotation matrix and translation matrix, respectively. Several measures are used to make sure a highly reliable pixel is obtained by backward warping. First, the depth value of the obtained pixel should be close to the updated depth value d(x, y). Second, the disparity between (x, y) and (u_r, v_r) should not be too large, according to the alignment of the reference viewpoint and the virtual viewpoint. In our previous method, we simply used a median filter on (x, y), and this turned out to be very effective when the texture of the area was smooth. However, a median filter easily produces blurring effects when the scene has detailed textures. Unlike texture images, the smooth regions in the depth map are not harmed by filtering of the gray-value distribution. After the renovation is conducted, the associated texture image is updated according to the improvement of its depth map.
Figure 4d shows the updated version of the integrated depth map, where the infiltration errors and unnatural depth distribution are eliminated by the classical median filter, while the sharp edges are preserved. Comparing Figure 4a,c, the DMPM generates a desirable improvement and avoids filtering of the entire image at the same time.
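An operational sketch of DMPM in C++ is shown below: the integrated depth map is median-filtered, and only the texture pixels whose depth value changes by more than ε are renewed. The image container is a placeholder, and the backward-warping helper is reduced to a co-located lookup; in the actual method that step projects through the camera matrices A and b and verifies the fetched depth against the range [d − ε, d + ε].

#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// Simple row-major image container used only for this illustration.
struct Image {
    int w = 0, h = 0;
    std::vector<double> px;
    double  at(int x, int y) const { return px[y * w + x]; }
    double& at(int x, int y)       { return px[y * w + x]; }
};

// 3x3 median filter applied to the integrated depth map.
Image medianFilter3x3(const Image& in) {
    Image out = in;
    for (int y = 1; y < in.h - 1; ++y)
        for (int x = 1; x < in.w - 1; ++x) {
            std::array<double, 9> win;
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    win[k++] = in.at(x + dx, y + dy);
            std::nth_element(win.begin(), win.begin() + 4, win.end());
            out.at(x, y) = win[4];
        }
    return out;
}

// Placeholder for the backward-warping step: here we simply sample the co-located
// reference pixel to keep the sketch self-contained.
double backwardWarpTexture(const Image& refTex, const Image& refDepth,
                           int x, int y, double d, double eps) {
    (void)refDepth; (void)d; (void)eps;
    return refTex.at(x, y);
}

// DMPM: filter the depth map, then renew only the texture pixels whose depth
// value changed by more than eps after filtering.
void depthMapProcessing(Image& tex, Image& depth,
                        const Image& refTex, const Image& refDepth, double eps) {
    const Image filtered = medianFilter3x3(depth);
    for (int y = 1; y < depth.h - 1; ++y)
        for (int x = 1; x < depth.w - 1; ++x) {
            const double dNew = filtered.at(x, y);
            if (std::abs(dNew - depth.at(x, y)) > eps) {
                depth.at(x, y) = dNew;                     // renew the unreliable depth value
                tex.at(x, y) = backwardWarpTexture(refTex, refDepth, x, y, dNew, eps);
            }
        }
}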
Experimental Results
In this section, the proposed framework is implemented in C++ based on OpenCV, and the tested multiview video plus depth sequences include two Microsoft datasets: Ballet and Breakdancers. In all video sequences, the size of each frame is 1024 × 768 pixels, and each video contains 100 frames with an unmoving background. The baseline between two adjacent cameras is 20 cm for both Ballet and Breakdancers. The associated depth maps and camera parameters are provided with the sequences. The format of all video sequences is AVI, and the texture images contain three channels (RGB).
To evaluate the performance of the proposed method, we implemented two state-of-the-art methods and our previous work [26] for comparison with the proposed approach. The first is a commonly used reference software, VSRS 3.5 [27], which mainly contains a simple DIBR method [3] and a classical inpainting technique [10]. The second is a hole-filling method exploiting temporal correlations based on GMM [5]. These two methods represent the exploitation of spatial correlation (VSRS 3.5 [27]) and temporal correlation (the GMM-based method [5]), respectively. In each experiment, the test sequence was composed of three real video sequences from three reference viewpoints. The coded left and right views with their associated depth videos were projected to interpolate the virtual video at the target viewpoint between them. The rendered sequence was compared with the actual video at the target viewpoint to measure the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). To show the wide practical applicability of the proposed synthesis algorithm, each view synthesis method was performed on both small-baseline and large-baseline instances. Tables 1 and 2 show the average PSNR and SSIM values over 100 frames. In the PSNR evaluation, the proposed approach obtained 4-10 dB better results than VSRS 3.5 on Ballet for the large-baseline instance. In the case of a small baseline, the results for both Ballet and Breakdancers were also better. The proposed method also outperformed the GMM-based disocclusion filling method and our previous work. Inpainting is an effective algorithm for filling narrow gaps and other small empty regions when the baseline is small; however, it is not practical for filling large empty regions. Moreover, our previous work did not perform well on either the Ballet or the Breakdancers sequence, because a simple GMM is not capable of dealing with scenes in which the foreground objects have reciprocating motion.
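For completeness, the per-frame PSNR reported above can be computed as in the following plain C++ routine for 8-bit grayscale frames of equal size; this is the textbook formulation with a peak value of 255, not the evaluation code used in the experiments, and the SSIM computation is omitted.

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// PSNR between a synthesized frame and the captured ground-truth frame,
// both given as 8-bit grayscale buffers of identical size.
double psnr(const std::vector<std::uint8_t>& synthesized,
            const std::vector<std::uint8_t>& reference) {
    double mse = 0.0;
    for (std::size_t k = 0; k < reference.size(); ++k) {
        const double diff = static_cast<double>(synthesized[k]) - static_cast<double>(reference[k]);
        mse += diff * diff;
    }
    mse /= static_cast<double>(reference.size());
    if (mse == 0.0) return std::numeric_limits<double>::infinity();
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}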
Consequently, the proposed approach yielded better results on both tested sequences, and the larger the baseline, the better the results. In addition to the objective measurements, Figure 5 shows a subjective comparison. Figure 5a presents the synthesized results generated by a simple DIBR technology, where the disocclusion regions and pinholes remain to be filled. Figure 5b shows the performance of VSRS 3.5, where large empty regions are filled based on neighboring texture information; blurring effects are observed, in contrast to our proposed method in Figure 5e. This improvement comes from our idea of avoiding global processing of every pixel to handle the noise. Hence, our method shows desirable results in reducing errors and removing unwanted effects, while the texture remains sharp and clear. Figure 5c shows an enlarged part of the synthesis result produced by the GMM-based disocclusion filling method; the temporal-correlation method performs better at filling large empty areas than the inpainting method. Depth refinement and weighted blending lead to much more satisfactory interpolation results, as shown in Figure 5e.
Frame-by-frame comparisons of PSNR and SSIM are shown in Figure 6. Figure 6a,b show a synthesis result with a large baseline: viewpoint cam03 is interpolated from cam01 and cam07. Another PSNR and SSIM comparison (Figure 6c,d) comes from a small baseline, where two reference viewpoints, cam03 and cam05, were utilized to render the target virtual view cam04. Both instances are from the Ballet sequence. Exploring temporal correlations to fill the disocclusions clearly yields better performance than the inpainting-based view synthesis method, which only explores spatial correlation, especially when the baseline is large. In all frames, our proposed framework shows more stable output than the GMM-based method.
In this article, we additionally tested the computation time of all four approaches. The greater improvements in subjective and objective image quality come at the cost of more complex computation: in our proposed method, the 3D warping process is performed six times and adaptive background modeling is applied twice, which is why the computation cost of the proposed method is high. However, this cost can be mitigated. First, GPU-accelerated algorithms are commonly used for image processing and hardware performance is growing rapidly, so the increased computation time per frame will not add much to the time for synthesizing the whole sequence if a parallel algorithm is adopted. Second, due to the mechanism of our proposed approach, we mainly explore the contribution of the depth refinement technique and adaptive background modeling, and the running time can be shortened when this method is applied in practical applications. Moreover, our proposed method is implemented using the OpenCV library, and the computation time can be reduced considerably with careful code optimization.
Conclusions
In this paper, we present a reliability-based view synthesis framework using depth refinement and an adaptive background modeling method. Multiple viewpoints are employed to render desirable virtual images. In the proposed algorithm, the disocclusion regions are filled from a combination of two sources. The first source is the two reference viewpoints: the disocclusion regions generated from one reference view are likely to be visible from the other reference view owing to the different position and viewing angle. If the disocclusion regions are missing in both reference views, the updated background image is utilized to fill the static regions. Experimental results indicate that depth refinement obviously improves the accuracy of the depth map, thus improving the performance of the proposed adaptive background modeling and forward (and backward) warping. In addition, an adaptive median filter and DMPM are proposed to replace the classical median filter owing to their ability to eliminate unwanted effects and noise while ensuring high-quality texture images. The experimental results show that the combination of the proposed techniques yields satisfactory subjective and objective improvements. There are three aspects to focus on in our future research. First, we will focus on improving synthesis quality while reducing computational complexity. Second, we will explore how to construct a stable temporal correlation for complex scenes with moving cameras. Finally, as deep learning is becoming more popular in various types of research, deep view synthesis seems to have a bright future.
Figure 1. Framework of the proposed view synthesis: (a) illustration of depth refinement; (b) the framework of the proposed approaches using refined depth information.
Figure 2. Result of the depth consistency cross-check.
Figure 4. Examples of the depth map processing method: (a,b) enlarged integrated texture image and its associated depth map before the depth map processing method (DMPM); (c,d) image and its associated depth map after DMPM.
Author Contributions: Z.D. and M.W. designed the experiments. Z.D. performed the experiments. Z.D. wrote the paper and analyzed the data. M.W. contributed simulation tools. M.W. supervised the whole work.
Table 1. Average peak signal-to-noise ratio (PSNR) comparison of the proposed technique and three state-of-the-art techniques.
Table 2. Average structural similarity index (SSIM) comparison of the proposed technique and three state-of-the-art techniques. | 9,382.8 | 2018-05-20T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Android-Based Material for Sports Massage Learning
Android is a Linux-based operating system for mobile devices such as smartphones and tablet computers, and applications built on it can be designed as digital media to facilitate the need for teaching materials. The purpose of this study was to develop Android-based sports massage teaching materials for students in the Physical Education, Health, and Recreation Study Program. The research method was research and development with the Borg and Gall model. Data collection techniques included observation, interviews, questionnaires, tests, and documents. The participants involved were students, colleagues, and experts. The results of the study indicate that Android-based teaching materials for sports massage learning make it easier to understand the content of the material. Students can access the materials easily at any time, both on and off campus. The application is called "MassageSmartClick"; it can be accessed on mobile phones, and students can also study the printed book content that is presented in the application. It has several menus, namely the splash menu, main menu, competency menu, material menu, illustration menu, evaluation menu, and information menu.
Introduction
Technological advances have been utilized for the implementation of teaching and learning in the 21st century. In addition, students, as digital natives, can use various technological applications, and educational processes must also direct students to develop learning skills using technology and information media that they can apply in everyday life. Therefore, educators must adjust to the current development of knowledge and technology so that they can change traditional learning patterns into modern technology-based learning; that is, the educational process is carried out using technological products. Products of technological progress have broad benefits if teachers can use them well. A survey of 2,462 Advanced Placement (AP) and National Writing Project (NWP) teachers in America found that digital technology has helped teachers teach middle school and high school students in many ways, while at the same time the internet, mobile phones, and social media have brought new challenges to teachers [1]. Mobile learning is a new form of learning that takes advantage of the unique capabilities of mobile devices; for example, the use of digital pictures delivered via mobile devices proved to be surprisingly successful [2]. In colleges, the use of smartphones is still a hot trend; however, students do not yet have full readiness to use mobile devices as a learning medium. A study found that student acceptance of m-learning was quite good; more specifically, attitudes, subjective norms, and behavioural control positively influence students' intention to adopt mobile learning, and the results provide valuable input for increasing student acceptance of mobile learning [3]. On the other hand, mobile learning is not a stable concept; therefore, its current interpretations need to be made explicit. Examples of current projects and practices show an affinity between mobile and game-based learning and can further illuminate what is distinctive and worthwhile about mobile learning [4].
Mobile learning can provide the best support for learning in the dissemination and improvement of the quality of learning content: teachers can innovate on learning content and students can access the content easily. In addition, mobile learning also facilitates the communication process during learning. Thus, mobile learning can be regarded as a learning medium that can be designed for both synchronous and asynchronous methods. This medium can facilitate the teaching and learning needs of all subjects, one of which is physical education and sports. Several studies on the trend of using mobile learning in physical education, published in reputable journals, show that the integration of new mobile technologies into physical education activities has increased from year to year. The application of mainstream mobile learning teaching strategies in physical education is still limited; therefore, several research issues for mobile technology in physical education are recommended [5]. Other research shows that students who have low digital skills are often not ready to use mobile learning [6]. Therefore, one effort to overcome this problem is to design mobile learning systems that are easy to use and simple; they may even be designed as flexible applications that can be operated with or without the internet.
One of the efforts to use technology in learning is to make the learning process more effective, efficient, and interesting. The use of these technologies includes designing and developing learning materials in attractive, technology- and information-based packaging. The materials must not only be attractive in appearance but also be able to improve the quality of interaction in the student learning process. Therefore, the presentation of teaching materials must be harmonized with technological devices so that learning activities become easier and more interesting. One principle we use to design the materials is motivation. Motivation can be enhanced through challenge, curiosity, control, recognition, competition, and cooperation. This model helps inform our understanding of the motivating features of using mobile devices for learning and how mobile technologies can be used to enhance learners' motivation [7].
From the results of field observations at Institut Keguruan dan Ilmu Pendidikan Budi Utomo Malang, it is known that the Sports Massage course is a compulsory subject and must be taught with the integration of technological devices. Meanwhile, reviews of the Physical Education, Health and Recreation curriculum structures at other universities in Indonesia, such as Universitas Pendidikan Indonesia, Universitas Sebelas Maret Surakarta, and Universitas Tadulako, concluded that the sports massage course is also a compulsory subject and is presented with authentic, open materials. Given these findings, the ideal learning conditions in sports massage courses should create a pleasant learning climate, attract attention, and improve 21st-century learning competencies and innovation skills that involve technological advances through the presentation of relevant teaching materials. Therefore, teaching materials must be designed to develop the skills of the 21st century, which include (a) critical thinking and problem solving, (b) communication and collaboration, and (c) creativity and innovation.
However, this ideal condition has not been realized. Therefore, this research is important for the Study Program in designing teaching materials for the sports massage course. The implementation of this research is supported by several reasons: 1) technological devices have not been used in the provision of teaching materials, 2) students own technological devices that can support online learning activities, 3) authentic teaching materials are needed that can stimulate student learning experiences by utilizing the technological tools students own, 4) the learning process is required to be relevant to the digital era, and 5) universities must be able to produce graduates who are competent in the 21st century.
Many previous studies have examined the relationship between the development of teaching materials and technology. Educational institutions must develop digital models for all aspects of education delivery [8], [9]. Currently, Internet of Things-based school sports intelligence systems and artificial intelligence technology influence the management of school sports [8], [10], [11], which means that schools have entered the digital era. In Sweden, adequate digital competence is in the spotlight due to Sweden's 2017 national strategy for digitizing the K-12 school system [12]. Various technological tools have been applied, such as video, 3D animation, and website applications [13]. Multimedia has provided theoretical references for research related to sports teaching and learning [10], [14], [15]. However, some educators do not understand the application of these tools and ICT in learning [16], [17]. Mobile technology has also been utilized as multimedia in teaching and learning at all levels of education; teachers have taken advantage of the combination of affordable mobile technology and apps that improve several aspects of learning practice [18]. Especially during the current COVID-19 pandemic, mobile technology has made it easier to carry out learning [16], [18], [19].
Three constructs characterizing the pedagogy of mobile learning have emerged: authenticity, collaboration, and personalization. This pedagogical framework provides a spotlight to illuminate and examine mobile learning experiences, but account still needs to be taken of learners' specific characteristics and needs, the environments in which the learning could potentially take place, and the preferences and characteristics of teachers, including their epistemological beliefs. Teacher roles and the design of learning tasks are further crucial factors [20].
The studies above concluded that technology has played a very large role in the teaching and learning process. In this study, however, the technology relates to the development of sports massage teaching materials in the form of an Android application, which constitutes a gap with previous research. This development focuses on designing an Android-based application tailored to the context of the university and students' backgrounds. Thus, the research is expected to provide many benefits for the need for authentic teaching materials in sports teaching. Moreover, students at the university own technological devices that support learning activities, and teaching materials presented in the form of an Android application help facilitate teaching and learning during the COVID-19 pandemic. Thus, the purpose of this study is to develop Android-based teaching materials for sports massage learning that fit the needs of the student context and technological advances.
Research Method
The study used a research and development (R&D) method based on the Borg and Gall model and the ADDIE model. The two models were adapted into three main steps, namely 1) preliminary study, 2) product trial, and 3) dissemination. The procedural development followed these steps, starting from the initial analysis up to testing product effectiveness during implementation.
Participants
The research was carried out from July 2019 to May 2021. Three universities in East Java were involved in this research, and participants were taken randomly from each university. In total, 200 students served as research subjects who used and provided feedback on the product we developed. These students were divided into three groups: 20 students who participated in the initial trial, 100 students who were involved in the main field trial, and 80 students who were involved in product implementation. The product implementation process was carried out using the true experimental control group method, with 40 students in the experimental group and 40 students in the control group. In addition to the students, colleagues and experts were also involved as participants.
Process of Collecting Data
The process of collecting data used several techniques, namely: a) Observation. An observation guide was developed to describe the current learning environment. It was used in the preliminary study to obtain information on the problems with the material in sports massage learning and to gather needs-analysis data related to the sports massage learning material. b) Document. Documents were taken from the syllabus and the materials that had been used in sports massage learning at the universities. c) Interview. Interview guidelines were prepared to determine the level of student satisfaction with the developed application and to obtain the students' and colleagues' perceptions of the product during the product trials, both small and large. Students who were involved in the product trials in small and large groups were interviewed by the researchers at the end of the lesson; 85% of the participants at this stage were interviewed. d) Questionnaire. The questionnaire was developed with the indicators mentioned in the questionnaire section to determine the quality of the developed product from the students' perspective. It was distributed to students and colleagues to obtain the needs for teaching materials in sports massage learning related to technology. In addition, the questionnaire was also used to assess the product tested in the small and large groups. The questionnaire was designed as a closed questionnaire in which participants had to choose from the provided answers to each question, using a Likert scale with five answer choices. The validity of the instrument was established through expert judgment by technology experts, curriculum experts, and experts in physical activity and sports, while the reliability of the questionnaire items was assessed using Cronbach's alpha; with a criterion of >0.70, the test result of 0.76 > 0.70 showed that the questionnaire was reliable. e) Test. The test was used to measure the improvement of students' achievement in massage learning using the Android application.
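For reference, the internal-consistency coefficient referred to above is conventionally computed with the standard Cronbach's alpha formula shown below; this is the textbook definition for k questionnaire items, not a derivation or value reported by this study.

% Cronbach's alpha for k items; sigma_i^2 is the variance of item i and
% sigma_X^2 is the variance of the total score over all respondents.
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{X}^{2}}\right)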
Procedure
The research began with the collection of initial data consisting of a literature review, observations of current learning, and a needs-analysis questionnaire at several universities. These data were used to design the mobile learning application model to be developed; storyboards and other materials (text, images, and videos) were collected and then assembled into the product. The product was packaged as an *.apk file and installed on smartphone devices. Once it ran on a smartphone, the product was tested on a limited basis by the researcher and the development team and revised where necessary. The finished product was then validated by experts to establish its content validity; the experts included learning technology experts, sports massage material experts, and teaching and learning experts. After validation, the product underwent an initial trial with 20 research subjects to determine their satisfaction with it. Based on these results, the product proceeded to the main field trial with 100 subjects at several universities in Indonesia, and the feedback from the field trial was used to revise the product before implementation. Product implementation was carried out using a true experimental control-group design at three universities with a total of 80 subjects. The results of this implementation showed the level of student satisfaction with the product and the product's effectiveness in helping students achieve better learning outcomes.
The procedure of the study combined the Borg and Gall model with the ADDIE model, as illustrated in the following figure.
Result
The needs analysis concluded that: a) the sports massage learning process still relied on books and an LCD projector, requiring an electrical outlet, a laptop, and a screen to deliver the material; b) the time available for sports massage learning was limited, so learning was not optimal; c) the material presented drew on only a few learning resources, namely lectures and textbooks that were sometimes difficult for students to find; and d) learning media needed to be renewed using technology, because textbooks visualising basic sports massage techniques were still limited. Written descriptions alone could give students the wrong perception of massage technique movements if interpreted incorrectly, whereas video simulations of the movements would make the visualisation clearer and more precise. These findings justified the development of a sports massage learning model suited to current needs and technological developments.
Several conclusions from the needs analysis were important for the development of Android-based sports massage teaching materials. 1. The data showed that 61.70% of sports massage lessons were taught once a week and 38.30% twice a week; as a result, students' understanding of the material was quite low, and they had difficulty learning without supporting media. 2. Only 6.38% of students opened learning applications, whereas 40.43% opened Facebook and 23.40% opened Instagram. 3. 38.30% of students chose Android application media, 23.40% chose printed books, and 14.89% chose learning VCDs, indicating enthusiasm for Android-based media adapted to current technological developments; such media can be used for learning anytime and anywhere, unconstrained by space and time. 4. 97.87% of students already owned an Android phone and only 2.13% did not, so packaging the sports massage learning media as an Android-based application posed almost no access barriers. 5. 87.23% of students agreed that sports massage learning should be designed as an Android application.
The following pictures show the "MassageSmartClick" application as designed for a mobile phone. The application has several menus. 1. The splash screen. Picture 2 shows the splash screen of the Android App product after revision. Picture 2. Splash Screen "Massage SmartClick". The splash screen is divided into four displays: 1) the application name "MassageSmartClick", 2) a welcome message, 3) a homepage stating which competencies are gained by using the application, and 4) an "Attention" notice that should be read before opening the application.
Picture 3. Menu Screen "Massage SmartClick"
Picture 3 shows the main menu, which contained icons designed to make it easy for students to choose the submenu they wanted to open next. 4. The material sub-menu. Picture 5 contained sports massage material and sports massage techniques arranged so that students could easily understand the entire content. 5. The illustration sub-menu. Picture 6 contained videos of sports massage technique movements, making it easier for students to understand good and correct sports massage techniques.
Picture 7. Example Video of Massage Motion in "Massage SmartClick"
6. The evaluation sub-menu.
Picture 8. Evaluation Sub-menu "Massage SmartClick"
Picture 8 contained questions on all the material presented; the questions were multiple choice, and students could see their scores directly after answering. The goal was to find out how far students had mastered the material presented. 7. The information sub-menu, giving information on the application developers who designed and supported the creation of the MassageSmartClick Android application.
Picture 9. Information sub-menu "Massage SmartClick"
Picture 9 presents the three product designers. The main designer, who carried out the research activities, was Boby Ardianzah; product development was assisted by the research supervisors Prof. Dr Achmad Sofyan Hanif and Dr Imam Sulaiman.
The test results in the large-group trial also showed that students' massage learning outcomes improved significantly after using the new teaching material model. The Independent Samples Test yielded t = 12.588 with a two-tailed significance (p-value) of 0.000 < 0.05, so H0 was rejected. It can therefore be concluded that the Android-based teaching materials model had a significant influence on the mastery of massage learning.
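For reference, an independent-samples comparison of the kind reported here can be run as in the sketch below. The group scores are hypothetical stand-ins for the experimental and control post-test results, not the study's data; only the testing procedure is illustrated.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical post-test scores for the two groups of 40 students each.
experimental = rng.normal(loc=85, scale=6, size=40)  # taught with the Android application
control = rng.normal(loc=70, scale=6, size=40)       # taught with conventional media

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=True)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")
if p_value < 0.05:
    print("H0 rejected: the group means differ significantly.")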
Discussion
The Android application product that had been developed was designed very clearly, with several menus that made it easy for students to use. The menus displayed were: 1) the splash screen "Massage SmartClick", shown on the first page of the application; 2) the menu screen, the start page listing the menus in the application; 3) the competency sub-menu screen, showing information on the learning activities for one semester; 4) the material sub-menu screen, containing the various sports massage materials designed according to student needs; 5) the illustration sub-menu screen and example massage motion videos, with examples of sports massage technique movements that made it easier for students to understand all the movements; 6) the evaluation sub-menu, describing how student learning outcomes are assessed; and 7) the information sub-menu about the product designers.
The development product, in the form of an Android-based sports massage learning application, provided complete material. Many applications have been developed on the Android platform. Web3D, for example, is a concept for interactive three-dimensional (3D) web content supported by WebGL (Web Graphics Library), a 3D graphics application programming interface (API) that allows internet browsers to create 3D scenes [21]. Teachers can likewise design material using Android applications as the basis of their design, and coaches can use Android-based designs when preparing material to train sensory-motor skills [22], [23]. The product presented the history of the development of sports massage, the purposes and benefits of sports massage, indications and contraindications of sports massage, requirements in sports massage, principles of sports massage, and basic sports massage techniques along with examples of those techniques. The sports massage material packaged in the application and textbook was arranged systematically, from easy to difficult. Technology has also made learning activities easier in the era of the COVID-19 pandemic [24]. With this application, students can still understand the basic techniques of massage movements in sports.
The product was equipped with an application manual integrated into the "Learn Easy Sports Massage" textbook, which was designed to be as attractive as possible so that readers can easily understand and practise sports massage techniques. Android-based digital books can trigger an interactive and independent learning environment among students and between students and teachers, because students' enthusiasm for learning tends to be high [25], so the learning targets can be delivered completely. Remaining limitations can be addressed in further research. The sports massage learning application can be installed from the Play Store by any user with an Android device, provided around 500 MB of storage is available. The sports massage technique videos were recorded in the multipurpose building of IKIP Budi Utomo Malang, and the demonstrations and equipment used were kept simple to make it easier for students to practise in the field.
In addition to its advantages, the development of the Android-based sports massage model in this course also had several limitations or shortcomings. After several improvements and revisions, however, these shortcomings were expected to be minimised. The presentation of the massage learning videos could also be complemented by physiological and psychological effects as evidence from clinical trials on the effects of mastering basic massage techniques [26]. The main disadvantages of the "MassageSmartClick" application are: a) the product is still very simple in its overall design and appearance; b) the application can only be installed and opened on smartphones running the Android OS; and c) the application size cannot be reduced further without degrading the quality of the illustration videos, so it remains above 300 MB.
Conclusion
"MassageSmartClick" is a product of Android-based teaching materials that can be accessed on a mobile phone anywhere and at any time. The product proved feasible and effective in improving massage learning in and outside the classroom. The product trials showed that students also had high learning enthusiasm and that the display on each menu was clear and attractive to them. This also had an impact on the mastery of basic massage techniques and on the achievement of learning outcomes.
In addition, the results of the study have implications for a learning process that becomes easier and faster, because students can learn anytime and anywhere without being bound by space and time, so that the learning objectives can be achieved. Based on the results of expert validation, field testing, and effectiveness testing, it is suggested that lecturers who teach sports massage courses apply this development product as a learning medium in their on-campus lectures.
However, the development of this product was still limited to one course, and the videos used during the research were also uploaded to YouTube. The research can therefore be extended by designing videos of students' own massage learning outcomes as examples of the basic techniques of massage movements; this can also serve as a form of appreciation for student groups completing assignments. The Android-based sports massage model should be followed up by promoting innovative use on other campuses so that the interaction between lecturers and students becomes more meaningful. | 5,644.8 | 2022-02-01T00:00:00.000 | [
"Education",
"Computer Science"
] |
Implementing Core Genes and an Omnigenic Model for Behaviour Traits Prediction in Genomics
A high number of genome variants are associated with complex traits, mainly through genome-wide association studies (GWAS). Polygenic risk scores (PRSs) are a widely accepted method for estimating an individual's predisposition to a complex trait from such data. Unlike for monogenic traits, however, the practical implementation of this method for complex traits still lags behind. Calculating PRSs from all GWAS data has limited practical usability for behaviour traits because of statistical noise and the small effect sizes of the large number of genome variants involved. From a behaviour traits perspective, complex traits are explored here using the concept of core genes from an omnigenic model, with the aim of employing a simplified version of the calculation. This simplification may reduce accuracy compared to a complete PRS encompassing all trait-associated variants. Integrating genome data with datasets from various disciplines, such as IT and psychology, could lead to better complex trait prediction. This review elucidates the significance of clear biological pathways in understanding behaviour traits. Specifically, it highlights the essential role of genes related to hormones, enzymes, and neurotransmitters as robust core genes shaping these traits. Significant variations in core genes are prominently observed in behaviour traits such as stress response, impulsivity, and substance use.
Introduction
Understanding complex behaviour traits is an important part of human genomics research. Behaviour traits, such as impulsivity, stress, depression, and addictive tendencies, profoundly affect individual lives and social interactions. However, their multifactorial nature, influenced by both genetic and environmental factors, and the vast number of genetic variants involved present significant challenges in elucidating their genetic underpinnings.
This review is grounded in a narrative approach. It explores the emerging concept of core genes and the omnigenic model, aiming to shed light on the genetic basis of behavioural traits and to provide an original perspective on the genetic architecture of complex traits, moving beyond the traditional genome-wide association studies (GWAS) and polygenic risk score calculations that have dominated the field. The interplay of core genes within intricate biological pathways is investigated, with the goal of achieving a deeper understanding of behaviour characteristics. This approach, focusing on genes with clear biological pathways and significant influence on behaviour, suggests a more standardised approach to the practical prediction of behaviour traits.
In the following sections, the review discusses the intricacies of genome-wide association studies, the practical application of combined genome variants for complex traits, and the implementation of the omnigenic model. The review also presents the significance of biological pathways and the role of core genes in shaping behaviour traits. It is intended to provide a comprehensive understanding of the current state of attempts to predict behavioural traits using genomics and of potential future directions in this field.
Complex and Monogenic Traits
Unlike monogenic traits, complex traits are not explained by one or a few genes but are characterised by polygenicity, where multiple genetic variants contribute to the trait's manifestation. Furthermore, studying complex traits is challenging because they are determined by a combination of genetic and environmental factors [1].
Genomic variation in monogenic traits is widely used for practical purposes such as genetic counselling, diagnostic testing, carrier screening, and personalised medicine. Such wide practical use is possible because of the simpler nature of monogenic trait manifestation and the constantly improving quality of the information collected by researchers and stored in public databases such as ClinVar [2].
The Role and Limitations of Genome-Wide Association Studies
The field of complex inheritance genomics has experienced a substantial surge in available data. A key contributing factor for associating thousands of genome variants with complex traits has been the widespread implementation of genome-wide association studies (GWAS) [3,4].
The usual way in which GWAS data are utilised at the individual level is by calculating a polygenic risk score (PRS). The PRS is a quantitative measure that combines the effects of multiple genetic variants across the genome to estimate an individual's genetic predisposition to a particular trait. Researchers assign weights to each genetic variant based on its association with the trait in the GWAS study, sum up the weighted effects of all the genomic variants related to that trait, and the resulting single numeric value represents the individual's possible predisposition to the trait [5,6].
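In essence, a PRS is a weighted sum of effect-allele counts. The sketch below illustrates that calculation with invented variant IDs, effect alleles, weights, and genotypes; it is not a validated scoring pipeline and omits practical steps such as quality control, linkage-disequilibrium pruning, and score normalisation.

# Minimal PRS sketch: sum of (effect-allele dosage x GWAS effect weight) over variants.
# The variant IDs, weights, and genotypes below are illustrative only.

gwas_weights = {
    # variant_id: (effect allele, per-allele weight, e.g. a log odds ratio from GWAS)
    "rs0000001": ("T", 0.12),
    "rs0000002": ("A", -0.05),
    "rs0000003": ("G", 0.30),
}

individual_genotypes = {
    # variant_id: genotype of the individual
    "rs0000001": "TT",
    "rs0000002": "AG",
    "rs0000003": "GG",
}

def polygenic_risk_score(genotypes, weights):
    score = 0.0
    for variant, (effect_allele, beta) in weights.items():
        genotype = genotypes.get(variant, "")
        dosage = genotype.count(effect_allele)   # 0, 1, or 2 copies of the effect allele
        score += dosage * beta
    return score

print(f"PRS = {polygenic_risk_score(individual_genotypes, gwas_weights):.3f}")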
However, research has shown that in many GWAS it is challenging to replicate results, and genomic data retrieved from different GWAS can be contradictory. Most importantly, having thousands of variants for one trait complicates phenotypic prediction from such genome data.
The genetics of physical performance is an excellent example of too many associated variants from GWAS data. Bray and colleagues, in a review from the year 2007, compiled a list of 214 genetic markers associated with physical performance [7]. However, Varillas-Delgado et al. highlighted the limited reproducibility of reported associations and many "false positive" findings. Similar conclusions were raised in a publication by Rankinen et al. in the Journal of Physiology, where the authors noted that while the number of genes and polymorphisms associated with physical performance continues to grow, the predictive power of these genetic markers is limited and it is problematic to translate genetic research from GWAS into practical use [8,9]. A 2023 study by Semenova and colleagues reported that, out of 251 DNA polymorphisms related to athlete status, only 128 markers consistently showed positive associations in at least two separate studies. Moreover, they provided a timeline of athletic-performance-associated genome variants, highlighting that numerous variants previously linked to physical performance were not included in the 2023 list [10].
Due to this, commercial genetic companies in their sports genetic testing tend to focus only on the main known genes, such as ACTN3 and ACE, which have very clear biological pathways related to physical performance, instead of calculating polygenic risk scores from associated variants from many genes [11].
Practical Application of Combined Genome Variants for Complex Traits
One of the better practical examples of utilising the combined genomic variation of complex traits is the forensic DNA Snapshot™ developed by Parabon NanoLab [12]. The DNA Snapshot™ system, which received funding from the US Department of Defense, uses machine learning algorithms to analyse complex trait-associated genetic variation. This enables the system to predict various physical features, including genetic ancestry; eye, hair, or skin colour; freckling; and face shape. Combining these predictions, the system generates a composite profile known as a "genetic photo robot", estimating an individual's physical characteristics. While it is only a tool for predicting the physical appearance of complex traits from individual DNA data, the prediction is accurate and has direct practical use in forensics [12,13].
Just as the DNA Snapshot™ system predicts physical appearance traits from DNA data, a similar approach could be taken to predict behaviour traits. By analysing the genomic variation associated with behaviour characteristics, such as impulsivity, risk taking, or addictive tendencies, such a tool could provide insights into an individual's behaviour profile.
Understanding human behaviour involves a complex interplay of genetic and environmental factors. Genetic information can offer valuable insights into a person's behaviour predispositions and vulnerabilities, providing a tool, for example, for awareness and self-education to improve personal well-being. By being aware of the genetic component of their behaviour, individuals can control and avoid environmental triggers associated with specific behaviour traits [14]. This knowledge empowers individuals to make informed choices and to shape their environment, ultimately helping them manage their behaviour more effectively.
However, complex behaviour traits pose even greater challenges than physical appearance traits due to the more decisive influence of environmental factors and the involvement of a higher number of genetic variants. The conventional approach of calculating PRSs from all available GWAS data has limited practical usability in the case of behaviour traits because of statistical noise and the small effect sizes of the large number of genome variants involved.
Omnigenic Model Implementation
Creating a personal behaviour profile from GWAS data is very challenging. The omnigenic model, suggested in 2017 by Boyle and colleagues, could be applied to predicting complex traits from genomic data; it expands the view of complex traits from polygenic to omnigenic [15].
The omnigenic model attempts to explain these observations by suggesting that diseases can be thought of as networks, where genes directly involved in complex traits are named "core genes", while peripheral genes are spread throughout whole-genome interaction networks. The omnigenic model represents a paradigm shift in our understanding of the genetic basis of complex traits.
Some researchers have criticised this model, arguing that although it looks appealing, it oversimplifies complex traits, underestimates their biological complexity, and that research should not focus only on discovering core genes [16].
Despite receiving some criticism, the omnigenic model has demonstrated its usefulness in various studies. For instance, Ratnakumar and colleagues utilised the omnigenic model and successfully identified novel disease-associated core genes [17]. Studies of Populus nigra and Drosophila melanogaster support the omnigenic model [18,19]. The omnigenic model has also contributed to suggesting causes for disorders such as schizophrenia and autism [20,21].
From the perspective of complex traits in behaviour genetics, narrowing the focus solely to core genes and disregarding peripheral genes with very small effect sizes may reduce accuracy compared to a full polygenic risk score encompassing all trait-associated variants. However, this simplification enables a more straightforward and easier practical implementation.
Biological Pathways for Behaviour Traits
When discussing core genes in the context of behaviour, we are referring to genes that possess well-defined biological pathways and play significant roles in critical processes associated with behaviour. These genes substantially influence behaviour by modulating various intricate biological mechanisms and pathways.
Several biological processes contribute to the complex interplay of factors influencing behaviour, such as neurotransmission, synaptic plasticity, hormone regulation, and neuronal signalling [22,23].
To elaborate further, it is important to underscore the roles of particular biochemical elements that are fundamental to these processes. Enzymes, hormones, and neurotransmitters are essential components of an organism's chemical communication and regulation systems and are vital contributors to the manifestation of behaviour traits. An enzyme is a protein that acts as a catalyst, facilitating and speeding up biochemical reactions in the body. Enzymes play crucial roles in various metabolic processes, helping to break down substances, build new molecules, and regulate cellular activities [24]. A hormone is a biological compound that serves as a regulatory messenger in multicellular organisms, organising, coordinating, and controlling cellular and tissue functions. These chemical messengers are crucial in various physiological processes, including metabolism and behaviour [25]. While hormones may modulate the expression of behaviour, they are not the cause of behaviour itself. Different behaviours are driven by a variety of internal and environmental stimuli, with hormones playing a prominent role in regulating and influencing behavioural responses [26].
A neurotransmitter is a chemical messenger that is synthesised and released by neurons and is used in synaptic communication between neurons. It is involved in the processing of sensory information and in the control of motor behaviour [27].
While hormones and neurotransmitters share the role of chemical messengers, hormones exert systemic effects by being released into the bloodstream, whereas neurotransmitters act locally within the nervous system to transmit signals between neurons. Nevertheless, the distinction between hormones and neurotransmitters can be ambiguous, as certain substances, like epinephrine and dopamine, can serve dual roles as hormones and neurotransmitters [28].
Core Genes of Behaviour
A good example of core behaviour genes with clear biological pathways related to enzymes can be found in genes associated with substance abuse. The aldehyde dehydrogenase (ALDH) enzyme is involved in the metabolism of alcohol, and gene variations related to it have been found to influence the risk of alcohol dependence [29].
The risk of alcohol dependence, although a complex trait, can be substantially influenced by just two known genome variants. These variants are also interesting because of their distinct frequencies in East Asia compared with other populations worldwide, which lead to a lower risk of alcohol dependence in East Asia [30,31].
rs1229984 is a single-nucleotide polymorphism (SNP) in the ADH1B gene, which encodes a subunit of the alcohol dehydrogenase enzyme, a crucial catalyst in hepatic ethanol metabolism. The T allele of this SNP enhances the enzyme's activity, accelerating the metabolism of alcohol to acetaldehyde, a toxic by-product. Elevated acetaldehyde levels may cause symptoms such as facial flushing, nausea, and an elevated heart rate, which deter individuals from heavy drinking and thus lower the risk of developing alcohol dependence. Similarly, rs671 is an SNP in the ALDH2 gene, known for its impact on the alcohol metabolism process. The A allele leads to a Glu504Lys substitution that significantly impairs enzyme activity. This results in an accumulation of acetaldehyde when alcohol is consumed, provoking symptoms that deter heavy drinking and may lower the risk of alcohol dependence [29,30,32]. In the genome aggregation database (gnomAD), the frequency of the rs1229984 T allele in the East Asian population is 73.9%, compared with only 3.8% in the European population, and the frequency of the rs671 A allele is 25.4% compared with 0.003% [33].
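To make the mapping from these two variants to the expected physiological response concrete, the toy sketch below encodes the allele effects described above as a simple genotype lookup. It illustrates the logic only and is in no way a clinical or diagnostic rule.

# Illustrative only: expected acetaldehyde-flush predisposition from two genotypes.
# rs1229984 T allele -> faster ADH1B (more acetaldehyde); rs671 A allele -> impaired ALDH2.

def flush_predisposition(rs1229984: str, rs671: str) -> str:
    fast_adh = rs1229984.upper().count("T")   # copies of the fast-metabolising ADH1B allele
    slow_aldh = rs671.upper().count("A")      # copies of the impaired ALDH2 allele
    if fast_adh == 0 and slow_aldh == 0:
        return "no flush-related protective alleles at these two sites"
    return (f"{fast_adh} fast ADH1B allele(s), {slow_aldh} impaired ALDH2 allele(s): "
            "acetaldehyde build-up after alcohol is more likely, which tends to deter heavy drinking")

print(flush_predisposition("CT", "GG"))
print(flush_predisposition("CC", "GA"))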
Figure 1 represents the interplay of genetic and environmental factors in alcohol dependence. The manifestation of complex behaviour traits is influenced by the interaction between core and peripheral genes, with variations in core genes having a significantly higher impact. Specific alleles of rs1229984 and rs671, in the ADH1B and ALDH2 genes respectively, lead to physical reactions like facial flushing and nausea after alcohol consumption, lowering the risk of alcohol dependence. In contrast, the alternative alleles that do not cause these physical discomforts increase the likelihood of individuals becoming alcohol-dependent, as they lack these immediate negative physical deterrents after alcohol consumption. Environmental factors, which can be modulated, significantly influence alcohol consumption and dependence, particularly in groups with a higher genetic predisposition to alcohol dependence. These factors can either promote alcohol consumption, thereby increasing dependence risk, or discourage it, consequently reducing the risk of alcohol dependence.
Prominent among the robust core genes associated with behaviour are those involved in hormone regulation. A compelling illustration is provided by genes related to stress. The FKBP5 gene is primarily associated with regulating the stress response and is important in the release of stress hormones, including cortisol. Its variants are highly associated with the stress response; for example, the G allele of the rs1360780 variant has been linked with impaired regulation of the stress hormone cortisol and has therefore been associated with increased vulnerability to stress-related disorders [34,35]. The CRHR1 gene, which encodes corticotropin-releasing hormone receptor 1, is associated with an increased reaction to stress; a good example is the T allele of rs110402, which causes increased sensitivity to stress hormone signalling [34,36]. The NR3C1 gene encodes the glucocorticoid receptor, which binds and responds to glucocorticoid hormones. Its variant rs6195 plays a role in regulating and responding to cortisol; the C allele of rs6195 has been associated with increased stress sensitivity and a higher risk of stress-related disorders [34,37].
Many genes are intricately linked to functions within the neurotransmitter domain, making them highly relevant as core determinants of behaviour traits. The CHRNA5/CHRNA3/CHRNB4 gene cluster could represent the core genes for nicotine abuse and addiction. These genes encode nicotinic acetylcholine receptor subunits, which are involved in synaptic neurotransmission. Genome variants such as rs16969968 in CHRNA5 have been strongly associated with increased nicotine dependence and with a higher risk of developing nicotine addiction. The rs578776 and rs1051730 variants in the CHRNA3 gene are likewise associated with developing nicotine addiction [38].
Another example of core genes related to neurotransmitters is those related to impulsivity. The COMT gene is involved in the metabolism of catecholamine neurotransmitters, such as dopamine, epinephrine, and norepinephrine; its variant rs4680 is known to be associated with impulsive behaviour [39,40]. The rs25531 variant of the SLC6A4 gene, which encodes the serotonin transporter protein, is also associated with impulsivity [41]. This gene, and even the same rs25531 variant, is also associated with other behaviour traits such as anxiety, depression, and suicide [42,43]. These and other examples of core genes and their variants associated with different behaviour traits can be found in Table 1. The behaviour traits included in Table 1 were selected based on their relevance to the topic of this review and the robustness of the scientific evidence linking them to specific core genes and genomic variants.
Core Genes: Ethnic and Gender Perspectives
The primary emphasis of this review is on simplifying and standardising the prediction of complex behavioural traits from genetic factors. To achieve this, priority is given to the variation observed in the core genes. The core genes are characterised by direct biological pathways and by coding for specific gene products. Any variation within these genes directly influences the associated biological processes and is likely to have a universal effect across all individuals [66]. By focusing solely on the core genes, we can minimise the scope of factors under consideration, thereby streamlining the analysis and creating a more uniform approach to predicting behaviour traits from genetic factors. However, even in the case of the core genes, it is important to consider additional factors, such as gender or ethnicity, which are considered to have a high impact on behaviour manifestation.
Considering the gender factor, gender-based genetic variation is a notable aspect. The sex chromosomes (X and Y) carry some genes that are not present in the opposite gender, leading to sex-specific genetic effects. For instance, the monoamine oxidase A (MAOA) gene, located on the X chromosome, is known to influence behavioural traits such as aggression and impulsivity, and it affects males and females differently; namely, greater risky behaviour is found in males [67]. Gender differences in hormone levels can also affect the expression and functioning of core genes and thus behaviour. For instance, the stress response, which is influenced by variations in the FKBP5, CRHR1, and NR3C1 genes, can vary between males and females due to differences in the regulation of cortisol, a hormone that plays a key role in the stress response [68,69]. Variants of the COMT gene, involved in the metabolism of catecholamine neurotransmitters, have been implicated in impulsivity; however, it has been suggested that the impact of COMT variants on impulsivity may differ between genders, warranting further investigation into gender-specific effects [70]. It is important to note, though, that gender differences in trait manifestation do not change the underlying role of the core genes involved in particular traits.
In terms of ethnicity, given that core genes have a direct influence on the corresponding biological processes, the effect of a specific genomic variation within these genes tends to be universal, irrespective of an individual's ethnic group. While the impact of a specific variant remains constant across ethnic groups, the frequency of these variants can differ significantly among populations. Genetic diversity and the unique evolutionary histories of different populations can contribute to substantial variation in the frequency of these variants without altering the effect of the specific variant. A clear example is the distinct frequencies of the previously described variants associated with alcohol dependence, rs1229984 in the ADH1B gene and rs671 in the ALDH2 gene, observed in East Asia compared to other populations worldwide [30,31].
Conclusions
The omnigenic model and the selection of core genes with clear biological pathways present a promising approach to studying complex traits. Integrating core genes with other behaviour datasets offers a more precise approach that avoids genome variation with very small effect sizes. This could help to reduce statistical noise and the wide range of statistical methodologies used, thereby paving the way for the standardisation of complex trait analysis.
However, it is crucial to note that numerous studies have demonstrated that the majority of the genetic variation associated with complex traits is located in non-coding DNA regions and in genes that are not considered core genes. This may be because the associated variants are proximal to core genes or lie in genes with a direct pathway to the main core genes [71-73]. For example, one of the strongest associations between impulsivity and genetic variants is in the CADM2 gene, which mediates synaptic plasticity. The gene is co-expressed with HTR2A and GABRA2, both of which are also implicated in impulsivity. All three genes are involved in neurological processes, with HTR2A and GABRA2 playing direct roles in neurotransmitter signalling pathways. Specifically, HTR2A is a key component of the serotonin signalling pathway, while GABRA2 is integral to the γ-aminobutyric acid (GABA) signalling pathway. This co-expression and shared involvement in neurotransmission suggest potential synergistic roles in the modulation of impulsive behaviours [74-76].
Direct pathogenic variants in core genes predisposing to complex traits are relatively rare, as important protein-coding sequences tend to be conserved. However, when such variants do occur in these genes, they can have a significant effect on trait manifestation. For instance, variants of the vasopressin receptor gene AVPR1A are associated with autism spectrum disorder [77]. In such cases, identifying core genes and genetic variants can contribute to understanding different conditions and disorders as well as guide the development of personalised interventions and treatments. Because such variants interfere drastically with traits, the majority of the genetic variation associated with a particular trait is not found directly within the core gene but rather in genes (and their variants) that are interlinked with it. Publications on the omnigenic model have demonstrated that the variants identified in GWAS studies with the most significant p values tend to be located in the proximity of core genes.
Figure 1. The diagram illustrates the interplay of genetic and environmental factors in alcohol dependence. "Core Genes" represent the primary genes associated with alcohol consumption, while "Core Genes Main Variation" displays the main genome variants of the ADH1B and ALDH2 genes and their allele distribution in different populations. "Peripheral Genes" and "Peripheral Genes Variation" indicate the multitude of genes that influence this complex trait, albeit with a lesser impact. Black arrows indicate interactions between different elements or factors. The green arrows point to factors associated with a decreased risk of alcohol dependence. The red arrows highlight factors that increase the risk of alcohol dependence.
Table 1.
Core genes and variants associated with behaviour traits. | 5,474 | 2023-08-01T00:00:00.000 | [
"Biology"
] |
Optical Potentials: Microscopic vs. Phenomenological Approaches
In this work we study the performance of our microscopic optical potential [1, 2], derived from nucleon-nucleon chiral potentials at fifth order (N 4 LO), in comparison with that of a successful non-relativistic phenomenological optical potential in the description of elastic proton scattering data on tin and lead isotopes at energies around and above 200 MeV. Our results indicate that microscopic optical potentials derived from nucleon-nucleon chiral potentials at N 4 LO can provide reliable predictions for observables of stable and exotic nuclei, even at energies where the robustness of the chiral expansion starts to be questionable.
Introduction
In Ref. [1] we constructed a microscopic optical potential (OP) for elastic proton-nucleus (pA) scattering starting from nucleon-nucleon (NN) chiral potentials derived up to N 3 LO in the chiral expansion and we studied the chiral convergence of the NN potential in reproducing the pA scattering observables. The OP was obtained at the first-order term within the spectator expansion of the nonrelativistic multiple scattering theory and adopting the impulse approximation and the optimum factorization approximation.
In a subsequent work [2] we adopted the same model to obtain the OP and we studied the chiral convergence of a new generation of NN chiral interactions derived up to N 4 LO.
In this contribution we show a sample of our latest work [3] about the predictive power of our microscopic OP derived in Ref. [2] from different chiral potentials at N 4 LO and of the successful phenomenological OP (KD) of Ref. [12] in comparison with available data for the observables of elastic proton scattering on different isotopic chains. Results are presented for several proton energies around and above 200 MeV, with the aim to test the upper energy limit of applicability of our OP before the chiral expansion scheme breaks down.
Theory
The theoretical justification for the description of the pA optical potential in terms of the microscopical NN interaction has been addressed for the first time by Watson et al. [4] and then formalized by Kerman et al. (KMT) [5], where the so-called multiple scattering approach to the pA optical potential is expressed by a series expansion of the free NN scattering amplitudes.
In Refs. [1,2] a microscopic OP V opt was obtained at the first-order term within the spectator expansion of the nonrelativistic multiple scattering theory, corresponding to the single-scattering approximation. The impulse approximation was adopted, where nuclear binding forces on the interacting target nucleon are neglected, as well as the optimum factorization approximation, where the two basic ingredients of the calculations, i.e. the nuclear density and the NN t matrix, are factorized. We refer the reader to Refs. [1,2] for all relevant details and an exhaustive bibliography. In momentum space, the factorized V opt takes the schematic form V opt (q, K; ω) ≃ Σ_{N=p,n} t pN (q, K; ω) ρ N (q), where t pN represents the proton-proton (pp) and proton-neutron (pn) free t matrix evaluated at a fixed energy ω, ρ N the neutron and proton profile density, and the incoming and outgoing projectile momenta k and k′ are conveniently expressed by the variables q ≡ k′ − k and K ≡ (k + k′)/2 (see Sect. II of Ref. [1]). For the neutron and proton densities of the target nucleus we use, as in Refs. [1,2], a Relativistic Mean-Field (RMF) description [6], which has been quite successful in the description of ground-state and excited-state properties of finite nuclei, in particular in its Density Dependent Meson Exchange (DDME) version, where the couplings between mesonic and baryonic fields are assumed to be functions of the density itself [7].
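As a purely numerical illustration of this factorized form, the sketch below evaluates the sum of t-matrix-times-density terms on a grid of momentum transfers q at fixed K and fixed energy. The Gaussian amplitudes and density profiles, and all parameter values, are invented stand-ins and do not correspond to the chiral t matrices or RMF densities used in Refs. [1,2].

import numpy as np

# Toy stand-ins: Gaussian NN amplitudes t_pN(q) at fixed energy and fixed K,
# and Gaussian nuclear form factors rho_N(q). Units and parameters are illustrative.
def t_pp(q):
    return -1.0 * np.exp(-(q / 2.0) ** 2)       # proton-proton amplitude (arbitrary units)

def t_pn(q):
    return -1.3 * np.exp(-(q / 1.8) ** 2)       # proton-neutron amplitude (arbitrary units)

def rho_p(q, Z=8):
    return Z * np.exp(-((q * 2.7) ** 2) / 6.0)  # proton density profile, rho_p(0) = Z

def rho_n(q, N=8):
    return N * np.exp(-((q * 2.7) ** 2) / 6.0)  # neutron density profile, rho_n(0) = N

q_grid = np.linspace(0.0, 3.0, 7)               # momentum transfer grid (fm^-1)

# Optimum-factorization structure: V_opt(q) ~ t_pp(q) rho_p(q) + t_pn(q) rho_n(q)
v_opt = t_pp(q_grid) * rho_p(q_grid) + t_pn(q_grid) * rho_n(q_grid)

for q, v in zip(q_grid, v_opt):
    print(f"q = {q:4.2f} fm^-1   V_opt = {v:8.3f}")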
For the NN interaction we use here two different versions of the chiral potentials at fifth order (N 4 LO) recently derived by Epelbaum, Krebs, and Meißner (EKM) [8,9] and by Entem, Machleidt, and Nosyk (EMN) [10,11]. As explained in Ref. [2], the two versions of the chiral N 4 LO potentials differ significantly in their renormalization procedures, and we follow the same prescriptions adopted there. The strategy followed for the EKM potentials [8,9] consists in a coordinate-space regularization of the long-range contributions V long (r), through the introduction of a regulator function f(r), and a conventional momentum-space regularization of the contact (short-range) terms, with a cutoff Λ = 2R −1. Five choices of R are available: 0.8, 0.9, 1.0, 1.1, and 1.2 fm, leading to five different potentials.
On the other hand, for the EMN potentials a slightly more conventional approach was pursued [10,11]. A spectral function regularization, with a cutoff Λ of 700 MeV, was employed to regularize the loop contributions, and a conventional regulator function, with Λ = 450, 500, and 550 MeV, to deal with divergences in the Lippmann-Schwinger equation. For all details we refer the reader to Ref. [2].
Since the goal of the present work is to test the predictive power of our microscopic OP in comparison with available experimental data, it is definitely interesting to show the uncertainties on the predictions produced by NN chiral potentials obtained with different values of the regularization parameters. For this purpose, all calculations have been performed with three of the EKM [8,9] potentials, corresponding to R = 0.8, 0.9, and 1.0 fm, and with two of the EMN [10,11] potentials, corresponding to Λ = 500 and 550 MeV.
In all the next figures the bands give the differences produced by changing R for EKM (red bands) and Λ for EMN (green bands). Thus the bands have here a different meaning than in Ref. [2], where the EKM and EMN NN chiral potentials at N 4 LO were also used. The aim of Ref. [2] was to investigate the convergence and to assess the theoretical errors associated with the truncation of the chiral expansion and the bands were given to investigate these issues.
We also showed in Ref. [2] that EKM calculations based on different values of R are quite close and consistent with each other (although, as remarked in Ref. [8], larger values of R are probably less accurate due to a larger influence of cutoff artifacts). The same assumption can be made about the EMN potentials: changing the cutoffs does not lead to sizeable differences in the χ 2 /datum (see Tab.VIII in Ref. [11]) and it is safe to perform calculations with only two potentials. Because we want to explore elastic scattering at energies around and above 200 MeV, we exclude the EKM potentials with R = 1.1 and 1.2 fm and the EMN potential with Λ = 450 MeV.
In Ref. [1] we compared the results obtained with different versions of EKM and EMN chiral potentials at N 4 LO for the pp and pn Wolfenstein amplitudes and for the scattering observables of elastic proton scattering off 12 C, 16 O, and 40 Ca nuclei at an incident proton energy of E = 200 MeV.
We show in Fig. 1 the ratio of the differential cross section to the Rutherford cross section for elastic proton scattering off 16 O at E = 200 MeV. The results obtained with the EKM and EMN potentials and with the KD optical potential are compared with the experimental data taken from Refs. [13,14]. The EKM and EMN results for the differential cross section give a reasonable, although not perfect, agreement with data. The experimental ratio is slightly overestimated at lower angles and somewhat underestimated for θ ≥ 50°. The differences between the EKM and EMN results are small and not crucial; EKM gives a smaller cross section around the maxima and therefore a somewhat better agreement with the data in this region. The bands, representing the uncertainties on the regularization of the NN chiral potentials, are generally small and not influential for the comparison with data. The KD result gives a good description of the experimental cross section for θ ≤ 20° and underpredicts the data for larger angles.
The results for Sn isotopes at 295 MeV and for 120 Sn at 200 MeV are displayed in the upper panel of Fig. 2. In this case all the OPs give qualitatively similar results and a reasonable agreement with data, in particular for θ ≤ 20°. The agreement generally declines for larger angles. KD gives a better description of the 120 Sn data at 200 MeV, where the EKM and EMN results are a bit larger than the data at the maxima and a bit lower at the minima. We note that 120 Sn is included in the experimental database for the KD potential for proton energies up to 160 MeV. At 295 MeV, the microscopic OP gives, in general, a slightly better agreement with the data than KD for all the tin isotopes shown in the figure.
The results for Pb isotopes at 295 MeV and for 208 Pb at 200 MeV are displayed in the lower panel of Fig. 2. Also in this case the experimental cross section at 200 MeV is well described by KD; the agreement is better than with the microscopic OP. The experimental database for KD includes 208 Pb up to 200 MeV. At 295 MeV a better agreement with data is generally given by the EKM and EMN results, in particular by EMN: for all three isotopes considered, the two results practically overlap for θ ≤ 20°, where they are also very close to the KD result; then they start to separate, and the EMN result is a bit larger than the EKM one and in better agreement with data. We point out that the uncertainty bands, which are generally narrow, in this case become larger with increasing scattering angle, where the agreement with data also declines.
The results shown so far indicate that, in comparison with the phenomenological KD potential, our microscopic OP, in spite of the approximations made to derive it, has a comparable and in some cases even better predictive power in describing the cross sections for the isotopic chains and the energy range considered here. KD gives a better, and indeed excellent, description of the data in specific situations, in particular for data included in the experimental database used to fit the parameters of the phenomenological KD potential and at the lower energies considered. Our microscopic OP gives a similar and more homogeneous description of the data for all the nuclei of an isotopic chain and, for energies above 200 MeV, it generally gives a better agreement with data than the phenomenological KD potential. | 2,444.4 | 2019-05-01T00:00:00.000 | [
"Physics"
] |
Prosocial behavior among human workers in robot-augmented production teams—A field-in-the-lab experiment
of their production output as well as their pro-sociality among each other. Learning factories are learning, teaching, and research environments in engineering university departments. These factory environments allow control over the production environment and incentives for participants. Results: Our experiment suggests that the robot’s presence increases sharing behavior among human workers, but there is no evidence that rewards earned from production are valued differently. Discussion: We discuss the implications of this approach for future studies on human-machine interaction.
Introduction
Human-machine interaction is becoming increasingly relevant in production environments across industries (Graetz and Michaels, 2018; Cheng et al., 2019). Thus, the adoption and use of computer and robotic technology results in, at times drastic, changes to employees' work environments. In the context of manufacturing, automation and artificial intelligence (AI), in combination with improved sensors, allow so-called cobots (collaborative robots) to work closely and safely alongside humans (International Federation of Robotics, 2018). Such hybrid human-robot teams are relevant in the production workforce as they allow for improved efficiency and flexibility compared to fully automated or manually operated setups. Robots can work tirelessly, but changes in product design or in the workflows of a production line can still most easily be adapted to by humans (see, e.g., Simões et al., 2022, for a review on creating shared human-robot workspaces for flexible production). Beyond production, hybrid human-robot teams are relevant in the fields of medicine, service, and logistics, where they assist in surgeries, patient care, customer service, and warehouse operations (Hornecker et al., 2020; Beuss et al., 2021; Carros et al., 2022; Burtch et al., 2023; CBS News, 2023). On the one hand, this raises a lot of interest in how humans and robots will work together effectively (Corgnet et al., 2019; Haesevoets et al., 2021) and in which features of the robots affect the human workers' perceptions of them (Terzioglu et al., 2020). On the other hand, despite human-machine interaction being an important subject for practitioners and researchers alike, it remains to be determined how robots in hybrid human-robot teams affect human-human interaction. This is particularly relevant in the work context, because work is known to create strong social incentives (Besley and Ghatak, 2018) and norms (Danilov and Sliwka, 2017), and can serve as a socialization device (Ramalingam and Rauh, 2010).
The study of human-machine interaction in work environments has garnered increasing attention in recent years (Jussupow et al., 2020; Chugunova and Sele, 2022), focusing on the role of incentives (Corgnet et al., 2019), team interaction (Corgnet et al., 2019), and shared responsibility (Kirchkamp and Strobel, 2019). However, economic research with a more specific focus on robotics is relatively scarce. We see two main reasons for this scarcity. Firstly, there is an assumption that human-machine interaction is universal, in the sense that behavioral phenomena in human-computer interaction carry over to human-robot interactions, or that attitudes toward robots elicited in surveys are meaningful when it comes to actual decisions. There is little evidence for tests of this assumption. Secondly, while robotics technology has been around for decades in industry, controlled environments for experimental research have thus far not been available to behavioral researchers. Using field-in-the-lab experiments in learning factories (Kandler et al., 2021; Ströhlein et al., 2022) offers a promising experimental paradigm for this line of research.
An important question that can be investigated in this experimental paradigm is how prosociality between human coworkers, central to productivity and efficiency in firms (Besley and Ghatak, 2018), changes after introducing robots to the workplace. With an ever-changing work environment, it is increasingly vital for individuals to be adaptable and to learn new skills quickly to stay competitive and meet the changing needs of their organizations. To a considerable extent, workers can do so by sharing skills and knowledge with their coworkers. Maintaining prosocial interaction while increasing the share of robots in production environments is thus essential but also demanding for organizations. We therefore investigate whether robotic team members affect the prosocial behavior among their human coworkers.
Another aspect that the introduction of robots could change, together with the work context, is the meaningfulness of the work carried out (Cassar and Meier, 2018). If workers feel that they have no impact on the eventual team output, they might perceive the resulting income to be less valuable, which could, in turn, lead to a higher willingness to share it with others (Erkal et al., 2011; Gee et al., 2017). We want to test whether we can observe a reduction in people's valuation of their produced output, depending on whether they work in a hybrid human-robot or a pure human-human team.
We report evidence from a field-in-the-lab experiment, i.e., a controlled, incentivized experiment in a lab-like environment that contains essential elements from the field (Kandler et al., 2021). This setup allows studying the effects of robotics on human-human interaction in an environment that closely parallels natural production environments: a learning factory. In our experiment, two human participants operated two production stations at the beginning and end of a three-station production line to produce electronic motor components. The middle station was either operated by two robots or by a "transfer station" that performed the same steps but with the robots switched off and hidden. For each component, the human participants received a team piece rate. In addition to that monetary payment, they could earn a chocolate bar, i.e., a material, non-monetary incentive, if they individually completed their production step at least five times. After the production round, we elicited participants' willingness to accept (WTA) for selling this material, non-monetary part of their payoff, and they engaged in a bully game (see, e.g., Krupka and Weber, 2013). The WTA for the non-monetary part of their earnings allows us to test whether rewards earned in hybrid human-robot teams are valued less than in purely human production teams, whereas the bully game allows us to measure prosocial behavior across our treatments.
We find suggestive evidence that humans in hybrid human-robot teams are more prosocial toward each other when compared to the humans in pure human-human teams. Qualitatively, participants in our sample had a lower valuation for the material, non-monetary part of their earnings when they were part of a hybrid human-robot team compared to those in a pure human-human team. Still, this difference is not statistically significant and thus not the mechanism driving the greater extent of prosocial behavior. Investigating a range of controls elicited in the post-experimental survey, it seems that human workers shifted responsibility. However, rather than shifting it to the robot, they instead shifted responsibility away from the robot, allocating relatively higher responsibility to themselves and the other human participant.
There is ample evidence that joint work on tasks creates more prosociality (Allport et al., 1954; Chen and Li, 2009; Stagnaro et al., 2017; Lowe, 2021). In contrast, introducing robotics into production lines can decrease the number of work interactions between workers and reduce the feeling of working together toward a common goal (Savela et al., 2021). Organizations must consider how to integrate these technologies into their production processes optimally. Our study is a first step to inform this consideration, focusing on the changing human-human interaction in such environments. In addition to demonstrating the feasibility of running lab-like experiments with state-of-the-art production robotics in learning factories, our primary goal is to understand whether the prosocial behavior between human workers changes when a robot is in the team. Our design allows us to test whether any such change is due to a changed valuation of the income earned, either with or without the external help of a robot. As a secondary and more exploratory objective, we want to understand how the robots' team membership changes human workers' attitudes toward technology and each other.
One key advantage of our methodology is the clarity of what the treatment is. An important design choice in experiments using virtual automated agents is how these are framed. The use of different frames to refer to automated agents can be problematic as it can trigger different concepts of the "machine" that participants are interacting with. For example, the use of the term "AI" (von Schenk et al., 2022) or "algorithm" (Dietvorst et al., 2015, 2018; Klockmann et al., 2022) can lead to participants having higher expectations of the machine's capabilities when compared to the use of terms like "computer" (Kirchkamp and Strobel, 2019) or "robot" (Veiga and Vorsatz, 2010). Yet, it is technically not always clear which term to use for the programmed automated agent. This can lead to different outcomes in the experiments, as participants may interact with these agents differently, depending on the framing (see, e.g., Hertz and Wiese, 2019, for the difference between "computer" and "robot"). The different cognitive concepts induced by the differences in the terminology could partly explain why the experimental evidence on human-machine interaction is still largely mixed (Jussupow et al., 2020; Chugunova and Sele, 2022). Our methodology allows us to avoid this ambiguity, as the robots are visible, and the interaction with them is experienced beyond simply observing the outcome of their work.
Our paper broadly relates to three strands of the literature: (i) human-machine or human-computer interaction, (ii) prosocial behavior with a specific focus on fair sharing, and, as we investigate the participants' valuation of their income, (iii) deservingness and the meaningfulness of work.
Research on human-machine interaction (Fried et al., 1972, using the antiquated term "Man-Machine Interaction") and human-computer interaction (Carlisle, 1976) dates back to the 1970s. It has since largely focused on how the interfaces for these interactions affect the users' acceptance and ease of using them (Chin et al., 1988; Hoc, 2000). Due to an ever-increasing degree of computerization, automation, and robotization, the topic has attracted cognitive psychologists (Cross and Ramsey, 2021) and economists (Corgnet et al., 2019) alike. Jussupow et al. (2020) and Chugunova and Sele (2022) provide excellent literature surveys on the more recent studies within the social science methodological framework. Studies that have received particular attention are those relating to the phenomena of algorithm aversion (Dietvorst et al., 2015, 2018; Dietvorst and Bharti, 2019) and algorithm appreciation (Logg et al., 2019). The aforementioned literature surveys suggest that aversion is more pronounced in moral and social domains, whereas appreciation (and lower aversion) is more likely to be found when people have some degree of control over the automated agent. Savela et al. (2021) report evidence from a vignette study suggesting that humans in mixed human-robot teams have a lower in-group identification than those in purely human teams. Similarly, in another vignette study on service failures taken care of by either humans or robots, Leo and Huh (2020) report evidence suggesting that people attribute less responsibility toward the robot than the human because people perceive robots to have less control over the task. In the context of machine-mediated communication, Hohenstein and Jung (2020) show that when communication is unsuccessful in such situations, the AI is blamed for being coercive in the communication process. Thus, it functions like a moral crumple zone, i.e., other humans in the communication process are assigned less responsibility.
See March ( ) for a review of experiments using computer players.
Besides the mere focus of our study on human-machine interaction, we also want to investigate how the presence of robots affects the participants' prosocial behavior, in particular, sharing. A well-established economic paradigm for these behaviors is the dictator game (Güth et al., 1982; Kahneman et al., 1986; Forsythe et al., 1994). A participant is in the role of the dictator and can share a fixed endowment between themselves and a passive receiver. A particular variant of the dictator game is the bully game (Krupka and Weber, 2013), in which both the dictator and the receiver are equipped with an initial endowment. Beyond splitting their own endowment between themselves and the receiver, in this variant, dictators can even take parts of the receivers' endowments, allowing us also to measure spiteful behavior (Liebrand and Van Run, 1985; Kimbrough and Reiss, 2012; Ayaita and Pull, 2022).
The closest study to ours is Corgnet et al. (2019), which, among other aspects, also analyzes how prosocial motives change in hybrid teams compared to traditional human work teams. They report evidence from a computerized lab experiment in which participants need to fill out matrices with patterns of three distinct colors. They either form a team consisting of three human players or two human players and a "robot." Each team member has one specific color they can apply to the matrix, so teams need to work together to complete the task. They find lower performance in mixed teams with a robot than in purely human teams and explain this with a lack of altruism toward the robot, leading to a lower social incentive to be productive on behalf of the team. Our design builds on this setup but instead uses the production round as a pre-treatment before the elicitation of prosocial behavior and the participant's valuation of their earned reward. The experiment in Corgnet et al. (2019) was conducted in French, where robot can either be a wild card for various types of machines (e.g., web crawler translates to robot de l'indexation) or the translation of l'ordinateur, which can also be translated to computer. Nonetheless, even in other computerized studies run in English, the term robot is frequently used in instructions (e.g., Brewer et al., 2002; Veiga and Vorsatz, 2010). Calling a computer player a "robot" (or likewise an "algorithm," "computer," or "automated system") is somewhat arbitrary. Our setting uses actual production robots visible to the participants, allowing us to use the term "robot" with much less ambiguity.
Another advantage of our approach is that it is a relatively meaningful task that participants engage in. Abstracting from more complex interactions in the workplace over prolonged periods of time, this parallels the nature of actual work, which is a source of meaning (Cassar and Meier, 2018). Compared to abstract real-effort tasks, this is particularly pronounced in jobs and tasks that produce a tangible output (Ariely et al., 2008; Nikolova and Cnossen, 2020). When a robot assists humans in this meaningful production, it could reduce the relative meaning of each worker's contribution to the overall output. If workers value their income relatively less in hybrid teams, they might be more willing to share parts of it with others. A piece of evidence that supports this is provided by Gee et al. (2017), who suggest that an increase in inequality has less impact on redistribution choices when income is earned through performance than through luck. Erkal et al. (2011) investigate the relationship between relative earnings and giving in a two-stage, real-effort experiment. They provide evidence that relative earnings can influence giving behavior and that this effect can be reduced by randomly determining earnings. Again, the degree to which earnings are generated through external factors influences the degree to which participants tend to give larger parts of their endowment away. More broadly, this raises the question of whether an endowment entirely earned through performance is valued more highly, as it is more meaningful to workers when compared to an income that is (at least partially) obtained with the help of an external factor, such as luck or a robot helping to generate the income.
The remainder of this paper is structured as follows. In Section 2, we briefly explain our general field-in-the-lab approach and how it is specifically conducive to research on human-machine and human-human interaction in the presence of machines. Section 3 describes our experimental design, the main variables of interest, as well as our hypotheses. We present our results in Section 4 and discuss them together with an interesting exploratory finding in Section 5. Section 6 concludes.
2. Field-in-the-lab methodology for behavioral human-robot research
As the experiment was conducted in a non-standard environment, i.e., neither a computerized lab experiment nor an online experiment administered solely through the browser, we briefly describe the learning factory environment where we ran the experiment and the advantages this environment has for research on human behavior when collaborating in hybrid human-robotic teams. The more general approach is described in Kandler et al. (2021).
The field-in-the-lab approach is an experimental method to create real-world settings in controlled environments that mimic the field. Kandler et al. (2021) suggest that so-called learning factories are ideal for running such studies. They are intended to teach students about the possibilities of production setups, lean management approaches, and the capabilities of digitization technologies in realistic factory settings (Abele et al., 2015). Typically, these factories have a topical focus in the sense that a specific product in a particular industry can be produced. Still, they are also designed to be malleable in the direction of the respective training courses convened. In the case of our study, the learning factory offered a line production of up to 10 production stations with a modular setup, i.e., individual stations could be replaced, moved, or left out of the production line. This allows building a layout tailored toward anonymity (by using visual covers and placing stations for humans apart from each other) and toward the concrete research question (by cutting out three stations of the entire line for the experiment and, depending on treatment, replacing one station with robots). Combined with data recording developed in oTree (Chen et al., 2016) or other input methods, this allows a methodologically clean experimental setup. As such, learning factories allow experimental economists to observe and measure the causal impact of various factors, such as the introduction of robotics, on human-human interactions and how this affects social incentives in the workplace. In addition, they allow us to assess the entire range of more traditional economic questions, such as the impact of different types of incentives and how they affect human behavior within the context of hybrid human-robotic teams. Such analyses are typically hard to conduct with happenstance or other observational data because this data type is often unavailable and lacks precise measures of performance and social interaction.
Note that this is different from the lab-in-the-field approach (Gneezy and Imas) or artifactual field experiments (Harrison and List), which refer to experiments that are lab-like but use a non-standard subject pool. In contrast, field-in-the-lab experiments (Kandler et al., 2021) use standard subject pools in malleable field-like environments like learning factories.
Though laboratory experiments always contain a degree of abstraction conducive to testing hypotheses clearly and unambiguously, the research on human interactions with algorithms, computers, AI, or robotics and the interaction of humans among themselves in the presence of such technologies faces a central challenge. It is unclear whether lab participants understand the same thing if words like "algorithms," "computers," "AI," or "robots" are used in written instructions. In our approach, there is no ambiguity about the concept of a robot because it is visible, and participants can experience what it does and how exactly its actions affect the team outcome. Thus, besides recreating a setup that resembles real factories and production lines more closely, focusing on the specific aspects relevant to the research question is only one advantage of the field-in-the-lab approach. These infrastructures are available in many universities across the globe (Abele et al., 2015). They offer an opportunity for interdisciplinary research into human-machine and human-human interaction in the presence of machines with industry-standard robotics while maintaining substantial experimental control. Finally, the work in the learning factory produces a tangible and potentially meaningful product.
3. Experimental design
We begin by describing the production task. Then, we introduce our two treatments and subsequently describe the stages and procedures of the experiment.
3.1. The task and the flow of production
In every session, each participant was either in the role of Worker 1 or Worker 2. Their task was to produce a component for an electronic motor (see Figure 1). Motors of this type are typically used in cars for various purposes, such as window lifters, seat adjusters, or automated boot lids.
For each production step, Worker 1 used a station with a press (from hereon Station 1) to press two clips (Figure 2A) and two magnets (Figure 2B) into one pole housing (Figure 2C). Worker 1 was equipped with a sufficient supply of clips, magnets, and pole housings at Station 1. Worker 1's production step involved placing the clips and magnets into the designated, so-called "nests" of the press, placing the pole housing on top with the opening facing down, and operating the lever of the station's press to join the individual parts.
After completing this production step, Worker 1 placed the resulting intermediate product onto a conveyor belt to hand it to Station TR (transfer or robot station).
At Station TR, an armature shaft with a ring magnet (Figure 3A) was placed into the prepared pole housing, and a brush holder (Figure 3B) was put on top, closing the pole housing and keeping the armature shaft in place.
Worker 2 at Station 2 took the resulting intermediate product after this step and screwed a worm gear (Figure 4A) onto the thread of the armature shaft. Subsequently, Worker 2 put the gearbox (Figure 4B) onto the pole housing and fastened it with two screws. Like Worker 1, Worker 2 was equipped with sufficient wrought parts (worm gears, gearboxes, and screws) to be able to produce throughout the production round.
From hereon, we will refer to a complete component as a final product. Once this final product was produced, Worker 2 placed it into a plastic box. The box, in turn, needed to be placed into a shelf with slides, where it was counted toward final production.
Note that for both workers, it was hardly possible to hand in intermediate or final products of bad quality. Bad quality and non-completion (i.e., simply handing the raw/input materials into the shelves) were almost indistinguishable. As such, we only counted pieces of good quality, as incomplete intermediate products could not be processed any further. Bad quality essentially never occurred in the production round.
3.2. Treatments
The production flow can be seen in Figures 5A, B. Station TR was automated in both treatments, but the treatments differed in the degree of automation and the visibility of the robots. For the sake of exposition, we introduce the Robot treatment first before describing the control group (NoRobot).
In the Robot treatment, shown in Figure 5A, Station TR consisted of two KUKA KR 6 R900 robots (see Supplementary Figure 2 in Appendix E for a 3D model). These robots are pick-and-place robots that are equipped with light barriers as sensors for incoming intermediate products. Worker 1 placed each intermediate product on the conveyor belt after producing it. The conveyor belt transported the intermediate product to Robot 1. This robot placed an armature shaft (Figure 3A) with a mounted ring magnet (Figure 2B) in the intermediate product. It then automatically traveled to the next robot, which mounted the brush holder (Figure 3B) on the intermediate product. The robot then released the resulting intermediate product onto the conveyor belt, transporting it to Experimenter 2. Experimenter 2, after having picked it up from the conveyor belt, immediately placed the intermediate product into the shelf to the left of Worker 2. Processing at Station TR took 54 s for an intermediate product produced by Worker 1 before it arrived in the shelf to the left of Worker 2. Both workers could see the robots and the conveyor belt. However, they could not see each other or any of the experimenters.
In the NoRobot treatment, shown in Figure 5B, the robots were switched off and surrounded by partition walls and thus were not visible to the participants. The conveyor belt operated outside these partition walls. It transported the intermediate product from Experimenter 1 to Experimenter 2. Thus, Station TR was still a (partially) automated station. Worker 1 placed each intermediate product on the conveyor belt after producing it. After Experimenter 2 picked up the intermediate product from the conveyor belt, a timer was started, and the intermediate product was processed by Experimenter 2 for Worker 2. When the timer showed 28 s, Experimenter 2 placed the intermediate product into the shelf to the left of Worker 2. Added to the time of the conveyor belt (26 s), this was the time the robots in the Robot treatment needed for their production step, which led to the same time gap between the completion of a work step of the participant at Station 1 and the availability of the intermediate product to Worker 2. Worker 2 started their 10 min with a 90-s offset to Worker 1.
Note that these robots, though resembling a human arm, have not been further anthropomorphized, which is known to improve the human perception of robots (Terzioglu et al.).
Due to the setup in the learning factory, we had to implement the layout such that Worker 1 had the robots in their peripheral view throughout, whereas Worker 2 would only see them if they turned, e.g., for picking up a part from the shelf to their left.
Put differently, we did not deceive participants by using the robots in both treatments at Station TR.
The raw time of the conveyor belt to transport the intermediate product from experimenter to experimenter is 26 s, thus adding up to 54 s with Experimenter 2's 28-s timer.
Note that in the NoRobot treatment, we told participants in the role of Worker 1 that we would provide Worker 2 with an intermediate product that is equipped with an armature shaft and a brush holder 54 s after they turn in their intermediate product. We neither told them that this was done by Experimenter 2 nor did we claim that it is the identical intermediate product. This allowed us to also use pre-mounted intermediate products for Worker 2 in this treatment.
3.3. Stages
The experiment comprised four stages: Stage 1 consisted of instructions and a practice round, Stage 2 was the 10-min production round, Stage 3 consisted of three decision tasks, and Stage 4 was the post-experimental survey. In the following, we describe each stage in more detail.
In Stage 1, participants were brought to their stations and received general instructions about the experiment and specific instructions concerning the production step and their stations on a tablet computer. They also received instructions on how to handle their station. These instructions included animated GIF images. Subsequently, participants completed a practice round in which they had 3 min to produce a maximum of two intermediate products (Worker 1) or final products (Worker 2) with their station. Worker 2 was provided with two pre-mounted intermediate products to do so. Throughout the practice round, all instructions regarding the respective station and production steps were visible on the practice round screen. There were no incentives in this stage. After this practice round, participants had to answer a series of control questions before entering the production round, in which they carried out their production steps for the rewards.
Stage 2 consisted of the 10-min production round. We described the task and flow of production in the previous subsection. After 10 min, the production round ended, and Stage 2 concluded. Each participant received 65 ECU (corresponding to €0.65 at the end of the experiment) for each final product produced by their team and a chocolate bar if they individually produced more than five intermediate products (Worker 1) or final products (Worker 2). Thus, both workers earned the same monetary reward from the production round but earned the chocolate bar individually. We did so to avoid losing both observations for the ensuing Becker-DeGroot-Marschak mechanism (Becker et al., 1964, from hereon BDM) because of a single slow worker.
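The production-round incentive scheme described above can be summarized in a few lines. The following is a minimal sketch assuming the stated parameters (65 ECU team piece rate, chocolate bar for more than five individually completed steps, 1 ECU = €0.01); the function and variable names are ours, not part of the original experimental software.

```python
# Sketch of the Stage-2 incentive scheme described above (names are ours, not the authors').
PIECE_RATE_ECU = 65          # team piece rate per final product
CHOCOLATE_THRESHOLD = 5      # individual steps needed to earn the chocolate bar
ECU_TO_EUR = 0.01            # exchange rate stated in the procedures

def production_payoff(team_final_products: int, individual_steps: int):
    """Return (monetary payoff in ECU, whether the chocolate bar is earned)."""
    monetary_ecu = PIECE_RATE_ECU * team_final_products   # identical for both workers
    earns_chocolate = individual_steps > CHOCOLATE_THRESHOLD
    return monetary_ecu, earns_chocolate

# Hypothetical example: a team produced 12 final products; this worker completed 13 steps.
ecu, chocolate = production_payoff(team_final_products=12, individual_steps=13)
print(f"{ecu} ECU = {ecu * ECU_TO_EUR:.2f} EUR, chocolate bar earned: {chocolate}")
```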
For Stage 3, participants were brought to a table to allow the experimenters to clean and set up the stations for the next participants. In this stage, participants had to complete three decision tasks.
Participants also received a brief description of what the other participant was doing and that the other participant would receive the same incentives.
The static photos used in the figures of this article are identical to the ones used in the instructions.
For the first decision task, the material part of their payment (the chocolate bar) was placed on their table. We used a 100 g milk chocolate bar from a popular brand that cost around €1 at the time of the sessions. Subsequently, we elicited the participants' willingness to accept (WTA) for selling this item back to the experimenter using the BDM. Participants were asked to state a price r in the range of 0 to 200 ECU at which they would be willing to sell the chocolate bar. A random draw p from a uniform distribution between 0 and 200 ECU determined a price. If the participant's reservation price was lower than that draw (r < p), the chocolate bar remained on the table (i.e., was sold back) at the end of the experiment, and the participant received the price p randomly drawn by the computer. If the participant's reservation price was higher than or equal to that draw (r ≥ p), the participant would keep the chocolate bar at the end of the experiment and would not receive any additional ECU from this decision task.
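To make the mechanics of the BDM rule concrete, here is a small simulation of the procedure described above. The payoff rule follows the text; the function name and the integer price grid are our assumptions, not details taken from the original software.

```python
import random

def bdm_outcome(reservation_price: int, rng: random.Random):
    """Simulate one BDM round for the chocolate bar (WTA elicitation)."""
    p = rng.randint(0, 200)               # random price draw between 0 and 200 ECU
    if reservation_price < p:             # r < p: the bar stays on the table (is sold), pay p
        return {"price_draw": p, "keeps_chocolate": False, "extra_ecu": p}
    else:                                 # r >= p: participant keeps the bar, no extra ECU
        return {"price_draw": p, "keeps_chocolate": True, "extra_ecu": 0}

rng = random.Random(42)
print(bdm_outcome(reservation_price=120, rng=rng))
```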
In the second decision task, participants decided whether to give a part of their monetary payment to their human team member or to take some of their team member's monetary payment away. This is the bully variant of the dictator game, as described in Krupka and Weber (2013). Remember that both workers earned the same amount of ECU in the production round within teams. Yet, across production teams, the accumulated amounts of ECU differed as they produced different numbers of final products. Thus, this decision was programmed to be relative to the earned endowment from the production round to make it salient one more time that they had worked toward a common goal in the preceding production round. The highest amount a participant could take away from their human team member was 50% of the earned endowment. The highest amount they could give to their team member was 100% of their earned income from the production round. They could choose any integer value between (and including) those ECU boundaries. At the end of the experiment, one worker's decision was randomly chosen with equal probability to be implemented for payoffs.
A picture can be found in Appendix B, together with the instructions.
In addition to the instructions on the mechanism, participants could also gain an intuition with a virtual tool on the preceding instructions page. They could enter a fictional WTA, and a fictitious price would be randomly drawn from between 0 and 200 ECU. Subsequently, the resulting outcome was described. Participants could do this as often as they liked.
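The admissible choice set in the bully game and its payoff consequences follow directly from the rules above. The sketch below assumes, as stated in the text, that both workers hold the same earned endowment; positive transfers are gifts to the coworker, negative transfers are amounts taken. The function name is ours.

```python
def bully_game_payoffs(endowment_ecu: int, transfer_ecu: int):
    """Payoffs of dictator and receiver for a given integer transfer.

    The transfer must lie between -50% (taking) and +100% (giving)
    of the dictator's earned endowment, as described above.
    """
    lower = -(endowment_ecu // 2)
    upper = endowment_ecu
    if not (lower <= transfer_ecu <= upper):
        raise ValueError(f"transfer must be between {lower} and {upper} ECU")
    dictator = endowment_ecu - transfer_ecu
    receiver = endowment_ecu + transfer_ecu   # both workers earned the same endowment
    return dictator, receiver

# Hypothetical example: each worker earned 780 ECU; the dictator takes 100 ECU.
print(bully_game_payoffs(endowment_ecu=780, transfer_ecu=-100))  # (880, 680)
```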
In the third and final decision task, we elicited social value orientation with the incentivized six-slider task described in Murphy et al. (2011). At the end of the experiment, one of the workers was chosen randomly with equal probabilities, and one of this worker's six sliders was randomly chosen with equal probabilities to be payoff-relevant.
Stage 4 comprised a survey containing questions on context-related attitudes, the allocation of responsibility for the team output, conventional economic attitudes and preferences, and demographics. Participants received 250 ECU for this stage, irrespective of their responses.
3.4. Procedures
The sessions were run at the Learning Factory for Global Production at the Institute of Production Science (wbk) at the Karlsruhe Institute of Technology (KIT). The experiment was conducted in German, where the word Roboter has very similar connotations to the English word robot. The exchange rate was 1 ECU = €0.01. The average session lasted 41 min, and participants earned €13.03 on average (including the flat payment of €2.50 for the survey and the selling price in the BDM if participants sold), plus the chocolate bar for producing more than five units in case participants did not sell it in the BDM. Participants from the KD2Lab pool (KD2Lab, 2023) were recruited via hroot (Bock et al., 2014), and the experimental software was programmed using oTree (Chen et al., 2016). Due to the availability of only one experimental production line, i.e., one station for Workers 1 and 2, respectively, and only one station with KUKA robots, we ran 24 sessions with two human participants each (12 sessions per treatment). This results in a total sample of 48 participants. Throughout the experiment, participants had a table bell they could ring in case they needed assistance or wanted to ask clarifying questions. Upon arrival, participants were immediately led to separate tables (spatially separated and surrounded by visual covers), and eventually, they exited the learning factory through different exits. Thus, our setup did not allow for interaction between workers before, during, or immediately after the experiment other than through the tasks described above.
Given that we used the team production as stakes, this guaranteed that no participant could earn a negative payoff from this task. Similar to the BDM, participants could familiarize themselves with that decision and try different potential amounts. A slider was displayed with the actually possible amounts as the endpoints. For any amount chosen, the consequences of that choice for both participants were shown, assuming that the participant was randomly assigned the role of the dictator. Participants could only make their actual decision after having chosen to leave this page.
All random draws during the experiment were independent of each other.
The experimental instructions for all stages can be found in Appendix B.
No participant failed to cross the threshold of five intermediate products (Worker 1) or final products (Worker 2). This was intentional to obtain sufficient data for our analysis. The experimental software would have skipped the BDM task in that case.
3.5. Hypotheses
As argued above, evidence suggests that the change from purely human to hybrid human-machine teams can influence the social context of human interaction (Corgnet et al., 2019; Savela et al., 2021). Participants in our experiment are not colleagues for a prolonged amount of time. Thus, in line with previous research (Allport et al., 1954; Chen and Li, 2009; Stagnaro et al., 2017; Lowe, 2021), we implemented a production round where they had to work toward a common goal and made team performance salient. In this production round, we administered our treatment. From the perspective of any one of the workers, this also changes the salience of their coworker's human identity. With the robot in the (relative) out-group, the robots' presence in the team could strengthen the human team members' in-group identity (Akerlof and Kranton, 2000; Abbink and Harris, 2019), leading to increased prosociality between the human workers.
Hypothesis 1. The robots' presence in a production line increases the share transferred in the bully game.
Our literature discussion also suggests that when external factors are involved in attaining income, people change their willingness to share with others (Erkal et al., 2011; Gee et al., 2017). Thus, individuals may be more likely to share an income generated with the help of robots in a task. One reason might be that, since individuals do not feel as personally responsible for income generated through such external factors, they do not value it as highly and are thus more willing to share it with others. Our second research question is thus whether a worker's valuation for their production output changes depending on the team context. Work is a source of meaning (Cassar and Meier, 2018). Compared to abstract real-effort tasks, this is particularly pronounced in jobs and tasks that produce a tangible output (Ariely et al., 2008; Nikolova and Cnossen, 2020). We hypothesize that the robot in the team saliently diminishes the relative meaning of each worker's production step to the overall output. Therefore, we implemented a non-monetary part of income that could be earned in the production round to measure a change in participants' income valuation between treatments by eliciting their WTA for this part of their payoff.
The presence of machine players in economic paradigms is also known to result in more rational behavior (March). In our context, that would mean higher amounts taken in the bully game. Yet, the literature reviewed focuses mainly on how humans act toward the machine players and not the human players.
The hypothesis in our preregistration was formulated in terms of the null hypothesis: "The presence of robots in a production line does not influence the share transferred in the bully game".
Hypothesis 2. The robots' presence in a production line reduces the WTA for the individually earned payoff.
The experimental design and the hypotheses were preregistered at aspredicted.org.
We are aware that, beyond the valuation of the income earned, other factors, e.g., attitudes toward technology, experience with the production environment, and demographic factors, may play a role in the prosocial behavior. We thus leave the controls we collected for an exploratory investigation of potential further mechanisms in the discussion.
3.6. Main variables of interest and estimation strategy
Our primary interest is in the behavior in the bully game. We define the variable Share transferred that ranges from −50 to 100% for the amount transferred between the two workers according to the decision of each participant. That is, if a participant decided to take a part of the earnings from the other participant in the team, Share transferred would be negative. In contrast, it is positive if a participant decided to give a part of their own earnings to the other participant in the team. To investigate behavior at the extensive margin, we also create the variable Share categorical that is equal to 3 if Share transferred is strictly positive, equal to 2 if Share transferred is equal to zero, and equal to 1 if Share transferred is strictly negative. The variable Robot is our treatment dummy. It equals one if the participant was in the Robot treatment and zero otherwise. We keep track of Individual production, the number of completed production steps by a participant in either role, and Team production, which is the number of final products produced by the production team. Worker 2 is a dummy equal to one if the participant was in the role of Worker 2 in the production line and zero otherwise, i.e., if the participant was in the role of Worker 1. Production in trial round is the number of work steps completed in the trial round. This number can only range from zero to two, as this was the maximum number of intermediate products (Worker 1) or final products (Worker 2) participants could produce in the 3-min trial round. Our empirical strategy is as follows. For each hypothesis, we first report a two-sided Mann-Whitney U-test to compare the two treatments. We use the variable Share transferred for our first hypothesis on how the robot affects prosocial behavior and WTA for the second hypothesis on how the robot being in the team affects the valuation of the non-monetary part of the payoff.
The hypothesis in our preregistration was formulated in terms of the null hypothesis: "The presence of robots in a production line does not influence the WTA for the individually earned endowment".
The preregistration number is # , and the document can be found at https://aspredicted.org/NVY_CZ.
Since teams have produced different amounts and thus earned different budgets for the bully game, we stated our hypothesis regarding the share that participants transferred rather than absolute amounts. The Amount transferred ranged from − ECU to ECU, and we report regressions using this variable as the dependent variable in a Supplementary Table in Appendix A as a robustness check.
Note that Individual production and Team production are identical for Worker 2 because this worker finished the intermediate products into final products counted toward team production.
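As an illustration of how these variables and the non-parametric comparison could be constructed, the sketch below builds Share transferred and Share categorical from a hypothetical decisions table and runs a two-sided Mann-Whitney U-test between treatments. The column names and data are ours, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical data: one row per participant (not the study's actual data).
df = pd.DataFrame({
    "robot": [1, 1, 0, 0, 1, 0],
    "endowment_ecu": [780, 780, 650, 650, 910, 715],
    "amount_transferred_ecu": [100, 0, -150, 50, 200, -80],
})

# Share transferred in percent of the earned endowment (range -50 to 100).
df["share_transferred"] = 100 * df["amount_transferred_ecu"] / df["endowment_ecu"]

# Share categorical: 3 = gave, 2 = transferred nothing, 1 = took, as defined in the text.
df["share_categorical"] = np.select(
    [df["share_transferred"] > 0, df["share_transferred"] == 0],
    [3, 2],
    default=1,
)

# Two-sided Mann-Whitney U-test comparing Share transferred across treatments.
stat, pval = mannwhitneyu(
    df.loc[df["robot"] == 1, "share_transferred"],
    df.loc[df["robot"] == 0, "share_transferred"],
    alternative="two-sided",
)
print(f"U = {stat:.1f}, p = {pval:.3f}")
```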
We estimate regression specifications to control for demographics and the above survey measures. First, we regress the dependent variable on only the treatment dummy to show the pure treatment effect. We then add demographics in a second specification. In specification three, we control for Team production since the percentage point differences between treatments translate into different absolute amounts transferred, and the budget for this task depends on the amount earned. We also add Worker 2 to account for differences between the two roles and Production in trial round to account for differences in observed ability from the production round. In the fourth specification, we add all survey items related to attitudes applying directly to the environment in the production round. In the fifth and last specification, we add more general attitudes. We will refer to this specification as the saturated specification and base our discussion of findings mainly on this specification.
Unless mentioned otherwise, our results are robust to using the absolute number of shared ECUs. Using Share transferred allows us to compare the participants' behavior across teams; this way, means and coefficients can be interpreted as percentage points. We provide robustness checks using the absolute amounts in Appendix A. Also, we use heteroskedasticity-robust sandwich estimators (Eicker, 1967; Huber, 1967; White, 1980) in all regressions in the main text, but the results using cluster sandwich estimators (Rogers, 1993) on the team level can be found in Appendix C.
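Regressions with heteroskedasticity-robust and team-clustered standard errors of the kind described above could be estimated along the following lines. This is a sketch using statsmodels on a synthetic data set with hypothetical variable names; it is our illustration, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis data set (not the study's actual data): one row per participant.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "team_id": np.repeat(np.arange(24), 2),
    "robot": np.repeat(rng.integers(0, 2, 24), 2),
    "worker2": np.tile([0, 1], 24),
    "team_production": np.repeat(rng.integers(8, 16, 24), 2),
    "share_transferred": rng.normal(10, 30, 48),
})

formula = "share_transferred ~ robot + worker2 + team_production"

# Heteroskedasticity-robust standard errors, as used in the main text.
ols_robust = smf.ols(formula, data=df).fit(cov_type="HC1")

# Cluster-robust standard errors on the team level, as reported in the appendix.
ols_cluster = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["team_id"]}
)

print(ols_robust.params)
print(ols_cluster.bse)
```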
4. Results
The preregistered formulation of Hypothesis 1 was that the presence of robots in a production line does not influence the share transferred in the bully game played with the earned endowment from the production round. Figure 6A reveals that we cannot reject this null hypothesis without controlling for participant and team characteristics (p = 0.282, Mann-Whitney U-test). Qualitatively, contributions in the Robot treatment were 10.597 percentage points higher than in the NoRobot treatment, translating into a difference of 100.542 ECU in absolute earnings.
When considering behavior by roles in the production team in Figure 6B, we see that most of that aggregate difference was driven by the participants in the role of Worker 2. When controlling for the covariates described in our empirical strategy, we see in Table 1 that the coefficient on our treatment dummy is positive, irrespective of our specification. In our last and preferred specification, participants transferred an 11.102 percentage points higher amount to the other participant in the Robot compared to the NoRobot treatment.
A complete description of the remaining control variables can be found in Appendix A.
If we use Amount transferred, this does not change (p = 0.270, Mann-Whitney U-test). For both tests, we summed up the share or amount transferred within each team and ran the test with twelve independent observations per treatment. This test result is also qualitatively identical when accounting for clustering on the team level as suggested by Rosner et al.
The distribution of shares transferred reveals that about a third of the participants did not transfer any earnings from or to the other participant in their team. In the NoRobot treatment, 37.50% of the participants took a part of the earnings from the other participant. As 29.17% of the participants in this treatment gave parts of their earnings to the other participant, the remaining 33.33% of participants in the NoRobot treatment chose neither to take a part of the earnings from the other participant nor to give parts of their earnings to the other participant (in other words, their Amount transferred or Share transferred was zero). In the Robot treatment, 25.00% of the participants took earnings from the other participant, whereas 41.67% gave parts of their earnings to the other participant. This leaves 33.33% of participants in the Robot treatment who chose an Amount transferred or Share transferred of zero. We checked whether the higher shares of participants choosing a strictly positive or negative transfer are driven by our treatment.
Table 2 reports the results from ordered probit regressions on whether the Share transferred was strictly positive, equal to zero, or strictly negative. Though the treatment coefficient is not significant in all specifications, it is marginally significant in specifications (4) and (5). Thus, the treatment effect is partially due to differences in the decision on whether to transfer at all and in which direction, and only partially due to the decision on how much to transfer.
Consider the Supplementary Table in Appendix A for regressions on the absolute amounts transferred.
Note that we considered zero-inflated and other two-step procedures, but they are not suitable for our data.
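An ordered probit specification on Share categorical of the kind reported in Table 2 could be estimated as sketched below, using statsmodels' OrderedModel on hypothetical data with variable names of our choosing; it is not the authors' code or specification.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data (not the study's actual data): 1 = took, 2 = transferred nothing, 3 = gave.
rng = np.random.default_rng(1)
n = 48
df = pd.DataFrame({
    "robot": rng.integers(0, 2, n),
    "team_production": rng.integers(8, 16, n),
    "share_categorical": rng.integers(1, 4, n),
})

# Treat the outcome as an ordered categorical variable and fit an ordered probit.
endog = df["share_categorical"].astype(pd.CategoricalDtype([1, 2, 3], ordered=True))
model = OrderedModel(endog, df[["robot", "team_production"]], distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```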
Result 1. We find suggestive evidence that participants behave more prosocially in the Robot than in the NoRobot treatment. In our data, this effect is partially driven by the extensive margin (whether they transfer any nonzero amount or not and in which direction).
Hypothesis 2 relates to one potential mechanism to explain this finding. Participants in the Robot treatment potentially valued the earnings generated from the production round less than those in the NoRobot treatment because these earnings were generated with the robots' assistance and not solely through their own work. The absence of such an effect would, in turn, indicate that being in a mixed human-robot team does not affect the workers' perceived value of the individually earned reward. Looking at Figure 7A, we see that participants in the Robot treatment stated a 19.167 ECU (16.04%) lower WTA for the chocolate bar than participants in the NoRobot treatment. Still, this difference is not statistically significant (p = 0.298, Mann-Whitney U test).
The direction of the difference is the same for Worker 1 (difference = 27.083 ECU or 23.83%, p = 0.384, Mann-Whitney U test) and Worker 2 (difference = 11.250 ECU or 8.98%, p = 0.743, Mann-Whitney U test), which can also be seen from Figure 7B.
In line with what can be seen from the figure, the regression results reported in Table 3 corroborate that there is no treatment difference in the WTA for the non-monetary part of earnings. Yet, the coefficient is negative in all specifications, with magnitudes ranging between 14.584 ECU and 24.703 ECU.
Result 2. We do not find evidence for a difference in the WTA for the non-monetary part of earnings from the production round.
The complete regression results with all coefficients for all controls can be found in Supplementary Table 6. Results are robust to specifying clustered standard errors on the team level, as can be seen in Supplementary Table 7.
This concludes our investigation into our preregistered hypotheses.
5. Discussion
We found mild evidence for more prosocial behavior in the Robot treatment compared to the NoRobot treatment. We hypothesized that a lower valuation for the earned income could have led to this result. While the WTA for the chocolate bar was, on average, lower in the Robot treatment, that treatment difference was not statistically significant. Given the low number of observations, we cannot interpret this as evidence for a null result. In the following, we discuss this limitation together with other shortcomings of our design and present one exploratory finding from our sample that could be interesting for future research.
5.1. Limitations of the experiment
We report results from a relatively small sample. This sample size was chosen due to the intensive data elicitation process that required a significant number of experimenters, labor hours of assistants, and their focus when counting components. Thus, to ensure the experiment could be adequately controlled and data collection went smoothly, we opted for a straightforward design with only two treatments and, thus, a smaller sample. Therefore, our experiment is a good starting point for investigating human-human interaction in the presence of physical and visible robots in a natural manufacturing context. This opens the potential to, among other things, investigate whether algorithm aversion (Dietvorst et al., 2015, 2018) or appreciation (Logg et al., 2019) carry over to physical robots or whether our results on the allocation of responsibility are robust to the provision of incentives for shifting responsibility to the robots.
Given this small sample size, however, any effects would need to be rather large to be picked up by statistical tests. Our experiment employed a comparably light treatment difference from an economist's perspective. We kept monetary and non-monetary incentives identical across treatments, and, whereas our treatment was administered in the production round, we measured treatment differences in the subsequent stage that, in itself, did not differ across treatments. Yet, the post hoc statistical power for the tests of our two hypotheses is arguably too low to make our results conclusive. In combination with correcting for multiple comparisons (List et al., 2019) and the resulting statistical implications for the results of this paper, our experiment should be seen as a starting point, demonstrating that economic experiments in ecologically valid yet fairly controlled environments, namely learning factories, are feasible.
Participants received the chocolate bar if their individual performance crossed a threshold. We implemented it this way to allow for sampling Worker 1s even if Worker 2 was too slow to cross that threshold. Yet, this performance is actually independent of the robots' productivity, at least for Worker 1. This might explain why we see a slightly higher WTA for Worker 2 in Table 3, even though it is only statistically significant in specification (4), which is not the fully saturated specification. On the other hand, Figure 7B instead seems to suggest that any potential treatment effect would be lower for Worker 2 than Worker 1, leading us to believe that the same elicitation based on a threshold for the group performance would not have led to a greater difference than the one we reported.
Our participant sample consisted predominantly of students with some connection to engineering and manufacturing subjects or STEM fields. As such, they are more exposed to robotics and artificial intelligence and are likely to be keener to use technology. More generally, students are relatively young and interact more regularly with new digital technologies in their private lives than other strata of society. Even though our participant pool is constant across treatments, this would be problematic if it affected how strongly participants perceive the Robot treatment to differ from the NoRobot treatment. In fact, it seems plausible that we would observe larger effects on people outside the context of a technical university for whom production robots would be novel and unusual.
We cannot make generalizable claims from our small sample, but we can discuss how hopeful one can be to obtain generalizable results in future studies with larger samples and potentially in other countries. In this study, we used a German sample, i.e., we ran our experiment in a country with a relatively high share of GDP attributed to the secondary or manufacturing sector. Bartneck et al. (2005), though, show that there are no large differences in attitudes toward robots compared with other industrialized countries, e.g., the US, Japan, or the Netherlands, when it comes to interaction with a robot. We see this as an indication that researchers can more broadly gain valuable insights from using learning factories for studies on human-robot interaction and human-human interaction in robot-augmented setups.
For Hypothesis 1, our post hoc statistical power is at .% for the non-parametric test, and for the corresponding regression analysis, this figure is at .%. For Hypothesis 2, these figures are . and .%, respectively.
We compared the means of age and gender in our sample as well as the distribution of study subjects to the summary statistics of the subject pool at the time of the experiment, as well as to another experimental dataset of colleagues, and found no systematic differences. Thus, as far as we can infer from these characteristics, the invitation to the learning factory did not attract a specific, tech-savvy subset of the subject pool.
Finally, our setup did not allow for a trade-off between quality and quantity. The production task for both Worker 1 and Worker 2 was very simple, and bad quality and non-completion (i.e., simply handing the raw/input materials to the shelves) were almost indistinguishable. As such, we only counted pieces of "good" quality because incomplete intermediate products could not be processed any further. However, as the production steps were so simple, we essentially never observed a product of "bad" quality.
5.2. The allocation of responsibility in hybrid human-robot teams
We asked participants to state who or what was responsible for team production not being greater than it actually was. The bully game decision might have resulted from responsibility being shifted away from one participant to the other participant, the robot, or the transfer station, respectively. Thus, a shift in how workers allocate responsibility between themselves and the robots (Kirchkamp and Strobel, 2019) or even blame-shifting (Bartling and Fischbacher, 2012; Oexl and Grossman, 2013) could be a mechanism explaining the increased amounts transferred in the Robot treatment. Overall, between a participant and the respective human coworker, there was no pronounced difference in the allocation of responsibility between the two treatments (p = 0.439, Mann-Whitney U test). This still holds when we consider the two roles separately (p = 0.939 for Worker 1 and p = 0.262 for Worker 2, Mann-Whitney U tests). This can also be seen in Figure 8.
When we consider how responsibility was divided between a participant and the transfer station (NoRobot treatment) or the robot (Robot treatment) in Figure 9, we see that Station TR was allocated less responsibility in the Robot treatment than in the NoRobot treatment. This is in line with Leo and Huh (2020), who also found that humans allocate more responsibility (or as the authors call it, "blame") for a mistake or bad outcome to themselves than to robots. This difference is not statistically significant overall (p = 0.101, Mann-Whitney U test). Still, looking at the two roles separately, we find a marginally statistically significant treatment difference only for Worker 1 (p = 0.072 for Worker 1 and p = 0.434 for Worker 2, Mann-Whitney U tests). This might have been driven by the fact that Worker 1s could watch the robot throughout Stage 2 in their peripheral view. In contrast, Worker 2 would only see the robot when turning to get another intermediate product for their production step. Consider Figure 10. When we asked participants how they would divide responsibility between the transfer station (NoRobot treatment) or the robot (Robot treatment) on the one side and the human coworker on the other, we found that the participants allocated more responsibility to the other participant than the robot.
The way we measure perceived blame or responsibility in the survey is similar to Kirchkamp and Strobel (2019), Hohenstein and Jung (2020), and Leo and Huh (2020).
This difference is statistically significant (p = 0.014, Mann-Whitney U test). Looking at the two roles separately, we also find this statistically significant treatment difference for Worker 1 but not Worker 2 (p = 0.034 for Worker 1 and p = 0.168 for Worker 2, Mann-Whitney U tests). As with the previous response to the allocation of responsibility, this might be due to different visual exposure to the robot between the roles.
Note that these variables were included among the context-related attitudes in our regression specifications, where we saw the largest effect of the treatment coefficient both in size and statistical significance (see Tables 1, 2).
6. Conclusion
We report evidence from a field-in-the-lab experiment in which we varied the team composition from a human-human team to a hybrid human-robot team. We find suggestive evidence that the robots in our experiment changed the social context of the work interaction, leading to more prosocial behavior among the human workers in the bully game. We find no statistically significant evidence that the valuation for earned income differs between our treatments. Our data suggest that the participants blamed themselves and the other participant in their team more than the robot for not having produced more in the production round. This has important implications for future research into the diffusion of responsibility in hybrid human-robot teams. The fact that they do not use the robots as scapegoats for productivity issues shows a relatively high acceptance of robots in the task. The negative reading is that people might rely too strongly on the robots' performance and overly search for responsibility in their human coworkers. This could create tensions in the long run. Future studies investigating how social pressure during the production round and prolonged and more complex interaction in hybrid teams affect these behaviors, potentially in settings in which workers need to make more autonomous decisions during production, could answer these questions.
Beyond the study of social interactions between humans and income valuation in hybrid human-robot teams, our field-in-the-lab approach (Kandler et al., 2021) offers a promising methodology for studying various aspects of human-machine interaction in work environments, including issues related to "robotic aversion," a more direct investigation of the allocation of responsibility in hybrid human-robot teams, and the optimal design of hybrid-team work environments. A key advantage of this approach is its ability to replicate real-world conditions and processes in a controlled laboratory setting. It allows researchers to manipulate variables of interest and measure the impact on human behavior and performance. For example, when studying "robotic aversion," researchers could manipulate a robotic co-worker's autonomy level and measure the impact on human attitudes and behavior toward the robot. Another advantage of the field-in-the-lab approach is its ability to capture dynamic interactions between humans and machines over time. By running experiments over multiple rounds or sessions, researchers can track how attitudes and behaviors evolve as individuals become more familiar with their robotic coworkers. This is particularly relevant for studying issues related to shared responsibility and blame-shifting, as these behaviors may change as humans become more accustomed to working with robots.
FIGURE 1. Component to be produced by the production teams.
FIGURE 2. Production inputs for participants in the role of Worker 1 at Station 1. (A) Clips. (B) Magnets. (C) Pole housing.
FIGURE 3. Production inputs for Station TR. (A) Armature shaft with ring magnet. (B) Brush holder.
FIGURE 5. Setup of both treatments. (A) Robot treatment. (B) NoRobot treatment. Team members and experimenters are depicted with ellipses, stations and shelves with rectangles. The light gray ellipses for the robots in (A) represent their switched-off state in this treatment. Station TR comprised both the robots and the conveyor belt in the Robot treatment and only the conveyor belt in the NoRobot treatment. Thick black lines represent visual covers, thick gray dashed arrows indicate the flow of production, thick gray lines represent solid walls, and the thin dashed line around Station TR depicts the conveyor belt with arrows indicating the direction of movement.
FIGURE 6. Share transferred in the bully game. (A) Across treatments. (B) Across treatments and roles. Dots represent means. Whiskers represent % confidence intervals. Dashed lines indicate the boundaries of admissible choices. The dotted line provides a visual reference at zero.
A set of further exploratory analyses on the survey responses can be found in Appendix B.
FIGURE 7. WTA for the chocolate bar. (A) Across treatments. (B) Across treatments and roles. Dots represent means. Whiskers represent % confidence intervals. Dashed lines indicate the boundaries of admissible choices.
FIGURE 8. Degree to which participants allocated responsibility to the other participant (high value) as compared to themselves (low value) for not having produced more components. Dots represent means. Whiskers represent % confidence intervals. Dashed lines indicate the boundaries of admissible choices.
FIGURE 9. Degree to which participants allocated responsibility to the transfer station/robot (high value) compared to themselves (low value) for not having produced more components. Dots represent means. Whiskers represent % confidence intervals. Dashed lines indicate the boundaries of admissible choices.
FIGURE 10. Degree to which participants allocated responsibility to the transfer station/robot (high value) compared to the other participant (low value) for not having produced more components. Dots represent means. Whiskers represent % confidence intervals. Dashed lines indicate the boundaries of admissible choices.
TABLE 1. Linear regressions of the share given or taken in the bully game. Robust standard errors in parentheses; *p < 0.10, **p < 0.05.
TABLE 2. Ordered probit regressions on whether Share transferred was strictly positive (3), equal to zero (2), or strictly negative (1). Robust standard errors in parentheses; *p < 0.10. The complete regression results with all coefficients for all controls can be found in Supplementary Table 10. Results are robust to specifying clustered standard errors on the team level, as can be seen in Supplementary Table 11.
TABLE Linear regressions of the WTA for the chocolate bar.Robust standard errors in parentheses; * * p < 0.05, * * * p < 0.01.The complete regression results with all coefficients for all controls can be found in Supplementary Table12.Results are robust to specifying clustered standard errors on the team level, as can be seen in Supplementary Table13. | 13,662.6 | 2023-11-06T00:00:00.000 | [
"Computer Science",
"Engineering",
"Psychology"
] |
Toxocara spp. seroprevalence in pregnant women in Brasília, Brazil
Introduction: The impact of gestational toxocariasis is an understudied topic in female reproductive health. We estimated anti-Toxocara IgG prevalence among pregnant women in Brasília, Brazil, and investigated the association of the infection with history of abortion and contact with pets. Methods: Infection was diagnosed using ELISA with excretory/secretory antigens. Participant information was obtained via questionnaires. Results: Of 311 pregnant women, 23 were anti-Toxocara IgG positive. Twenty-two percent of anti-Toxocara IgG-positive participants and 26% of seronegative participants had previously miscarried. Previous contact with pets was associated with higher toxocariasis prevalence. Conclusions: A direct relationship between toxocariasis and contact with pets was observed, but there was no relationship with miscarriage prevalence.
Toxocariasis is a disease caused by accidental infection of humans by Toxocara canis or Toxocara cati, roundworms found in dogs and cats, respectively. This zoonosis is widespread throughout the globe and is transmitted to humans through food or water contaminated with feces containing parasite eggs. Larvae released in the small intestine may result in damage to several tissues and systemic inflammatory responses, showing a wide range of symptoms (1). The diagnosis can be made by using histological or serological methods. However, because of the difficulty of demonstrating the presence of larvae in tissue obtained by biopsy, immunodiagnosis is the first-choice test. Enzyme-linked immunosorbent assay (ELISA) using secretory-excretory antigens (TES) of Toxocara spp. larvae is the most recommended method, due to its low cost and high sensitivity (2).
Gestational toxocariasis is an understudied topic, and few articles have been published on this subject. It has been estimated that the infection rate in pregnant women can reach 39%, depending on the analyzed area. Only one study has investigated the prevalence of toxocariasis among pregnant women in Brazil, reporting an infection rate of 6.4% (3). Data on the impact of the disease on women's reproductive health are also scarce, but there is evidence of increased infertility with tubal blockage and abortions (4). Furthermore, congenital transmission of this parasite has been reported and has been linked to newborn ocular injury (5). Thus, toxocariasis is a major public health problem that has not received the attention needed to improve identification, treatment, and control. The current study aimed to determine the prevalence of Toxocara spp. in a low-income population of pregnant women in Brasília (Federal District), Brazil. In addition, this study investigated the association of the infection with abortion and contact with dogs or cats.
To address these aims, we conducted a cross-sectional study on Toxocara spp. seroprevalence in, and administered a questionnaire to, 311 young pregnant women (20-30 years old) from the antenatal clinic of the University of Brasília Hospital. Patients resided in several administrative regions of the Federal District, and the majority (90%) lived in very low income suburbs. The sample size was determined using Epi-Info version 6.0 software, based on an expected prevalence of 30% for anti-T. canis-positive immunoglobulin G (IgG), to evaluate with a 95% degree of confidence and a tolerated error of 5%. The exclusion criterion was based on a recent infection report (last 2 years) for other helminths. For each patient, 3 mL of peripheral blood was collected via venipuncture of the forearm. A questionnaire was administered and included age; race; marital status; address; and history of intestinal parasites, previous pregnancy, and contact with domestic animals. Only patients who signed the consent form were included in the study. All procedures were carried out in compliance with Brazilian regulations and international guidelines.
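The sample-size calculation itself is not detailed in the text. A minimal sketch, assuming the standard Cochran formula n = z²·p·(1−p)/e² that tools such as Epi-Info commonly implement (an assumption, not a statement of the software's internal procedure), is:

```python
import math

def cochran_sample_size(expected_prevalence, tolerated_error, z=1.96):
    """Cochran's formula: n = z^2 * p * (1 - p) / e^2 (z = 1.96 for 95% confidence)."""
    p = expected_prevalence
    return math.ceil(z ** 2 * p * (1 - p) / tolerated_error ** 2)

# Inputs reported in the study: 30% expected prevalence, 95% confidence, 5% tolerated error.
print(cochran_sample_size(0.30, 0.05))  # roughly 323
```

For these inputs the formula gives a target of roughly 320 participants, of the same order as the 311 women enrolled; any finite-population correction applied by the software would lower the target somewhat.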
Of the 311 tested samples, ELISA identified anti-Toxocara spp. antibodies in 23 pregnant women, corresponding to a specific IgG-positive prevalence of 7.2% (Figure 1A). The distribution of seropositivity of toxocariasis according to history of contact with cats or dogs is shown in Figure 1B. Note that the incidence of toxocariasis is higher (56.5%) in those with a history of contact with dogs or cats than in those without previous contact with these animals (26.4%). No relationship was observed between presence of infection and occurrence of abortions, since positive and negative pregnant women reported similar rates of miscarriage (Figure 1B).
The prevalence of toxocariasis among adults in Brazil's Federal District is unknown. Accordingly, the only comparison that can be made is to a study conducted at the University of Brasília Hospital that investigated anti-Toxocara spp. IgG in children, showing a prevalence of 22% (6). This contrasts with other studies showing no statistically significant difference in the prevalence of anti-Toxocara spp. antibody positivity in adults and children from the same region (7). Since IgG antibodies are produced after host exposure to a particular antigen, initial infections of childhood often persist into adulthood (8); thus the percentage of positive individuals should remain the same. To clarify this issue, a cohort investigation should be conducted, taking into account pregnant women and type of antibody. Some hypotheses can explain the lower prevalence found in the current study. One may be related to the hemodynamic and immunologic changes that occur in pregnant women and may affect IgG antibody detection. During pregnancy, due to placental demand, there is a 30-50% increase in plasma volume (9). In consequence, this hemodilution decreases the relative concentration of blood elements, including immunoglobulins. As antibody detection using ELISA consists of an antigen-antibody reaction, the decrease in plasma IgG concentration can lead to antibody titers below the minimum detectable level. In addition, physicochemical and biological changes occur with IgG during pregnancy, interfering with the formation of antigen-antibody complexes detectable by serological tests. These molecules, termed asymmetric IgG, are unable to form the insoluble aggregates detected by ELISA, although they maintain their ability to bind to an antigen (10). Thus, patients with positive serology prior to pregnancy could show seroconversion during gestation.
Our data suggest no significant difference in abortion history between toxocariasis-positive and -negative groups. This finding, however, differs from other data in the literature. For instance, in a study including 52 postpartum women, 35.3% of those with positive anti-Toxocara spp. IgG had at least one abortion in previous pregnancies, while the frequency of abortion in those with negative serology was 8.6% (4). Moreover, Gasanova (11) found a prevalence of abortion in 56% of T. canis-infected patients, as well as reproductive changes, reinforcing the tropism of Toxocara spp. to the genital tract.
Contact with domestic dogs or cats is considered a risk factor associated with T. canis or T. cati infection. The present study showed a higher prevalence of toxocariasis in pregnant women who had contact with dogs or cats. This finding is in agreement with other published data showing a strong association between contact with domestic animals and development of toxocariasis (3) (12). Indeed, a recent article revealed that a history of owning a dog increases the rate of T. canis seropositivity almost 13-fold (13).
In conclusion, toxocariasis is one of the most important neglected diseases listed by the Centers for Disease Control, having significant morbidity in socioeconomically disadvantaged countries. Due to the lack of adequate surveillance programs, the true number of cases of toxocariasis is likely to be underestimated (2). In endemic areas, poor sanitation habits and education, including poor hygiene and contact with animals, are keystones for disease establishment.
There are few reports of toxocariasis in pregnant women, and there is disagreement in the literature regarding whether toxocariasis is linked to a higher rate of abortions in pregnant women. However, diagnosis based on antibody detection may fail to identify the infection in pregnant women. Unfortunately, molecular tests and direct optical diagnosis also have several limitations (14); accordingly, there is a need to develop new strategies to improve identification of Toxocara infections. Further investigation is needed to confirm or refute the suspicion that the prevalence of anti-Toxocara spp. IgG positivity among pregnant women is related to infertility and genitourinary changes.
Ethical considerations
The research protocol was approved by the Ethical Committee on Human Research of Brasília University (protocol number 060/2003). After collection, blood was centrifuged to obtain serum and was sent to the Tropical Medicine Institute of São Paulo. Detection of anti-Toxocara spp. IgG was performed using ELISA, following previously standardized protocols (6). Briefly, 96-well plates were sensitized with 0.130 µg of TES antigen. Serum samples were pre-adsorbed with the SoAs antigen to avoid cross-reaction with Ascaris proteins and then incubated with Toxocara spp. antigens. The second antibody consisted of peroxidase-conjugated anti-human IgG. Optical densities were determined at 492 nm. The cut-off absorbance value was defined as the mean absorbance for negative controls plus three standard deviations. Test and control serum assays were run in duplicate. Epi-Info version 6.0 software was used for statistical analysis. The chi-squared test was employed to confirm the difference among groups. The level of significance was established at α = 0.05.
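To illustrate the two analysis steps described above (the cut-off rule and the chi-squared comparison), the sketch below uses hypothetical optical densities and purely illustrative counts, not the study's data; scipy is assumed as the statistics backend.

```python
import numpy as np
from scipy.stats import chi2_contingency

def elisa_cutoff(negative_control_od):
    """Cut-off = mean OD of negative controls + 3 standard deviations."""
    od = np.asarray(negative_control_od, dtype=float)
    return od.mean() + 3 * od.std(ddof=1)

# Hypothetical optical densities at 492 nm for negative control sera.
cutoff = elisa_cutoff([0.08, 0.10, 0.09, 0.11, 0.07])
print(f"cut-off OD: {cutoff:.3f}")

# Chi-squared test of seropositivity vs. pet contact (counts are illustrative only).
table = np.array([[13, 10],     # seropositive: contact / no contact
                  [96, 192]])   # seronegative: contact / no contact
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```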
FIGURE 1. Prevalence of toxocariasis in pregnant women of Brasília, Brazil, and infection-related factors. (A) Anti-Toxocara spp. IgG-positive and -negative patients determined using an enzyme-linked immunosorbent assay to detect the presence of secretory-excretory antigens. (B) Outer circle: association of anti-Toxocara spp. IgG serology with canine or feline contact (p < 0.01); inner circle: abortion (p = 0.86). IgG: immunoglobulin G. | 2,023 | 2016-09-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Elastic evolution of a self-healing ionomer observed via acoustic and ultrasonic resonant spectroscopy
Self-healing poly (ethylene co-methacrylic acid) ionomers (EMAA) are thermoplastic materials that when punctured, cut, shot or damaged in a variety of ways, are capable of autonomously reorganizing their physical structure to heal and, in many instances, permanently seal the damaged location. However, a complete picture of the mechanisms responsible for their unusual behavior is not well understood. In this article we report the observation of time dependent acoustic and ultrasonic spectral evolution, measured using resonant acoustic and ultrasonic spectroscopy, for both pre and post-damage EMAA samples. The results provide a means to differentiate healing phases, quantify healing timescales, and potentially elucidate the composition parameters that most significantly impact healing behavior.
The discovery of spectral variation in damaged EMAA samples led to an increased interest in acoustic methods as a means to determine healing timescales and improve sample characterization. Building on that work, we used TDRS to observe post-damage resonant spectral shifts and determine the approximate secondary and tertiary post-damage healing timescales of an EMAA ionomer. In addition, it was discovered that persistent spectral evolution was present in all samples, both for undamaged EMAA samples and for damaged EMAA samples after their post-tertiary healing phase was complete.
Experimental Methods and Results
Determination of the spectral evolution was accomplished by measuring the acoustic and ultrasonic spectra of both pre- and post-damage EMAA samples. The results reported in this article are from three samples of EMAA-0.6Na, known as DuPont Surlyn 8920, with 60% of the methacrylic acid groups neutralized by sodium. Samples were cut into parallelepiped shapes from an approximately 1.4 mm thick EMAA pressed film using a utility razor. Details of the EMAA pressed film production can be found in the literature 11. Data from the three identically prepared samples reported here had dimensions (3.56 ± 0.04) × (4.64 ± 0.05) × (1.464 ± 0.002) mm³, (7.11 ± 0.04) × (7.86 ± 0.07) × (1.43 ± 0.01) mm³ and (4.23 ± 0.09) × (4.11 ± 0.02) × (1.33 ± 0.01) mm³, hereafter called samples a, b and c respectively. Prior to spectral measurement, all samples were stored at room temperature in hermetically sealed containers with desiccant in order to minimize moisture exposure.
The resonant spectra of all EMAA samples were measured using a Magnaflux RUSpec resonant spectral system, details of which can be found in the literature 11,29. Each sample was mounted between two piezo transducers and a range of frequencies, typically 5-50 kHz, was swept. Prior to damage, all samples were repeatedly scanned without removal from the RUSpec system over a period of up to 48 hours, in order to provide a baseline for subsequent comparison with damaged samples. Results for sample b are shown in Fig. 1. These polymers dissipate their vibrational energy rapidly, resulting in resonances that are broad and often overlap, which can complicate the analysis. By using a multi-peak fitting algorithm it was possible to extract the individual resonances, as shown in Fig. 1(b) 30.
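The multi-peak fitting algorithm of ref. 30 is not specified in this extract. The sketch below shows one generic way such overlapping resonances can be separated, by fitting a sum of Lorentzian line shapes to a synthetic spectrum; the peak positions, widths, and noise level are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, amp):
    """Single Lorentzian resonance centred at f0 with half-width gamma."""
    return amp * gamma**2 / ((f - f0)**2 + gamma**2)

def two_peak_model(f, f1, g1, a1, f2, g2, a2):
    """Sum of two overlapping resonances, standing in for a broad EMAA spectrum."""
    return lorentzian(f, f1, g1, a1) + lorentzian(f, f2, g2, a2)

# Synthetic spectrum standing in for a measured 5-50 kHz sweep.
freq = np.linspace(5.0, 50.0, 2000)
true = two_peak_model(freq, 18.0, 2.5, 1.0, 24.0, 3.0, 0.7)
measured = true + np.random.default_rng(0).normal(0, 0.02, freq.size)

# Initial guesses for (centre, width, amplitude) of each peak.
p0 = [17.0, 2.0, 1.0, 25.0, 2.0, 0.5]
params, _ = curve_fit(two_peak_model, freq, measured, p0=p0)
print("fitted centres (kHz):", params[0], params[3])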
As can be seen in Fig. 1(c) and (d), it was discovered that the undamaged samples exhibited low-level spectral evolution. In addition, the observed elastic variation was persistent and appeared in all tested EMAA samples with an average value of (7.5 ± 2) × 10⁻⁴ kHz/min. In contrast, spectral measurement of typical high quality crystalline materials studied using an identical experimental configuration, such as samples composed of aluminum, steel, Si, rare earth scandates and others, have not exhibited detectable frequency drift within the experimental resolution of a few parts per million 29. However, since the RUSpec system was not humidity controlled, sample uptake of moisture is possible during the experiment. In this case the observed low-level spectral and elastic evolution is consistent with the hygroscopic nature of EMAA-0.6Na, with air and moisture exposure affecting sample resonant spectra over time 31. In addition, other recent experiments have shown that EMAA samples of similar composition become brittle with age, consistent with the elastic evolution reported here 32.
The undamaged samples were then removed from the RUSpec system and damaged using a 3 mm Bostitch pin punch driven into the samples using a 300 gram mass-loaded Dytran 5805A impulse hammer, resulting in significant damage to the samples as shown in Fig. 2.
Damaged samples were then returned to the RUSpec system as quickly as possible. The time from impact to initiating the spectral scans varied from a minimum of 23 seconds for sample a up to a maximum of 157 seconds for sample b. The time difference in loading the samples was due in part to samples healing around the pin punch.
The resonant spectra of the damaged samples were then repeatedly measured over a period of up to 22 hours without sample removal from the RUSpec chamber. Since the physical geometry is altered during the damage event, the location of the resonant peaks can vary significantly from that of the undamaged samples, as can be seen in Fig. 3. However, the change to the resonant spectrum due to the new geometry occurs almost instantly after damage during the first healing phase, typically on the order of less than 1 second. All post-damage spectral evolution reported in this article occurred after this time, where all macroscopic changes in sample geometry had ceased to be visible. As can be seen in Fig. 3(b), dramatic evolution appears in the post-damage resonant spectra after this time.
Figure 2 caption: Three EMAA self-healing samples are shown before damage (left) and after damage (right) from a 3 mm pin punch. The initial molten state of the EMAA material caused the samples to heal around the 3 mm pin punch immediately after impact, leaving behind the cavity shown in the figures. The black mark on each sample was used to preserve orientation during remounting of the sample in the RUSpec chamber.
Once the spectrum of each post-damage sample was measured, the identified resonant frequencies were plotted as a function of time. Partial spectra illustrating the time dependence for the three different post-damage samples are shown in Fig. 4. As can be seen in Fig. 4, all samples exhibited frequency changes that increased with time and lasted for approximately 3-5 minutes after damage, which typifies the secondary healing phase. For the samples reported here the average rate was (1.9 ± 0.6) × 10⁻¹ kHz/min, approximately two orders of magnitude larger than the undamaged sample rates. After this secondary post-damage state, a transition is visible in all the spectral evolution plots. During the next roughly 30 to 60 minutes, resonant frequency variation continued to occur during the tertiary healing phase, but at a reduced rate of (4 ± 2) × 10⁻² kHz/min, approximately one order of magnitude larger than the undamaged sample rates. During this phase most, but not all (e.g. sample c of Fig. 4), transitions resulted in increased resonant frequency. It is important to note that each resonant mode can couple to different elastic moduli. In the case of sample c, the illustrated mode appears to couple to elastic moduli that experience significant elastic softening during the tertiary healing phase.
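The article does not state how the phase-wise rates were computed from the frequency-versus-time tracks. One plausible reduction, shown here purely as a sketch with hypothetical values, is a least-squares slope fitted over each healing phase:

```python
import numpy as np

def drift_rate_khz_per_min(time_min, freq_khz):
    """Least-squares slope of resonance frequency vs. time, in kHz/min."""
    slope, _ = np.polyfit(np.asarray(time_min), np.asarray(freq_khz), 1)
    return slope

# Illustrative track of one resonance during the secondary phase (values are hypothetical).
t = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]               # minutes after damage
f = [21.10, 21.19, 21.28, 21.38, 21.47, 21.56]    # kHz
print(f"{drift_rate_khz_per_min(t, f):.3f} kHz/min")
```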
After approximately 30-60 minutes all samples returned to an average rate of spectral evolution of (1.9 ± 0.9) × 10⁻³ kHz/min, which is their approximate pre-damage rate. Prior experiments reported elsewhere estimated that the long-term post-damage elastic evolution of similar EMAA samples may take over 18 hours to reach equilibrium 11. However, based on data from the long-term post-damage phase and the evolution discovered in undamaged samples reported here, that result can be interpreted as a manifestation of the persistent low-level elastic evolution that is always present in this type of EMAA sample.
In addition, the relative impact force, as measured using the impulse hammer, for samples b and c were 82% and 50% respectively relative to sample a. Also, the total volume of samples a and c were approximately 30% of that of sample b. Thus the observed elastic evolution and healing timescales, which were similar for all tested samples, especially during the secondary phase, do not appear to be significantly influenced by variation in the sample size or impact force. In contrast, the consistent nature of the healing timescales and associated transitions do correlate with the size of the damaged area, which was identical for all samples.
A summary of the healing phases and the approximate timescales using the dominant resonant frequency of each post-damage sample is shown in Table 1.
Conclusion
In summary, resonant modes whose resonant frequencies change with time were discovered in both undamaged and damaged EMAA samples, indicative of persistent elastic evolution. Immediate post-damage spectral evolution was observed to increase by approximately two orders of magnitude, and in all cases was characterized by an elastic stiffening phase. Afterwards subsequent post-damage elastic variation, both stiffening and softening, continued for the next 30 to 60 minutes, at approximately one order of magnitude above the pre-damaged values.
For the post-damage interval, the magnitude of the spectral evolution rates made it possible to differentiate the elastic evolution into distinct healing phases, consistent with previous experiments: an elastic melt state, instantly after the damage, a secondary, short term welding and solidification phase, and a third long-term phase. Also, the TDRS method, when applied to post-damage EMAA samples, provided a mechanism to quantify the post-damage timescales.
After the tertiary healing phase, spectral evolution decreased to levels comparable to their pre-damage evolutionary rates, indicating that persistent elastic evolution is always present in these EMAA samples and that environmental exposure may affect the long-term self-healing efficacy. This attribute may significantly influence the applicability of EMAA self-healing materials for extended exposure applications. Additionally, differences in sample composition are expected to alter the sample resonant spectrum and potentially the associated time-dependent spectral evolution both before and after damage; if so, it should be possible to use the TDRS technique to quantify and differentiate the influence of EMAA sample composition, such as ionic content, molecular weight and age, as well as sample geometry and damage conditions, on the sample elastic variation and overall healing behavior.
Data Statement. Access to data presented in this work is available upon reasonable request from the corresponding author. | 2,381.4 | 2017-10-31T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Location-aware Event Attendance System using QR Code and GPS Technology
—Attendance process in a university's event is time consuming and tracking the attendance can be harder. In this paper, a smart event attendance system for a university using QR code and GPS technology is proposed with the objective of speeding up the process of taking students' attendance and tracking full attendance. The method of developing the system is based on two views; a user view, which is the mobile application used by the students, and an admin view, which is the web administration system used by the event organizer. From the evaluation, students' attendance can be traced from the GPS location combined with the QR code. The results indicate that full attendance increases as the system validates attendance through users' identification, location and timestamp during user login and logout. The proposed system contributes to high satisfaction among the users, who claim that the mobile application helps to speed up the event registration process.
INTRODUCTION
In this era, smartphones play a significant role in our daily lives. The emergence of mobile applications has been impacted by convergent factors such as high-speed data networks, relatively cheap devices, high-performing devices, easy-to-use marketplaces for apps, and the need for simple, targeted applications while mobile [1].
Universiti Teknikal Malaysia Melaka (UTeM) is the 14th public university in Malaysia. This university consists of three campuses, which are the main campus, technology campus, and city campus. The university organizes various events for the students from different campuses. Hence, hundreds of students will take part in the events, thus making the attendance taking process time consuming and possibly delaying the start time of the event.
Therefore, the purpose of this study is two-fold: first, to investigate the requirements in event attendance for a university's event, and second, to develop a mobile application that utilizes the QR code and GPS location. A proof of concept for the proposed solution is developed. The system consists of an admin view for the event's organizer to create an encrypted QR code, and a user view for students to log in to the university site by using a unique matric number and password, scanning the QR code shown by the organizer, and recording their current location, which is tracked by GPS, as the attendance. The user view will then communicate the information collected to the admin view to confirm the attendance. This paper is organized as follows. Section II discusses previous studies in attendance management systems, QR code, and GPS solutions. Section III describes the methodology used to develop the student attendance system. Section IV outlines the implementation of this study, the discussion on system evaluation is provided in Section V, followed by the conclusion in Section VI.
II. RELATED WORKS
Conventional attendance systems are still used in most universities. However, this type of attendance system suffers problems like missing names, false attendance, missing attendance sheets, and tedious management. The advancement in attendance systems has incorporated technological tools to improve the shortcomings in the conventional system. In this section, various technologies used to support current work in attendance systems will be discussed.
An efficient web-based application for attendance management is designed to track students' activity in the class by using electronic methods [2]. Besides, the attendance records are stored in the database, and this system is developed with the Model, View and Controller (MVC) architecture, assisted by the Laravel Framework. The purpose of this system is to differentiate the hours of theoretical and practical lessons since the calculation method for the absence rate of students for these lessons is different.
On the other hand, biometric technologies such as face [3], fingerprint [4], and iris [5] recognition have been introduced for students' identification and to reduce the false attendance problem. Although biometric identification prevents fake attendance and proxies, it requires efficient recognition algorithms [6] and higher computation power on the mobile phone, thus increasing deployment cost.
The emergence of sensors has innovated the technology in smartphones and the Student Identification (ID) card, which facilitates the authentication process. Technologies such as barcode [7], Bluetooth [8], RFID [9] and NFC [10] are used in attendance systems to improve the weaknesses in biometric systems. However, there is concern about the substantial additional cost to the university, namely the hardware reader to track the ID [11].
Hence, a QR code based system, which is a combination of mobile devices to display and scan the QR code, is introduced. An online student attendance monitoring system (SAMS) based on QR code and mobile devices is developed in [11]. It seems quantitatively easier to discern the students based on their diligence in attending classes and predict their performance due to the correlation between attendance and academic performance. Besides, the main advantage of this system is to record and monitor student attendance in a more accurate and quicker way. There are two main components in this system, which are the SAMS server and the SAMS application. The system itself is available online and designed for access via mobile devices. A unique QR code is generated and sent to each student by email and is used to record attendance. These QR codes are presented by students to their lecturer either using their smart phone or with a print out, and later scanned by the lecturer using the SAMS application. However, we noticed that SAMS requires the lecturer's intervention, which can disturb the class delivery. Therefore, in order to avoid interruption, a proposed solution is offered by Masalha and Hirzallah [12], where students are required to scan the QR code by using a specific mobile application before or during the class. The QR code is pasted on each displayed lecture slide. The identity of the student is identified when he or she scans the generated QR code, and the attendance is taken and sent to the university's server [12]. Another solution in [12] also includes face recognition that is applied to perform identity verification. A location check will be performed to verify the users' location. However, the lecturer needs to design and develop a specific QR code for each student, and this method is not suitable for the attendance process of an event.
Nonetheless, the main weakness in current works on attendance systems is that the current location of students is not tracked when they take attendance using the student attendance system. This weakness can be seen in SAMS [11], the attendance checking system using QR code in University Sulaimaniyah in Iraq [13], and the smart attendance system in the Institute of Hydropower Engineering and Technology [14]. Hence, cheating phenomena could happen among students when they use these systems. Besides, SAMS [11] is inappropriate for the process of students' attendance in an event because a unique QR code is generated and delivered to each student by email, which is not suitable due to the high overhead when the students come from different faculties.
Consequently, we propose to develop an improved Event Attendance System based on the features discussed above. The event attendance system implemented in this project is a software application created using Android Studio to ensure only authorized students can log into the system using their unique matric number and password. Besides, the login and logout times of students and their current location, which is tracked by the GPS sensor, will be recorded and stored in the database as the attendance. In addition, the process of taking attendance can be sped up as the event organizer only needs to create an encrypted QR code with the event information provided.
A. Requirement Analysis
Prior to this project, we performed requirement analysis in a meeting with the IT Operation officers from the IT Centre (PPPK) in UTeM. Based on the discussions, the functional requirements specification for the proposed system has been identified (refer Table I). Hardware and software requirements have been specified in Table II. To meet the requirements set by PPPK, the non-functional requirements will be evaluated as follows [15]: 1) Performance: The system is capable of scanning the QR code under various settings of lighting, angle and distance.
2) User Acceptance: To demonstrate how the design affects the usage of the application by the user, a preliminary study is conducted that uses a quantitative methodology. A structured questionnaire is used to collect survey data from 20 students who have different levels of IT skill. The objective of this study is to measure the user satisfaction toward the application.
B. System Flow
Figure 1 shows the flow chart of the user view for the Event Attendance System. The android application allows a student to login into the system. After successfully logging into the system, the student will select the "SCAN THE QR CODE" button to scan the QR code which is generated by the university's event organizer. After the scanning process, the information about the event which is included in the QR code, the student's location, and the student identity will be sent to the database server. This is to ensure that the student is within the event hall/location when he or she is registering their attendance. The attendance will only be saved in the database when the student scans the QR code within 15 minutes before the event starts, and the student needs to logout within 15 minutes after the event has ended. Figure 2 shows the flow chart of the admin view. The system administrator is able to choose different options on the main menu page. If the administrator selects the Student Attendance option, he or she can search an event name for viewing the attendance of all students who participated in the event. Besides, the administrator is allowed to view the particular event details if he or she indicates the Event Details selection. In addition, the administrator is able to add new event details, which will be saved in the database, and generate a QR code that consists of the event data in Add Event and QR code generator respectively. Lastly, the administrator is able to view the student details on the Student Details page according to the searched matric number.
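The timing rule above is only described in prose. A minimal sketch of that validation rule, written in Python rather than the Android/Java used by the authors and assuming the scan must fall within the 15 minutes before the start and the logout within the 15 minutes after the end, is:

```python
from datetime import datetime, timedelta

def attendance_valid(event_start, event_end, scan_time, logout_time):
    """Attendance counts only if the QR scan occurs within 15 minutes before
    the event starts and the logout occurs within 15 minutes after it ends."""
    window = timedelta(minutes=15)
    scanned_on_time = event_start - window <= scan_time <= event_start
    logged_out_on_time = event_end <= logout_time <= event_end + window
    return scanned_on_time and logged_out_on_time

start = datetime(2018, 3, 1, 9, 0)
end = datetime(2018, 3, 1, 12, 0)
print(attendance_valid(start, end,
                       scan_time=datetime(2018, 3, 1, 8, 50),
                       logout_time=datetime(2018, 3, 1, 12, 10)))  # True
```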
A. System Architecture
Figure 3 shows the architecture of the proposed system. First, the user needs to log in to the system by using their email address and password. After that, the student needs to scan the QR code that is provided by the organizer. When the student uses the android application to scan the QR code, the application will request the location of the student to ensure he or she is at the correct location to take the attendance.
B. Experimental Setup
The system evaluation is conducted in Universiti Teknikal Malaysia Melaka (UTeM). 20 students who have different levels of IT skill from different faculties were selected to test the system. We created two events for the purpose of evaluation; Hacking Event and Workshop 2 Briefing, with a different location for each event. We conducted three types of evaluation to meet the system requirements, namely system functionality, system performance and user acceptance.
The system consists of two views; the user view, which is accessible through the mobile application, and the admin view from the web administration system. Further discussion on both views is provided in Sections C and D.
C. User View: Mobile Application
The mobile application contains four modules: Login, QR code Scanner, GPS and Attendance modules. In Figure 4, the details of the student such as username, name, faculty, course and login time, which are stored in the database, will be displayed if they successfully log into the system. For attendance input, the student needs to press the "SCAN THE QR CODE" button for scanning the QR code. In Figure 5, the student needs to scan the QR code which contains the event's information by using the QR code scanner. The event's details that are extracted from the QR code will be displayed on this page and saved in the database (refer Figure 6). Additionally, the student can request their current GPS location by pressing the Send Request button. The current location of the student will be tracked and later saved in the database. The acceptance of student attendance is notified by displaying the username, student name and event's details on the mobile page. Figure 7 shows the application tracking the current location of the student. In Figure 8, the event's details such as event name, current location and event end time will be displayed on this page. Besides, the username, student name, and login time of the student will be called from the database and displayed on the page. To ensure the attendance is taken, the Logout button should be pressed once the event has ended for the student to logout from the system. However, if the student logs out from the system before the event has ended, the attendance will not be taken.
D. Admin View: Web Administration System
On the Web Administration main menu page (refer Figure 9), there are five modules that need to be managed; Add Event (refer Figure 10), QR Code Generator (refer Figure 11), Event Details (refer Figure 12), Student Attendance (refer Figure 13), and Student Details (refer Figure 14).
To register a new event, the administrator can add the event details and save them to the database as in Figure 10. The administrator will generate the event QR code on the page. Besides, the administrator can choose the size and error correction of the QR code (refer Figure 11). Event details will be displayed on the page (refer Figure 12). In Figure 13, the administrator can view the students' attendance according to a specific event. Students' attendance will be displayed on the page. The students' attendance list can be downloaded in Excel format.
In Figure 14, the administrator can view the student's details according to a specific matric number, and the student details will be displayed.
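The generation code itself is not given in the paper. The sketch below uses the open-source Python qrcode library (an assumed dependency, not necessarily what the authors used) to encode an event payload with a selectable size and error-correction level, mirroring the admin choices around Figure 11; the payload is plain JSON because the encryption scheme mentioned by the authors is not specified, and the field names are illustrative.

```python
import json
import qrcode

def make_event_qr(event, box_size=10, error_correction=qrcode.constants.ERROR_CORRECT_M):
    """Encode event details into a QR image; box_size and error_correction
    stand in for the size/error-correction options exposed to the admin."""
    qr = qrcode.QRCode(box_size=box_size, border=4, error_correction=error_correction)
    qr.add_data(json.dumps(event))
    qr.make(fit=True)
    return qr.make_image(fill_color="black", back_color="white")

# Hypothetical event record; the schema is illustrative only.
event = {"name": "Hacking Event", "venue": "Main Hall",
         "start": "2018-03-01T09:00", "end": "2018-03-01T12:00"}
make_event_qr(event).save("hacking_event_qr.png")
```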
V. EVALUATION AND DISCUSSION
System evaluation is conducted to verify system functionality, system effectiveness, and user satisfaction.
A. System Functionality
User view and admin view are evaluated in an integration test to examine the functionality of all the components together. Testing used the wireless services provided by the university, without any increment of bandwidth from the IT Center, and also a mobile service provider. All modules in the mobile application successfully connected to the database in the Web Administration system. This concludes that the user view and the admin view are functionally capable of operating. Authentication for the system is unique as it uses an email which consists of the student matric number. The application will check the identity based on the student database stored in the Firebase cloud. Registration of the event is designed to make sure the student will participate in the event to the end. In this case, unless the student logs out, their attendance will not be counted. In addition, there is a condition to be met; they can only logout after the event has ended.
B. Performance Evaluation
Three parameters are used to evaluate the performance of the system: 1) Angle degree: the angle at which the device is held when scanning the QR code. 2) Distance: the distance between the device and the QR code. 3) Brightness level: the level of brightness of the device. These performance parameters are selected to ensure the system can perform with high effectiveness [15].
Tables IV, V and VI illustrate the performance of each parameter selected in this study. In Table IV, four angle degrees are evaluated, namely 30, 45, 90 and 120 degrees. From the table, it can be seen that successful QR scanning applies only for the 90 degree angle. The results from Table IV show that 90 degrees is the preferred angle at which to hold the device when scanning the QR code. However, the status is Fail when the user holds the device at 30, 45, or 120 degrees. This may be because handling the device at these three angles produces a partial view of the QR code, thus making the system unable to detect the QR code. Besides, the distance between the device and the QR code plays an important role in this test.
In Table V, four distances are evaluated, namely 3, 6, 9 and 12 cm. From the table it can be seen that successful QR scanning applies only at distances of 3 cm and 6 cm. From the results in Table V, 3 cm and 6 cm are the most suitable of these distances for scanning the QR code, whereas the status for 9 cm and 12 cm is Fail. At these two distances the device is too far away from the QR code and the system is unable to detect the QR code.
In Table VI, four levels of brightness are evaluated, namely 10%, 30%, 60% and 100%. From the table, it can be seen that successful scanning applies at 60% and 100% brightness. From Table VI, 60% and 100% are the ideal brightness levels for the device to detect the QR code and extract the data inside it. Nevertheless, 10% and 30% brightness are not suitable for the device to detect the QR code because there is not enough light to decode the data that is encoded in the QR code.
C. User Acceptance Evaluation
Upon data collection, the level of acceptance was investigated by evaluating the application user interface quality, reliability, satisfaction and future use. Table VII shows the feedback received from the users after they used the application. The highest average mean among the three categories is the interface quality of the application, which is 4.90. Besides, all users agree that they do not need technical support when using the application, hence the mean is 5.00. This application is accepted by most of the users because the system is user friendly and convenient to use, as the mean is 4.95. In terms of reliability, the average mean is 4.52. We conclude that this application is able to scan the QR code with high effectiveness due to its highest mean of 4.90. For tracking the user's current location, this application scores 4.15. Certain smartphones are slow to display the GPS location; thus the score is low. The average mean of the satisfaction and future use category is 4.80. Therefore, it can be concluded that most of the users are satisfied with this application for taking attendance as its mean is 4.90.
VI. CONCLUSION AND FUTURE WORK
The Location-aware Event Attendance System using QR code and GPS technology is implemented using an android application and a Firebase database in the cloud to manage the attendance information. From the evaluation, the proposed system was capable of taking the student attendance by scanning the QR code. The GPS location, login time and logout time were tracked to ensure full attendance. We found positive feedback for the system in the user acceptance test. However, this system can only support the android application, which makes it inconvenient for iOS users. Furthermore, the proposed system is only capable of tracking the location without calculating the distance to the event venue. In addition, the application also needs a strong Internet connection.
For future work, we plan to improve the application operability to support both android and iOS smart phones. To calculate the distance between the user and the venue, we propose to incorporate the Google Maps Distance Matrix API in the application. To decrease false attendance and secure authentication, the authors also plan to apply a factor-based authentication scheme with a low cost method in the application [16]. This study can be extended to other areas such as recommender systems.
TABLE IV. RESULT FOR ANGLE DEGREE
TABLE V. PERFORMANCE TEST OF DISTANCE BETWEEN THE DEVICE AND THE QR CODE
TABLE VI. PERFORMANCE TEST OF BRIGHTNESS LEVEL OF THE DEVICE WHEN SCANNING THE QR CODE
TABLE VII. USER ACCEPTANCE TEST | 4,503.8 | 2018-01-01T00:00:00.000 | [
"Computer Science"
] |
Mathematical Modeling of Prediction of Horizontal Wells with Gravel Pack Combined with ICD in Bottom-Water Reservoirs
During the development of horizontal wells in bottom-water reservoirs, the strong heterogeneity of reservoir permeability leads to premature bottom-water breakthroughs at locations with high permeability in the horizontal wellbore, and the water content rises rapidly, which seriously affects production. To cope with this problem, a new technology has emerged in recent years that utilizes gravel filling to block the flow in the annulus between the horizontal well and the borehole and utilizes the Inflow Control Device (ICD) completion tool to carry out segmental water control in horizontal wells. Unlike conventional horizontal well ICD completions that use packers for segmentation, gravel packs combined with ICD completions break the original segmentation routine and increase the complexity of the production dynamic simulation. In this paper, the flow in different spatial dimensions, such as reservoirs, gravel-packed layers, ICD completion sections, and horizontal wellbores, is modeled separately. Furthermore, the annular pressures at different locations are used as the solution variable for the coupled solution, which realizes the prediction of oil production, water production, and the water content of gravel packs combined with ICD completion of horizontal wells. The model is used to calculate the effects of different crude oil viscosities, different reservoir permeabilities, different permeabilities of gravel-packed layers, and different development stages on the water control effects of gravel packs combined with ICD completions and conventional ICD completions under field conditions.
Introduction
In light of the recent strides in drilling and completion technologies, horizontal wells have emerged as the predominant well configuration employed for exploiting bottom-water reservoirs [1]. The elongation of horizontal well trajectories augments the interfacial expanse connecting the wellbore and the reservoir, thereby amplifying well productivity. However, concomitant with these enhancements, a plethora of challenges emerge. First, the protracted wellbore length engenders friction-induced pressure differentials within the horizontal conduit. Consequently, a discernible pressure deficit manifests at the "heel" segment relative to the "toe" counterpart, engendering disparately distributed inflows along the wellbore trajectory. Second, the reservoir's inherent heterogeneity bestows non-uniform fluid influx longitudinally within the horizontal wellbore, hastening premature incursions of bottom water. This influx disparity precipitates escalated aqueous encroachment, thereby engendering elevated aqueous content and a precipitous diminution in oil production rates [2].
The amalgamation of gravel pack and inflow control device (ICD) completion embodies a synergistic fusion of gravel-packing technology and the ICD completion strategy.This composite approach entails depositing gravel materials within the annular interstice between the ICD completion tubing and the circumferential borehole wall.The resultant configuration engenders an axial confinement, effecting a circumferential seal within the annular void.This seal exerts a localized impediment upon the ingress of fluid influx across discrete well segments.Functionally akin to deploying multiple packers, this integrated methodology assumes a multifaceted role, prominently encompassing the attenuation of inflow emanating from high-permeability strata.Additionally, it assumes the mantle of a selective flow regulator, culminating in a dualistic objective: controlling water encroachment while concurrently augmenting oil productivity [8].In the context of bottom-water reservoirs, the fusion of gravel packs and the inflow control device completion method extends its purview to encompass sand prevention endeavors.This inclusive methodology is a composite assemblage with a packing assembly, a blind tube, a screen tube string, and a double-stage filtering floating shoe.Of these components, the screen tube string emerges as a pivotal constituent, comprising a foundational base tube, a filtration screen element, a protective screen shield, and a water control apparatus.
ICD Types
To address this quandary, the adoption of inflow control devices (ICDs) is progressively gaining traction within horizontal wells situated in bottom-water reservoirs, offering a singular pathway toward attaining precision control and the optimization of subsurface hydrodynamics within an individual well or reservoir milieu. The underlying premise of effectuating ICD completion resides in the endeavor to orchestrate uniformity in the inflow traversing the longitudinal expanse of the horizontal wellbore, a feat facilitated via the judicious application of the choking phenomenon intrinsic to the ICD apparatus, thereby ameliorating the manifestations stemming from the oscillations between the "heel" and "toe" and permeability gradients [9]. It is imperative to underscore that deploying ICDs necessitates the meticulous consideration of inaugural reservoir parameters and assumes an immutability post-installation, precluding all subsequent adjustments or replacements.
The research on ICD completion technology began in the early 1990s. Norsk Hydro [10] first applied ICD completion technology to the Troll oilfield. Through monitoring and testing of horizontal well production, it was proved that the ICD water-controlled completion device effectively balanced the inflow profile of horizontal wells, effectively delayed the time of bottom water coning, and effectively improved the recovery rate. Since then, ICD completion technology has been widely used in foreign countries. Diverse variants of inflow control devices (ICDs) exist, each predicated on distinct mechanisms to induce the requisite pressure decrement concomitant with fluid flow. At present, Baker Hughes, Halliburton, Schlumberger, Weatherford, and other companies have developed different types of ICDs, which are divided into helical channel-type ICD, nozzle-type ICD, orifice-type ICD, and hybrid ICDs according to their internal structural characteristics [11]. Among these, the prevailing archetypes encompass the channel and nozzle configurations, prominently featured as two principal categories. While nuanced discrepancies in designs characterize these divergent ICD types, it is salient to underscore that their underlying operational tenets converge upon a shared foundational principle [12].
The channel-type inflow control device (ICD) is an inaugural manifestation within the pantheon of ICD categories, characterized by utilizing distinct channel lengths to modulate fluid dynamics [13]. Fundamentally rooted in its design, the channel-type ICD harnesses an extended conduit, thus engendering an augmented pressure differential consequent to fluid traversal. This orchestrated pressure dichotomy engenders a corresponding subdued flow velocity, mitigating the propensity for erosive and obstructive events. Nonetheless, concomitantly, in scenarios typified by heightened oil-water viscosity ratios, the emergent frictional interactions furnish a pronounced pressure differential variance, as shown in Figure 1. The flow pattern can be clearly seen in Figure 1, where fluid flows from the reservoir into the channel-type ICD and the wellbore through internal channels in the channel-type ICD. The nozzle-type inflow control device (ICD) constitutes an alternative category characterized by employing diminutive nozzles or orifices to effectuate a targeted pressure descent [14]. In stark contradistinction to the channel-type ICD archetype, the pressure drop of the nozzle-type variant is governed by the dynamic interplay of fluid density and velocity rather than being predominantly contingent upon viscosity. This design paradigm, notable for its conceptual simplicity and malleability, accommodates facile reconfiguration. However, it also manifests heightened vulnerability to abrasion stemming from sand particulates.
In addition, a range of ICD types to choose from increases the selection of completion techniques, encompassing the nozzle-channel hybrid ICD and mixed channel ICD, among others [15]. The mixed channel ICD adopts the principle of distributed step-by-step throttling, and a plurality of partitions is set in the internal structure to form a plurality of flow channels, thereby generating pressure drop. Compared with the nozzle-type ICD structure, the flow area through the flow channel is relatively large, so fluid erosion and blockage are greatly reduced.
Mathematical Method
The amalgamation of the gravel pack and inflow control device (ICD) completion methodology has hitherto manifested a partial implementation within the ambit of the South China Sea, yielding discernible outcomes. However, the comprehensive elucidation of this amalgam's efficacy remains delimited by a paucity of mathematical models proficiently encapsulating both the granular comportment of gravel packing and the intricate attributes inherent to ICD-driven water control completions. Presently, commercially available software platforms are amenable to the dynamic prognostication of water control completions within horizontal wells ensconced in bottom-water reservoirs, such as the Eclipse and Netool software suites. Eclipse software embodies a multifaceted framework engendering coupled simulations, encompassing both the fluid dynamics within horizontal wellbore conduits and the reservoir seepage phenomena, conjoined within the ambit of a segmented well mathematical model, as shown in Figure 2. To meet the variegated exigencies of water control completions, Eclipse software has burgeoned to encompass an augmented simulation functionality for an assorted array of ICD completion tools, affording users the prerogative of tailored tool selection. In particular, the labyrinth-type ICD and spiral channel-type ICD are denoted with the keywords WSEGLABY and WSEGSICD, respectively [16]. Conversely, the Netool software augments predictive capabilities by invoking a steady-state production model to unravel the reservoir inflow dynamics vis-à-vis the horizontal wellbore. This is further complemented with a multiphase flow model that effectively unravels the intricate nuances governing the variable mass flow within the horizontal wellbore. A network of nodes underpins the amalgamation of diverse flow paradigms, enabling an integrated solution. Facilitated by its nodal architecture, Netool extends an extensive repertoire of well completion simulations, encompassing open-hole configurations, perforated completions, water-controlled methodologies, gravel-packed implementations, and more [17]. Numerous investigations have been disseminated on the matter of water control within horizontal wells situated in bottom-water reservoirs, stratified mainly into analytical, semi-analytical, and numerical simulation paradigms. The analytical framework for comprehending water control completions in bottom-water reservoirs is predicated on a steady-state production-centric mathematical scaffold, distinguished for its expeditiousness and adaptability. Wang et al. delved into the inquiry of variable mass flow dynamics in the context of horizontally disposed wellbores, establishing an analytical foundation for comprehending the interplay between wellbore and reservoir. This study assesses the fluid production profile variations in horizontal wells, duly accounting for the mitigating influences engendered via ICD-based water control under steadiness conditions [18]. Similarly, Rao et al.
established an experimental simulation setup encapsulating dual porosity formations and wellbore dynamics and conceived an integrated model. Comparative investigations encompassing scenarios devoid of water control, alongside instances employing packers and ICDs, as well as gravels and ICDs, were undertaken. These analyses were underscored via a foundation of steady-state mathematical modeling, engendering a comprehensive perspective [19]. Meanwhile, the semi-analytical realm embodies a computational methodology, an outcome of fusing an analytical framework grounded in point-source solutions with an iterative-based numerical framework. This composite platform, endowed with the capacity to integrate considerations about permeability proximate to the wellbore, skin factor influences, and diverse water control tools, operational across heterogeneous well segments, furnishes a rapid avenue for the dynamic prognostication of horizontal or multi-lateral well behaviors. Ozkan et al. articulated a semi-analytical mathematical architecture underpinned in point-source solutions, encompassing reservoir-wellbore interplay, thus enunciating determinants influencing wellbore flow and pressure profiles, spanning the gamut from steadiness to dynamic conditions [20]. The tandem articulation of unsteady and steady-state solutions has been effectuated by Lian et al., wherein a novel integrated construct was devised catering to the nuanced particulars of fractured horizontal wells, invoking Green's functions and Newman's product principle. The resultant model, tailored to finite conductivity scenarios, converges via a combination of the quasi-Newton methodology and the Particle Swarm Optimization algorithm, thus encapsulating a holistic perspective [21]. Ouyang et al. scrutinized single-phase and multi-phase flow dynamics within horizontal wellbores, centrally addressing the quandary of pressure dissipation within such scenarios [22]. In a parallel endeavor, Zhang et al. elucidated a theoretical construct facilitating an optimal water control completion design predicated on the framework of source functions and a network model. This model, distinguished by its incorporation of parameters spanning well trajectory, heterogeneity, skin factor, and annulus flow considerations, embodies a comprehensive vista [23]. The realm of the reservoir numerical simulation entails the solution of the reservoir mass conservation equation, predicated on finite difference techniques, thereby simulating subsurface oil-water transport and prognosticating the spatiotemporal distribution of hydrocarbons within the reservoir at distinct junctures. While numerical simulation methods offer a versatile purview, they necessitate extensive data and computationally intensive processes. An et al., adopting a tripartite perspective spanning the reservoir, ICD, and the horizontal wellbore, undertook a pioneering endeavor. Their approach entailed the construction of a Jacobi matrix that interlinked pressure attributes across the three spatial scales, culminating in an integrated model for ICD-driven water control completions in horizontal wells, realized using a fully implicit solution approach [24]. In this paper, the ICD production prediction of a gravel-packed horizontal well in the bottom-water reservoir is realized by establishing a coupling model of flow in different dimensions, which innovatively adds the simulation of the gravel pack and forms the coupling model.
The combination of gravel packing and ICD completion is a new approach to water control in horizontal wells, and predictive methods for estimating its production capacity are still limited. To fill this gap while keeping the computation fast and simple, we propose a mathematical framework that couples flow in the bottom-water reservoir, the gravel pack, and the ICD, and resolves the flow in the horizontal wellbore with an iterative solution procedure.
Flow Modeling in Different Spatial Dimensions
During production, the two-phase oil-water flow in a bottom-water reservoir must overcome flow resistance at several spatial scales: the reservoir itself, the gravel-packed layer, the ICD completion segment, and the horizontal wellbore. Formulating flow models for each of these domains is therefore a prerequisite for the coupled model.
Bottom-Water Reservoir Flow Model
We combine semi-analytical and numerical simulation techniques to discretize the horizontal well and keep the computation fast. To simplify the derivation and highlight the main contributions of the model, inter-segment interference is neglected, so each horizontal segment is treated in isolation. Analytical expressions tailored to each segment are then used to solve its productivity equation, as shown in Figure 3. The upper reservoir surface is taken as a closed boundary and the lower surface as a constant-pressure boundary. The reservoir is treated as anisotropic, the flow is assumed to be at steady state, and capillary pressure effects are neglected.
To render tractable analysis, we approximate the intricate three-dimensional seepage field as two discrete two-dimensional counterparts: one operating in the vertical plane and the other in the horizontal plane.The ensuing evaluation furnishes distinct seepage resistances within the vertical and horizontal domains, harmoniously amalgamated to engender the production capacity equation governing a designated section of a submerged reservoir's horizontal wellbore [25].
where Q is the volume flow rate; pe is the reservoir pressure; pwf is the bottom hole pressure; Rh is the resistance to seepage in the horizontal plane; Rv is the resistance to seepage in the vertical plane; K is the permeability of the reservoir; h is the thickness of the reservoir; µo is the viscosity of the oil; Bo is the volume factor of the oil; kro is the relative permeability of the oil; µw is the viscosity of the water; Bw is the volume factor of the water; krw is the relative permeability of the water; a is the long half-axis of the elliptical drainage area; L is the length of the horizontal well; zw is the vertical position of the horizontal well; and rw is the radius of the horizontal well.
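To make the role of these quantities concrete, the following minimal sketch assembles a segment rate from the listed variables, assuming the productivity takes the common form Q = (pe − pwf)/(Rh + Rv) with the two-phase mobility folded into the seepage resistances. The exact expressions for Rh and Rv belong to the paper's equations (not reproduced here), so every function and number below is illustrative only.

```python
# Minimal sketch (not the paper's exact equation): assumes the segment
# productivity takes the common form Q = (pe - pwf) / (Rh + Rv), with the
# two-phase mobility term folded into the seepage resistances.

def total_mobility(kro, mu_o, b_o, krw, mu_w, b_w):
    """Two-phase mobility term: kro/(mu_o*Bo) + krw/(mu_w*Bw)."""
    return kro / (mu_o * b_o) + krw / (mu_w * b_w)

def segment_rate(pe, pwf, r_h, r_v):
    """Volume flow rate of one horizontal segment, assuming Q = dp / (Rh + Rv)."""
    return (pe - pwf) / (r_h + r_v)

# Illustrative numbers only; Rh and Rv would come from the paper's horizontal-
# and vertical-plane expressions (functions of K, h, a, L, zw, rw and the
# mobility above).
q = segment_rate(pe=20.0e6, pwf=17.0e6, r_h=4.0e8, r_v=2.5e8)  # -> m3/s
```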
ICD Flow Model
As noted above, several ICD variants exist, including channel-type, nozzle-type, and labyrinth-type ICDs. Studies have consistently reported the same qualitative behavior regardless of the specific ICD type. Because the pressure drop across an ICD cannot be expressed analytically, we describe it with an empirical formula. Flow experiments on the ICD give the relationship between pressure drop and flow rate, from which the coefficient K can be calibrated. The characteristic curve of the ICD is expressed by a formula in which ΔPICD is the pressure drop across the ICD, ρm is the density of the oil-water mixture, and K is the ICD coefficient obtained from experiments.
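Since the characteristic-curve formula itself is not reproduced in this text, the sketch below assumes the commonly used quadratic form ΔP_ICD = K·ρm·q² and shows how K might be calibrated from flow-loop measurements; the function names and the fitting step are illustrative, not necessarily the authors' exact procedure.

```python
def icd_pressure_drop(q, rho_m, k_icd):
    """Assumed quadratic ICD characteristic curve: dP_ICD = K * rho_m * q**2.
    q: volume flow rate through the ICD [m3/s]; rho_m: mixture density [kg/m3];
    k_icd: experimentally calibrated ICD coefficient."""
    return k_icd * rho_m * q ** 2

def fit_icd_coefficient(q_meas, dp_meas, rho_m):
    """Least-squares estimate of K from measured (flow rate, pressure drop) pairs."""
    x = [rho_m * q ** 2 for q in q_meas]
    return sum(xi * dpi for xi, dpi in zip(x, dp_meas)) / sum(xi ** 2 for xi in x)

# Example calibration with made-up flow-loop data:
k = fit_icd_coefficient(q_meas=[1e-4, 2e-4, 3e-4],
                        dp_meas=[0.2e5, 0.8e5, 1.8e5],
                        rho_m=950.0)
```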
Gravel-Packed Layers Flow Model
Upon the comprehensive imbuing of the annular cavity between the base pipe and the lateral wellbore wall with gravel, a distinctive scenario materializes, giving rise to a high-permeability domain orchestrated via the gravel's strategic placement within the axial extent of the horizontal borehole.Notably, manufacturer specifications indicate that ultralight gravels within the 20-40 mesh classification engender an exceptional permeability of up to 27.5 Darcy units, while their 40-60 mesh counterparts confer a commendable permeability of up to 17.7 Darcy units.Evidently, the augmentation in permeability ensuing gravel packing does not substantively engender a state of pipe flow within the overall annular expanse of the horizontal well.On the other hand, this annulus predominantly accommodates seepage.As such, the canonical Darcy's law is aptly invoked to underpin the formulation of the pertinent flow mathematical model.
where Δpwb is the pressure drop across the packed gravels, µm is the viscosity of the oil-water mixture, Aanu is the area of the annulus of the horizontal well, and L is the length of the horizontal well. According to our calculations, although the permeability of the gravel pack is much higher than that of the reservoir, the flow area of the annulus is small, so the resulting pressure drop is not small and cannot be ignored.
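A minimal sketch of the axial Darcy relation described here, assuming Δpwb = µm·q·L/(k·Aanu) in SI units; the gravel permeability is taken from the manufacturer figures quoted above, and all numeric values are illustrative.

```python
DARCY = 9.869233e-13  # m2 per darcy

def gravel_pack_pressure_drop(q, mu_m, k_gravel, a_anu, length):
    """Darcy pressure drop for axial seepage through the packed annulus (sketch):
    dp_wb = mu_m * q * length / (k_gravel * a_anu), SI units throughout."""
    return mu_m * q * length / (k_gravel * a_anu)

# Illustrative values; 27.5 D corresponds to the 20-40 mesh gravel quoted above.
dp_wb = gravel_pack_pressure_drop(q=2e-4, mu_m=5e-3,
                                  k_gravel=27.5 * DARCY,
                                  a_anu=0.012, length=10.0)
```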
Horizontal Wellbore Flow Model
The dynamics governing the biphasic flow of oil and water within the horizontal wellbore invariably elicit pressure differentials.These differentials emanate from an array of causative agents; for instance, the undulating trajectory of the horizontal section precipitates a gravitational pressure decrement, disparities in the smoothness of the wellbore wall or elevated fluid viscosity give rise to frictional losses, and alterations in the fluid flow rate within the wellbore introduce acceleration-induced pressure fluctuations.The cumulative effect of these influences imparts a non-uniform pressure distribution spanning the wellbore's trajectory, extending from its inception at the heel to its termination at the toe.In light of this intricacy, our approach is predicated on formulating distinct mathematical models, each circumscribing the distinct impact of gravity-induced pressure attenuation, friction-induced pressure diminution, and acceleration-induced pressure fluctuations.
In this study, it was assumed that the fluid within the wellbore behaves as a one-dimensional, isothermal, incompressible fluid, and the horizontal wellbore was divided into n small segments of equal length Δx.
(1) Gravity pressure drop. During oil-water two-phase flow in a horizontal wellbore, the pressure loss caused by wellbore undulation is expressed by a relation in which Δph is the pressure drop due to gravity, Δh is the vertical height difference of the wellbore between segments, and i is the index of the horizontal well segment.
(2) Friction pressure drop. The frictional pressure drop over each section of the horizontal well wall is given by an expression in which Δpf is the pressure drop due to friction, f is the friction factor, and Δx is the length of a wellbore segment.
(3) Acceleration pressure drop. The pressure drop due to the change in oil-water two-phase kinetic energy is expressed in terms of ΔPa, the pressure drop due to acceleration; m_in, the mass flow rate of the mixture; and A, the cross-sectional area of the wellbore in the horizontal segment.
During reservoir coupling, we ignore the acceleration pressure drop.
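The display equations for these three terms are not reproduced in this text, so the sketch below uses standard forms for each contribution (hydrostatic, Darcy-Weisbach friction, and kinetic-energy change); the wellbore diameter and the outlet mass rate are extra parameters assumed here for completeness.

```python
G = 9.81  # m/s2

def gravity_dp(rho_m, dh):
    """Hydrostatic loss from wellbore undulation: dp_h = rho_m * g * dh."""
    return rho_m * G * dh

def friction_dp(f, rho_m, v, dx, d_pipe):
    """Darcy-Weisbach friction loss (assumed form):
    dp_f = f * rho_m * v**2 * dx / (2 * d_pipe)."""
    return f * rho_m * v ** 2 * dx / (2.0 * d_pipe)

def acceleration_dp(m_in, m_out, rho_m, area):
    """Pressure change from the kinetic-energy change of the mixture between
    segment inlet and outlet (assumed form); ignored during reservoir coupling,
    as stated above."""
    return (m_out ** 2 - m_in ** 2) / (rho_m * area ** 2)
```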
Integrated Coupling Model
The comprehensive depiction of distinct flow models within diverse spatial dimensions coalesces around the intricate interplay connecting flow and pressure phenomena.Consequently, the crux of achieving a synergistic solution across the disparate flow fields resides in the astute identification of nexus points engendering the fusion of these domains.In this investigative pursuit, our focus is squarely fixed on the conjunctive articulation of bottom-water reservoir flow, the gravel-packing dynamics, and the underpinning influences exerted by the ICDs.Our methodology commences with the assimilation of these interconnected components, facilitating the determination of production rates contingent on the initial pressure distribution prevalent within the incipient horizontal well section.Subsequently, a judicious application of an iterative algorithm is harnessed to distill the precise pressure distribution pervading the horizontal section while concurrently discerning the concomitant production rate.
Assumption
The coupled model was established based on the following assumptions:
1. The bottom-water reservoir is of uniform thickness, with a closed top boundary and a bottom boundary driven by bottom water; flow obeys Darcy's law and capillary forces are neglected.
2. Reservoir permeability is heterogeneous but isotropic, and the near-well permeability corresponding to each horizontal well section is uniform.
3. The reservoir fluids form a two-phase oil-water flow; the fluids are incompressible, with constant, pressure-independent viscosity and volume factors.
4. The flow process is isothermal, with no heat exchange with the external environment.
5. Each horizontal well section produces independently and does not interfere with the others.
6. The density of the fluid flowing into the ICD is taken as the mixture density at 50% water content.
7. Only the axial resistance of the gravel-packed layer is considered; its radial resistance is neglected.
Model Coupling
In this study, a coupling method for the flow models of the different spatial dimensions is proposed based on the node analysis method, as shown in Figure 3. As seen in the figure, the reservoir fluid first enters the gravel-packed layer and then passes through the ICDs into the horizontal wellbore. Because the gravel-packed layer itself has a finite permeability, the fluid selects its entry route according to the difference in entry resistance. Following the flow path, the coupling is divided into four parts: reservoir flow, flow within the packed gravel, the water control tool, and the horizontal wellbore. The gravel pack effectively acts as an axial packer that divides the wellbore into stages, with a water control valve in each stage.
Taking the horizontal well in the bottom-water reservoir in the figure as an example, there are four water-control screen tubes, each fitted with one ICD, so the horizontal well can be regarded as divided into four segments. Each segment corresponds to one reservoir pressure, one gravel-packed layer pressure, and one bottom hole pressure, and the following equations can be written, where pwb is the gravel-packed layer pressure of each segment, pwf is the bottom hole pressure of each segment, and i is the segment number. We first treat the case of constant bottom hole pressure: an initial bottom hole pressure is assigned to each horizontal segment, the gravel-packed layer pressure of each segment is taken as the unknown, and the system is solved jointly to obtain these pressures. The pressure drop along the horizontal section is then calculated from the production rate of each segment, giving the flowing pressure of each segment at the current rate. This is compared with the initial bottom hole pressure; if the error is large, the process is repeated with the calculated bottom hole pressure as the new initial value until the error criterion is met. Repeating these steps for different bottom hole pressures yields the production at each pressure, from which the corresponding oil production, water production, and water content are obtained, as shown in Figure 4. Two points deserve mention. First, Equation (7) forms a nonlinear system of equations; linearization with the Newton-Raphson method is a practical way to obtain the solution. Second, when the operating condition is defined by a fixed production rate, a sequential strategy works well: production is first computed over a range of flowing pressures, and the corresponding bottom hole pressures are then obtained by inverse analysis, after which the method above yields the oil production rate, water production rate, and water content of the horizontal well.
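A schematic of the outer iteration loop described above, assuming the per-segment models are available as callables; `seg_models` and its methods are placeholders for the formulas in the preceding sections, not an implementation of the paper's Equation (7).

```python
import numpy as np

def solve_coupled_well(pe, pwf_init, seg_models, tol=1e3, max_iter=50):
    """Illustrative outer loop for the coupling procedure described above.
    pe, pwf_init : per-segment reservoir / initial bottom hole pressures, shape (n,)
    seg_models   : placeholder object bundling the reservoir, gravel-pack, ICD and
                   wellbore models for each segment (hypothetical interface)."""
    pwf = np.asarray(pwf_init, dtype=float)
    q = pwb = None
    for _ in range(max_iter):
        # 1) With pwf fixed, solve the nonlinear balance for the gravel-packed
        #    layer pressures pwb, e.g. by Newton-Raphson.
        pwb = seg_models.solve_gravel_pack_pressures(pe, pwf)
        # 2) Segment production rates follow from reservoir inflow at (pe, pwb).
        q = seg_models.segment_rates(pe, pwb)
        # 3) Re-march the wellbore pressure profile from heel to toe at these rates.
        pwf_new = seg_models.wellbore_pressure_profile(q, pwf_heel=pwf[0])
        if np.max(np.abs(pwf_new - pwf)) < tol:  # convergence tolerance in Pa
            return q, pwb, pwf_new
        pwf = pwf_new
    return q, pwb, pwf  # last iterate if the tolerance was not reached
```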
Case Study
Utilizing a representative horizontal well within a bottom-water reservoir as a pivotal case study, we have orchestrated the employment of a coupling model emblematic of horizontal wells seamlessly integrating gravel packing and inflow control device (ICD) completions in the context of bottom-water reservoirs.This paradigmatic construct has been harnessed as the fulcrum for our comprehensive computational endeavors.Within this investigative ambit, we have carried out an intricate array of sensitivity analyses, systematically probing the nuanced ramifications stemming from diverse oil viscosities, reservoir permeabilities, gravel-packed layer permeabilities, and water saturations at distinctive production stages.This systematic exploration casts an illuminating spotlight on the efficacy underpinning water control measures.A comprehensive juxtaposition of strategies, including gravel pack combined with ICD completions, conventional ICD completions, and traditional screen tube completions, has been rigorously conducted.Inherently, the horizontal wells probed herein exhibit an extended length of 500 m, with water control production aptly governed via nozzle-type ICDs.The horizontal well configuration is thoughtfully segmented into 50 discrete sections, undergirded via meticulous alignment with ICD design parameters, grounded in the horizontal well permeability profile.Moreover, conventional ICD completions have been adroitly applied, featuring the imposition of dual packers to effectively seal the horizontal well conduit, as shown in Figure 5. Table 1 illustrates the basic parameters.
Oil Viscosity
Oil viscosity is a key factor in the recovery performance of bottom-water reservoirs. For horizontal wells completed with a gravel pack combined with ICDs, higher oil viscosity requires a larger pressure drop to sustain the same output, which lowers the bottom hole pressure and increases the pressure differences between different locations within the packed gravels; this in turn promotes flow between segments within the gravel-packed layer. The water control performance under different oil viscosities was analyzed with the coupled model using the configuration described above, and the results are shown in Figure 6. The figure shows that both the gravel pack combined with ICDs and conventional ICDs achieve clearly better water control in bottom-water reservoirs than conventional screen tube completions. As oil viscosity increases, the water content rises gradually for all completion types and the water control effect weakens. The water content of the gravel-pack-combined ICDs rises faster than that of conventional ICDs, and once the viscosity of the subsurface crude exceeds 160 mPa·s, the water control performance of the gravel-pack-combined ICDs falls below that of conventional ICD completions.
Reservoir Permeability
Reservoir permeability is another key factor in the development of bottom-water reservoirs. Higher permeability yields higher production rates with smaller pressure drops. Within the packed gravels, the pressure differences are then smaller and the flow rates through the gravels under these small gradients are lower, which strengthens the blocking function of the gravels. Using the coupled model, we examined the effect of different reservoir permeabilities; the results are summarized in Figure 7. As reservoir permeability increases, the water content of both the gravel-pack-combined ICDs and conventional ICDs decreases, and the water control performance of the well improves markedly relative to conventional screen tube completions. However, when the permeability drops below 200 mD, the effectiveness of the gravel-pack-combined ICD completion gradually declines relative to the conventional ICD completion.
Gravel-Packed Layer Permeability
The permeability of the gravel-packed layer hinges upon both the gravel composition and the degree of packing.In particular, a discernible inverse correlation manifests between the mesh number of the gravels and the resultant permeability of the gravel-packed stratum.Finer gravels, typified by higher mesh numbers, invariably yield lower permeability within the gravel-packed layer.This phenomenon aligns with a prevailing trend, wherein, given an equivalent pressure drop, diminished fluid flow rates via the gravelpacked stratum conduce to a more efficacious sealing effect.The orchestrated evaluation of water control outcomes across varied gravel-packed layer permeabilities, facilitated in our interlinked model, unfolds with clarity via the presentation of findings depicted in Figure 8.
The figure shows the relationship between gravel-packed layer permeability and the resulting performance. The water content of the screen tube and conventional ICD completions remains essentially constant across the permeability range. In contrast, the water content of the gravel-pack-combined ICDs increases with gravel-pack permeability, and the water control effectiveness declines correspondingly. This trend steepens once the permeability of the gravel-packed layer exceeds 40 D, at which point the water control of the gravel-pack-combined ICDs deteriorates substantially and falls markedly behind that of conventional ICDs.
Production Stage
We use numerical simulation models to extract characteristic water saturation parameters. In the early production stage of horizontal wells in the bottom-water reservoir, the water saturation of the reservoir is relatively low. In the middle production stage, water cones begin to appear where the permeability is high. In the late production stage, most of the horizontal well lies in the high-water-saturation area. The saturation distribution along the horizontal well at different stages is shown in Figure 9. The water control effects at different production stages were then calculated with the coupled model; the results are shown in Figure 10.
The figure clearly shows how the water control effectiveness evolves across the production stages. During the initial production phase, the gravel pack combined with ICDs clearly outperforms conventional ICDs. In the intermediate stage, this advantage persists, but the gap gradually narrows. In the late stage, the ranking reverses: the water control effectiveness of conventional ICDs exceeds that of the gravel pack combined with ICDs, although the latter still outperforms screen tube completions. This behavior stems mainly from the late-stage production dynamics: to maintain output, the pressure drop is increased, which degrades the sealing effectiveness of the packed gravels and produces the observed variations.
Conclusions
In this paper, we study a new completion for horizontal wells in bottom-water reservoirs: a gravel pack combined with inflow control devices (ICDs). This water control completion makes the flow patterns of horizontal wells in such reservoirs considerably more complex. To predict its behavior, we formulate separate mathematical models for flow in the bottom-water reservoir, the ICD completion, the gravel-packed layer, and the horizontal wellbore, and combine them across spatial dimensions with a coupling approach; the resulting solutions clarify the interactions among these flows. We examine the influence of oil viscosity, reservoir permeability, gravel-packed layer permeability, and production stage. The results show that the proposed model is well suited to predicting the performance of gravel pack combined with ICD completions in horizontal wells in bottom-water reservoirs, and that it is fast and adaptable. The main innovation is the inclusion of the gravel pack in the simulation: ICD production prediction for gravel-packed horizontal wells in bottom-water reservoirs is achieved by establishing a coupled model of flow across different dimensions. The current model, however, only predicts the water control effect under static conditions and cannot predict production dynamics. In future work, we will extend the model to predict the production performance of gravel-packed horizontal wells with ICDs in bottom-water reservoirs.
Figure 2. Schematic diagram of the multi-segment method.
Figure 3. Schematic diagram of horizontal well with gravel pack combined with ICDs.
Figure 4. Flow chart for solving the coupled model.
Figure 5. Flow chart for solving the coupled model.
Figure 6. Comparison of water control effects of different oil viscosities.
Figure 7. Comparison of water control effects of different reservoir permeabilities.
Figure 8. Comparison of water control effects of different gravel-packed layer permeabilities.
Figure 9. Distribution of saturation at different production stages.
Figure 10. Comparison of water control effects of different production stages.
| 8,009.4 | 2023-09-17T00:00:00.000 | [ "Geology" ] |
Molecular mechanism of the wake-promoting agent TAK-925
The OX2 orexin receptor (OX2R) is a highly expressed G protein-coupled receptor (GPCR) in the brain that regulates wakefulness and circadian rhythms in humans. Antagonism of OX2R is a proven therapeutic strategy for insomnia drugs, and agonism of OX2R is a potentially powerful approach for narcolepsy type 1, which is characterized by the death of orexinergic neurons. Until recently, agonism of OX2R had been considered ‘undruggable.’ We harness cryo-electron microscopy of OX2R-G protein complexes to determine how the first clinically tested OX2R agonist TAK-925 can activate OX2R in a highly selective manner. Two structures of TAK-925-bound OX2R with either a Gq mimetic or Gi reveal that TAK-925 binds at the same site occupied by antagonists, yet interacts with the transmembrane helices to trigger activating microswitches. Our structural and mutagenesis data show that TAK-925’s selectivity is mediated by subtle differences between OX1 and OX2 receptor subtypes at the orthosteric pocket. Finally, differences in the polarity of interactions at the G protein binding interfaces help to rationalize OX2R’s coupling selectivity for Gq signaling. The mechanisms of TAK-925’s binding, activation, and selectivity presented herein will aid in understanding the efficacy of small molecule OX2R agonists for narcolepsy and other circadian disorders.
Orexin signaling in the brain is the primary mechanism connecting diurnal circadian rhythms to arousal and wakefulness. Orexin A and B are neuropeptides (33 and 28 amino acids respectively, also known as hypocretins) derived from the preproorexin precursor and produced by a small set of dedicated neurons in the lateral hypothalamus, which stimulate release of neurotransmitters in diverse brain regions to promote wakefulness 1 . Orexin release is controlled by the circadian clock, and levels of orexin neuropeptides in mammalian cerebrospinal fluid follow a 24-hour cycle, peaking during the wake period 2,3 . The orexin peptides act by binding to the GPCRs OX1R and OX2R on target neurons to activate cellular signaling by G proteins and arrestins, particularly stimulating Gq/11-mediated release of calcium from the endoplasmic reticulum 4 .
The human disorder narcolepsy type 1 is characterized by a deficiency of orexin signaling, resulting in a pentad of symptoms including excessive daytime sleepiness, disturbed nighttime sleep, hypnagogic/hypnopompic hallucinations, sleep paralysis, and cataplexy, a sudden loss of muscle tone triggered by strong emotions 5 . Narcolepsy type 1 affects approximately 100,000 adults in the U.S 6 ., and up to 3 million patients may be affected by the disorder worldwide. While the etiology of this disorder varies, the proximal cause of narcolepsy type 1 is the death of orexin neurons 7 . The resulting loss of orexin expression in the narcoleptic brain has been validated in human clinical studies 8 , and the causal relationship between orexin deficiency and the narcolepsy phenotype has been validated using preproorexin knockout mice and targeted killing of the orexin neurons 9,10 .
Experiments in canine and mouse models also established that orexin control of circadian rhythms and wakefulness occurs predominantly through the actions of OX 2 R 11,12 , while signaling through OX 1 R is critical for stimulation of reward pathways 13 . These discoveries paved the way for small molecule drug discovery efforts targeting the orexin receptors in sleep/wake disorders, culminating in FDA approval of the dual orexin receptor antagonist (DORA) suvorexant (Belsomra®) for insomnia in 2014 14 . Numerous small molecule chemotypes have been developed as DORAs or OX 2 R-selective antagonists 15 , however the discovery of small molecule OX 2 R agonists has lagged far behind, with very few potent lead compounds reported in the primary literature 16 or patents. A breakthrough in this area was recently achieved with the small molecule TAK-925 17 , which displays low-nanomolar potency for OX 2 R activation and >5000-fold selectivity for OX 2 R over OX 1 R 18 . In animal studies, subcutaneous delivery of TAK-925 during the sleep period results in a dose-dependent increase in wakefulness and reduction in sleep time. TAK-925 is currently being explored as an orexin agonist in humans with narcolepsy, with multiple trials ongoing.
Insights into the molecular basis for orexin receptor activation and inhibition have come from structural studies of OX 1 R and OX 2 R bound to different ligands. The first structure of OX 2 R with suvorexant revealed that the drug binds in a narrow membraneembedded pocket analogous to the classic orthosteric site of adrenaline and beta-blocker binding in the β 2 -adrenergic receptor 19 . Structures of OX 1 R and OX 2 R with other antagonists showed that these compounds invariably bind to the same suvorexant site [20][21][22] . However, mutagenesis studies of the orexin receptors and other neuropeptide-activated GPCRs showed that interactions of the peptide agonists with the solvent-exposed extracellular loops (notably ECL2) are required to trigger full activation 23 , possibly underlying the difficulty in identifying small molecule agonist drug candidates. A recent cryo-EM study showed how a small molecule could mimic orexin B (OxB) to stabilize an active conformation of OX 2 R bound to a G protein 24 .
In this work, we use cryo-EM of different OX 2 R-G protein complexes and associated pharmacological studies to understand the molecular mechanism of the drug candidate TAK-925. These structural and functional data reveal how TAK-925 activates OX 2 R with high potency and subtype selectivity, and with a preference for signaling through G q .
Results
Structure determination of TAK-925-bound OX 2 R coupled to G q and G i1 . To capture active structures of OX2R bound to TAK-925, we purified complexes of the receptor together with heterotrimeric G proteins. Using recombinant protein from Sf9 insect cells, we elucidated the cryo-EM structure of OX 2 R coupled to a mini-G αs/q/iN /G β /G γ heterotrimer (referred to as OX 2 R-mG sqiN in this manuscript) in lauryl maltose neopentyl glycol (LMNG) micelles at 3.3 Å resolution ( Supplementary Figs. 1a, b, and 2), serving as a model for the G q signaling complex leading to calcium release. In parallel, we solved the cryo-EM structure of OX 2 R coupled to a DNG αi1 /G β /G γ heterotrimer (referred to as OX 2 R-G i1 ) in digitonin at 3.2 Å resolution ( Supplementary Figs. 1c, d, and 3), representing a G i signaling complex. Both cryo-EM efforts took advantage of scFv16 binding 25 , requiring generation of a modified mini-G s/q 71 26 construct capable of binding the antibody fragment. The global cryo-EM envelope, atomic model, and density for TAK-925 in OX 2 R-mG sqiN are shown in Fig. 1a-c, while the analogous data for OX 2 R-G i1 are in Fig. 1d-f. Differences between OX 2 R models in these two structures are greatest at the G protein interface, although the models of the agonist-bound receptor are generally in close agreement, with a root mean square deviation (rmsd) of 0.7 Å. Both cryo-EM maps have well-defined sidechain density in the transmembrane region ( Supplementary Fig. 4), and allowed placement and refinement of TAK-925 (Fig. 1c, f). The stability of the modeled ligand conformation, particular regarding the saturated rings, was validated by quantum chemistry calculations ( Supplementary Fig. 5).
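As a side note on the rmsd comparisons quoted here and in the sections below, a minimal Kabsch-superposition sketch is shown; it assumes two coordinate arrays containing the same atoms in the same order, which in practice requires chain and residue matching first.

```python
import numpy as np

def kabsch_rmsd(coords_a, coords_b):
    """RMSD after optimal rigid superposition (Kabsch algorithm) for two (N, 3)
    arrays containing the same atoms in the same order."""
    a = coords_a - coords_a.mean(axis=0)
    b = coords_b - coords_b.mean(axis=0)
    u, s, vt = np.linalg.svd(a.T @ b)
    d = np.diag([1.0, 1.0, np.sign(np.linalg.det(u @ vt))])  # avoid reflection
    rotation = u @ d @ vt
    diff = a @ rotation - b
    return float(np.sqrt((diff ** 2).sum() / len(a)))
```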
Binding of TAK-925 to OX 2 R. The molecular interactions between TAK-925 and OX 2 R are highly similar in the mG sqiN and G i1 complexes (Fig. 2a), which solidifies the interpretation of the details of these interfaces. The cryo-EM density for the ligand binding pockets in the OX 2 R-mG sqiN and OX 2 R-G i1 complexes are shown in Supplementary Fig. 6a and b, respectively. TAK-925 adopts a compact, U-shaped conformation, contacting OX 2 R residues on TM2, TM3, TM5, TM6 and TM7 and burying 460 Å 2 of solvent-accessible surface area. The methyl carbamate and sulfonamide 'arms' of TAK-925 extend toward polar residues on either side of the orthosteric pocket, the latter group engaging in a hydrogen bond with Gln134 3.32 , while the phenyl-cyclohexane 'tail' projects deeper into the transmembrane region contacting Val138 3.36 , Phe227 5.42 , and Ile320 6.51 . TAK-925 and suvorexant share an overlapping binding site in OX 2 R despite having opposing ligand efficacy (Fig. 2b). The diarylsulfonamide agonist Compound 1, reported in a previous cryo-EM structure 24 , shares elements of the TAK-925 pocket, but also extends further toward the extracellular loops of the receptor. Modulation of the position of Gln134 3.32 was previously observed in the OX 2 R-Compound 1 and OX 2 R-OxB complexes, and we similarly find that Gln134 3.32 undergoes the largest conformational switch of the orthosteric pocket residues with TAK-925 bound, compared to the suvorexant-bound inactive state (Fig. 2c). Despite their very different chemotypes, TAK-925 and Compound 1 both have sulfonamides at similar positions in the active OX 2 R complexes (Fig. 2b), which interact with Gln134 3.32 and promote a shifted position of TM3 (Fig. 2c). The overall structure of active OX 2 R with TAK-925 is similar to active OX 2 R with Compound 1, with an rmsd of 0.6 Å, despite the use of a stabilizing extracellular nanobody in the Compound 1 complex.
To further establish that the conformational switch seen for Gln134 3.32 is important for receptor activation, we carried out mutagenesis and measured G q signaling in an inositol phosphate (IP) accumulation assay (Fig. 2d). The N324 6.55 A mutant retains a TAK-925 EC 50 (18 nM) close to the wild-type receptor (7.5 nM) in this assay (Fig. 2d, top panel), although this mutant has a lower E max likely due to lower plasma membrane expression (Supplementary Fig. 7). In the structure of OX 2 R-G i1 , N324 6.55 hydrogen-bonds to TAK-925's carbonyl group with a distance of 2.7 Å between heteroatoms, however in OX 2 R-mG sqiN this distance is 3.3 Å. The non-ideal hydrogen bond between the ligand and OX 2 R helps explain why this contact is largely dispensable for TAK-925 signaling through G q (Fig. 3a). Unlike TAK-925, OxB fails to activate the N324 6.55 A mutant (Fig. 2d, bottom panel), and the OX 2 R-OxB-G q complex 24 previously showed that the neuropeptide residue Thr27 makes a specific hydrogen bond to Asn324 6.55 . This polar residue was also previously demonstrated to be essential for antagonist binding and inhibition of OX 2 R in structural 19,20 and mutagenesis 27 studies. The residue Thr111 2.61 interacts with the ligand in the OX 2 R-mG sqiN structure such that the sidechain oxygen is 3.8 Å from TAK-925's sulfonamide nitrogen. While the structure does not indicate a hydrogen bond between these groups, we found that mutation of this residue to alanine led to significantly reduced potency of the agonist (Fig. 2d, top panel, EC 50 2.1 μM). This loss of activity could be due to solvent-mediated effects, as discussed below. Finally, we carried out β-arrestin recruitment assays for a panel of OX 2 R mutants, and found that the residues important for TAK-925 activation of this pathway agree with the interactions that TAK-925 makes with the active OX 2 R orthosteric pocket in the cryo-EM structures (Supplementary Fig. 8, Supplementary Tables 3 and 4). Notably, large reductions in β-arrestin recruitment by TAK-925 were measured for residues Gln134 3.32 as well as Tyr317 6.48 at the base of the orthosteric pocket, both of which undergo changes from the inactive to active conformations.
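For readers who want to reproduce this kind of EC 50 estimate outside GraphPad, the sketch below fits the same three-parameter log(agonist)-vs-response model (fixed Hill slope of 1) with SciPy; the concentration-response values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_param_logistic(log_conc, bottom, top, log_ec50):
    """GraphPad-style 'log(agonist) vs. response (three parameters)' model,
    i.e. a sigmoid with Hill slope fixed at 1."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_ec50 - log_conc))

# Illustrative data: log10 molar agonist concentrations and % of WT Emax
log_c = np.array([-11, -10, -9, -8.5, -8, -7.5, -7, -6])
resp = np.array([2, 5, 25, 45, 65, 82, 95, 100])

popt, _ = curve_fit(three_param_logistic, log_c, resp, p0=[0.0, 100.0, -8.0])
ec50_nM = 10.0 ** popt[2] * 1e9
print(f"EC50 ~ {ec50_nM:.1f} nM")
```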
From ligand structure-activity relationship (SAR) studies 17 , we found that critical determinants of TAK-925's potency include the stereochemistry of substituents on the cyclohexane and piperidine rings, which enforces TAK-925's stable U-shaped conformation seen in our structures (Fig. 1c, f, Supplementary Fig. 5). Furthermore, the sulfonamide of the ligand is essential for potency, and this group forms a tight polar interaction with the important residue Gln134 3.32 in the active state (Fig. 2a, c). The SAR around the phenyl ring is narrow and replacement with aliphatic groups results in a reduction in potency, which is consistent with the phenyl-cyclohexyl tail of TAK-925 projecting into the deep hydrophobic pocket lined by nonpolar residues from TMs 3, 5, and 6.
Selectivity of TAK-925 for OX 2 R over OX 1 R. We have previously shown that TAK-925 has >5000 fold selectivity for OX 2 R over OX 1 R in calcium mobilization assays 18 . Our complex structures with TAK-925 show that, as with the DORA suvorexant 19,20 , the agonist contact sphere is highly conserved in OX 1 R, with only two substitutions: Thr111 2.61 →Ser and Thr135 3.33 →Ala (Fig. 3a). It is possible that selectivity is enforced locally by these small differences in the respective binding pockets, however selectivity could also be the result of longerrange allosteric differences. To distinguish these possibilities, we carried out subtype swap experiments similar to what we previously performed for OX 1 R-and OX 2 R-selective antagonists 20 . When we mutated the two divergent binding pocket residues in OX 1 R to the corresponding amino acid in OX 2 R and carried out IP accumulation assays, we observed saturable activation by TAK-925 (EC 50 = 300 nM versus >100 μM non-saturating for wild-type OX 1 R). The two single mutants showed a partial increase (left-ward shift) in potency, indicating that both positions contribute towards selectivity (Fig. 3b). Conversely, mutation of these two positions in OX 2 R to the corresponding amino acids in OX 1 R leads to a reduction (right-ward shift) in potency, and the double mutant results in EC 50 3.6 μM versus 7.5 nM for wild-type OX 2 R (Fig. 4c). We also found that when OX 2 R Thr111 2.61 is mutated to alanine (instead of serine), receptor activation by TAK-925 is diminished ( Fig. 2d top panel, EC 50 2.1 μM). Collectively, these data indicate that the high selectivity of TAK-925 for OX 2 R is largely caused by differences in the deep orthosteric pockets of the two receptor subtypes, rather than in the more divergent extracellular loops. If the active OX 1 R adopted a different conformation at the deep orthosteric pocket where TAK-925 binds, it would not be possible to rescue signaling with the S103T/A127T double mutant. Likewise, this result indicates that the allosteric pathway of activation for OX 1 R can be turned on by agonist binding at this site, and rules out the possibility that TAK-925 has minimal potency at OX 1 R due to differences in the allosteric transmission mechanism.
TAK-925 activation of OX 2 R. How does TAK-925 stabilize the active conformation of OX 2 R? Compared to the inactive state, binding of TAK-925 causes a rotation in TM3 at the orthosteric pocket with sidechains moving towards TM2, including the shift of Gln134 3.32 described above. The 'pull' on TM3 occurs together with an opposing downward 'push' on Tyr317 6.48 induced by the deeper projection (relative to suvorexant) of TAK-925's phenylcyclohexyl tail against TMs 3 and 6 ( Fig. 4a). This conformational change is disfavored by suvorexant due to the position of its benzoxazole, which packs against TM3 and prevents its rotation. A downward shift in the aromatic residue at position 6.48 (Ballesteros-Weinstein numbering 28 ) is observed in several active GPCRs, and has been suggested to initialize the hallmark outward rotation of TM6 during GPCR activation 29 . The signal is then further propagated down the receptor, where the 'PIF switch' (PVF in OX 2 R) 30 adopts an active conformation (Fig. 4b), and the vacancy left by TM6's outward movement is filled by the inward movement of TM7 toward TM3. In particular, the microswitch residue Tyr364 7.53 moves~4 Å inward (Fig. 4c) and sits between residues Leu145 3.43 , Ile148 3.46 , and Arg152 3.50 , while another microswitch residue Leu306 6.37 moves~5 Å outward as a result of the repositioning of Arg152 3.50 . In parallel with the TM3-TM6-TM7 repacking, the ionic interaction between Asp151 3.49 and Arg152 3.50 is broken, and the cytosolic side of the receptor opens to accommodate the G protein α5 helix (Fig. 4d). These latter conformational changes are also seen in other GPCRs 31 .
Fig. 2 Binding of TAK-925 to OX 2 R. a Overlay of contact residues (sticks with purple carbons for mG sqiN -coupled OX 2 R and sticks with blue carbons for G i1 -coupled OX 2 R) within 4 Å of TAK-925 (yellow carbons) when superimposing OX 2 R-mG sqiN and OX 2 R-G i1 . The OX 2 R backbone (silver) is from OX 2 R-mG sqiN . The hydrogen bond from Gln134 3.32 to TAK-925's sulfonamide is not shown because this residue is behind the ligand from this viewpoint (same as in Fig. 2c). b Overlay of TAK-925 (sticks with yellow carbons), Compound 1 (sticks with orange carbons) and suvorexant (sticks with cyan carbons) when superimposing the OX 2 R polypeptides from the OX 2 R-mG sqiN complex (this work), the OX 2 R-mini-G sqi complex (PDB 7L1V) and the antagonist-bound inactive conformation (PDB 4S0V). c Overlay of contact residues within 4 Å of TAK-925 when superimposing OX 2 R-mG sqiN (this work, magenta), OX 2 R-mini-G sqi /Compound 1 (PDB 7L1V, orange) and the suvorexant-bound inactive conformation of OX 2 R (PDB 4S0V, cyan). TAK-925 is shown as transparent spheres. d Stimulation of G q by OX 2 R wild type (WT) and mutants when bound to TAK-925 (top) and orexin B (bottom). Each data point represents an average from n ≥ 3 independent experiments (each performed in duplicate), where n is shown in Supplementary Table 2. Error bars are ±SD. Data were normalized to the WT E max and fitted to the three-parameter model 'log(agonist) vs response' in GraphPad Prism 9. Source data are provided as a Source Data file.
OX 2 R-G protein interaction. A major signaling cascade activated by the orexin neuropeptides in neurons is G q -mediated calcium release 32 . We were able to characterize TAK-925-bound samples of active OX 2 R with both mG sqiN (a G q mimetic) and G i1 , and are thus able to compare the interfaces between these complexes. The cryo-EM densities for the receptor-G protein interfaces in the OX 2 R-mG sqiN and OX 2 R-G i1 complexes are shown in Supplementary Fig. 6c and d, respectively. A majority of the contacts occur between OX 2 R and the C-terminal α5 helix of each G protein (Fig. 5a, b), as seen in previous activated receptor complexes 25,33 . The OX 2 R-mG sqiN interface has features similar to previously characterized GPCR-G q/11 complexes 34-37 . In particular, the end of the C-terminal α5 helix forms a 'hook' that packs against the TM7-Helix8 junction, and the sidechain of Tyr235 H5.23 from this hook extends across the center of the interface to form a hydrogen bond with ICL2 of the receptor (Ser164 ICL2 in OX 2 R, Figs. 4d and 5a). A similar interaction has been observed in other GPCR-G q complexes 36,37 , and Tyr235 H5.23 is not conserved in G i . The overall conformation of the OX 2 R-G i1 interface is similar to that of OX 2 R-mG sqiN (rmsd for receptor and Gα subunits together equals 1.3 Å). However, the OX 2 R-mG sqiN interface features more extensive contacts between OX 2 R and mG αsqiN , resulting in burial of more surface area (1039 Å 2 versus 814 Å 2 ). Notably, the interaction of OX 2 R with the G i1 α5 helix is predominantly mediated by hydrophobic contacts, and G i1 makes only two polar contacts to the receptor's polypeptide backbone (Fig. 5b). In contrast, active OX 2 R makes 7 specific polar interactions with the mG αsqiN α5 helix, including from Ser164 ICL2 to Tyr235 H5.23 and from Thr302 6.33 to the mG αsqiN C-terminus (Fig. 5a, Supplementary Figs. 9 and 10). The backbone amide of Phe371 8.50 also hydrogen-bonds to Asn244 H5.24 of mG αsqiN (common G α numbering system 38 ) (Fig. 5a), and this OX 2 R residue undergoes a downward shift during activation concomitant with the upward movement of Tyr364 7.53 (Fig. 4c).
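A hedged sketch of how "contacts within 4 Å" between the receptor and the Gα α5 helix can be enumerated from a deposited coordinate file with Biopython; the file name and chain identifiers are assumptions and must be adapted to the actual model.

```python
from Bio.PDB import MMCIFParser, NeighborSearch

def interface_contacts(path, receptor_chain="R", galpha_chain="A", cutoff=4.0):
    """List atom pairs within `cutoff` angstroms between two chains
    (sketch; chain IDs depend on the deposited model)."""
    model = MMCIFParser(QUIET=True).get_structure("cplx", path)[0]
    rec_atoms = [a for a in model[receptor_chain].get_atoms() if a.element != "H"]
    g_atoms = [a for a in model[galpha_chain].get_atoms() if a.element != "H"]
    search = NeighborSearch(g_atoms)
    return [(a, b) for a in rec_atoms for b in search.search(a.coord, cutoff)]

# e.g. contacts = interface_contacts("ox2r_gq_complex.cif")  # hypothetical file
```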
To confirm whether OX 2 R can signal effectively through both G protein complexes, we carried out G q and G i signaling assays in HEK293 cells transfected with OX 2 R and G proteins. Both orexin B and TAK-925 showed poor ability to stimulate G i1 as measured by reduction in cAMP levels after forskolin treatment (Fig. 5d). In contrast, both orexin B and TAK-925 induced robust G q -mediated IP accumulation with EC 50 values consistent with previous studies (8.2 nM and 8.1 nM, respectively) (Fig. 5c).
Discussion
TAK-925 can fully activate OX 2 R despite its low molecular weight and non-peptidic structure. Earlier mutagenesis studies showed that residues on the ECL2 β-hairpin of OX 2 R are important for activation by OxB 23 , and we previously found that an α-helix preceding TM1 is involved in orexin binding 20 , which helps to position the neuropeptide C-terminal region to interact with the membrane-embedded site including N324 6.55 . These findings made it difficult to imagine how a drug-like small molecule could recapitulate the interactions needed to activate OX 2 R. Our present structures of active OX 2 R with TAK-925, along with the recent structure with Compound 1 24 , explain how a low-MW drug can activate the receptor (Fig. 5d) without fully mimicking the orexin neuropeptide. Indeed, TAK-925 binds in the same deep membrane-embedded site as the inhibitor suvorexant (Fig. 2a, b), but is able to engage in concerted interactions with the receptor that stabilize the active conformation (Figs. 2c and 4a). The deep burial of TAK-925 at the membrane-embedded site also facilitates high affinity and potency, which the orexin neuropeptides can only achieve through multivalent contacts analogous to Class B GPCR/peptide interactions 39 . The mechanism of TAK-925 activation of OX 2 R sets a precedent that small molecule drug-like full agonists are possible even for the most challenging peptide-activated GPCR targets. Subtype selectivity remains a major challenge in GPCR drug development, both for agonists and antagonists 40,41 . TAK-925 has a high degree of OX 2 R selectivity, and this profile may be important to avoid OX 1 R activation of brain reward pathways associated with addiction 13 , while maintaining desirable sleep/wake effects. A typical strategy for achieving GPCR ligand selectivity is to exploit subpockets of the orthosteric site that are not conserved between subtypes, or to focus on more highly divergent allosteric sites 42 . Our structures of TAK-925-bound active OX 2 R and functional data (Fig. 3) show that subtype selectivity can be achieved even when the binding pockets are extremely highly conserved (19 out of 21 residues within 5 Å of TAK-925 are identical, with only Thr→Ser and Thr→Ala substitutions). How then does TAK-925 bind so much more weakly to OX 1 R? As with OX 2 R-selective antagonists such as EMPA 21 , the contact sphere surrounding the ligand does not have to be very different between subtypes. Instead, we propose that small cavities resulting from the two smaller amino acids in a potential OX 1 R/TAK-925 complex lead to subtly diminished steric complementarity and less favorable desolvation upon surface burial 43 , with a large effect on binding free energy. These cavities are filled in the OX 1 R S103 2.61 T/A127 3.33 T double swap mutant, allowing for activation of this receptor by TAK-925.
Fig. 4 (continued): c Rewiring of micro switches on the intracellular side of OX 2 R when bound to agonist, comparing the active conformation (purple sticks, gray cartoon) and the inactive conformation (cyan sticks). TAK-925 is shown as yellow sticks. d Conformational changes of the DRY motif when coupled to G protein, comparing the active conformation (gray cartoon and purple sticks) and the inactive conformation (cyan sticks). The H5 helices of G q and G i are shown as orange and blue cartoon, respectively. Hydrogen bonds are shown as dotted lines.
Other selective small molecule OX 2 R agonists such as Compound 1 24 and the structurally similar YNT-185 16 may also rely on the subtle differences between orthosteric pockets that are important for TAK-925, however the structure of Compound 1-bound OX 2 R shows that this type of diarylsulfonamide also makes contacts with the more divergent extracellular loops. Intriguingly, a recent medicinal chemistry effort 44 succeeded in making a potent dual orexin receptor agonist by modifying YNT-185 on the biphenyl moiety that is predicted to interact with the extracellular loops (given the structural analogy with Compound 1). This finding implies that OX 2 R-selective arylsulfonamide agonists may derive selectivity from interactions that are far from the TAK-925 interface. Determining the importance of different contacts for selectivity of this other agonist chemotype will require additional functional studies of mutant receptors. In the case of the antagonist EMPA, a previous structural and computational study 22 found that binding of a trapped ordered water molecule in a cavity formed at A127 3.33 of the putative OX 1 R complex is a major determinant of OX 2 R selectivity for this ligand. A similar phenomenon may occur to make TAK-925's interaction with OX 1 R unfavorable, and will require computational approaches to elucidate. Our recent demonstration 45 of converting a dual selective antagonist (suvorexant) into an OX 1 R-selective antagonist by filling in this cavity with an aliphatic group suggests that modifying TAK-925 may be a viable strategy for creating OX 1 R-selective agonists.
Few GPCRs have been rigorously characterized for their ability to activate multiple G protein classes, such as β 2 -adrenergic receptor coupling to G s and G i 46 . Several studies of OX 2 R using cAMP sensors or other engineered reporter assays have indicated that OX 2 R can also stimulate G s and G i signaling, although with reduced orexin potency 47,48 . From these studies, one may ask whether OX 2 R has any preference for activating a particular signaling cascade in cells or in vivo. Our structural data indicates that OX 2 R distinguishes between G q and G i , both in the extent of interaction and in the number of specific polar contacts (Fig. 5a, b, Supplementary Figs. 9 and 10). Meanwhile, our functional data shows that OX 2 R cannot substantially activate G i in HEK293 cells (Fig. 5d), however the caveat remains that we did not measure this activity in transfected neurons. These results bolster observations that orexins function mainly by stimulating calcium release through activating G q 32,49 , and suggest that G protein promiscuity plays a limited role in orexin signaling.
Several examples have recently been described of activated GPCR structures in complex with multiple G proteins (or G protein mimetics), which can provide insights into G protein selectivity. In most cases, the G protein known to couple most efficiently with the receptor has a larger buried surface area at the interface relative to subordinate G protein transducers 37 ; this holds true for OX 2 R in this study (see above). The multivalent interactions between the receptor and the G protein are divergent, and selectivity is largely dependent on the unique features of each GPCR. In the GCGR-G s complex, ICL2 of the receptor engages more closely with G s compared to G i , and this interaction was demonstrated to be important for preferred G s coupling 50 . On the other hand, the GCGR-G i complex features unique contacts between ICL1 and G β and between ICL3 and G iα 50 . In the CCK1R-G q complex, ICL3 interacts with G αq and was shown to be important for G q coupling potency 37 . We do not observe a similar ICL3-G protein interaction in either of our OX 2 R-G protein complexes (Fig. 5a, b), and the tip of ICL3 is disordered in both structures, however the caveat remains that we have used a G q mimetic (mG sqiN ) rather than wild-type G q protein in our study. In another example of multiple CCK1R-G protein structures, a different conformation was observed for the G protein α5 helix between mG sqi and G s complexes, which may reflect the potential for structural differences or dynamics of this key divergent G protein epitope to confer selectivity 36 . In our two OX 2 R-G protein structures, we observe highly similar conformations of the α5 helix (including the C-terminal 'hook') for the mG sqi and G i1 complexes (Fig. 5a, b, rmsd of 1.3 Å as described above). On the other hand, we find that our OX 2 R-G i1 interface is dominated by hydrophobic contacts, and the OX 2 R-G sqiN interface is a mixture of polar and hydrophobic interactions (Fig. 5a, b, Supplementary Figs. 9 and 10). A similar differentiation of interface properties is seen in comparing the preferential GCGR-G s complex to the GCGR-G i complex 50 . Intriguingly, several of the hydrogen bonds between OX 2 R and the mG sqiN α5 helix are conserved in CCK1R-G q structures 36,37 : Y235 H5.23 -S164 ICL2 and N231 H5.19 -A155 3.53 from OX 2 R-mG sqiN are analogous to Y391 H5.23 -Q153 ICL2 and N387 H5.19 -A142 3.53 in CCK1R-G q . This similarity may represent a shared contributor to G q selectivity for multiple GPCRs.
Fig. 5 Interfaces of OX 2 R-G protein complexes and activation of G i and G q . a Interactions within 4 Å between OX 2 R and mG sqiN (gray cartoon and purple sticks for OX 2 R and orange cartoon and sticks for the H5 helix of the G q α subunit). Hydrogen bonds are dotted lines. b Interactions within 4 Å between OX 2 R and G i1 (gray cartoon and purple sticks for OX 2 R and blue cartoon and sticks for the H5 helix of the G i α subunit). Hydrogen bonds are dotted lines. c G q signaling mediated by OX 2 R with TAK-925 or orexin B. Each data point represents an average from n ≥ 3 independent experiments (each performed in duplicate), where n is shown in Supplementary Table 2.
An optimal drug for narcolepsy type 1 patients would be an oral pill that substitutes for deficient orexin neuropeptides and restores normal circadian patterns of orexin signaling, reversing debilitating symptoms such as sleep attacks and cataplexy 5,8 . This goal presents a complex set of pharmacokinetic challenges, since the drug must be orally bioavailable, brain penetrant, and have clearance kinetics on the order of hours. TAK-925 is administered intravenously (ClinicalTrials.gov NCT04091438), and more recently other small molecules have been discovered with improved oral absorption (ClinicalTrials.gov NCT04096560). The structures of active OX 2 R bound to TAK-925 presented here will help in understanding the important interactions that must be maintained to potently and fully activate the receptor by small molecule agonists.
Methods
The sample used for structural biology and pharmacology studies was assessed to be >96% pure by LC-MS.
Cloning and expression of the human OX 2 R-mG sqiN complex. The cloning, expression, purification, cryo-EM data collection, and structure determination for the two OX 2 R-G protein complexes described in this paper were carried out independently. For the OX 2 R-mG sqiN complex the receptor construct contained residues 1-406 of wild-type human OX 2 R (OX 2 R 1-406 ). The C-terminal residues 407-444 of OX 2 R were removed to confer slightly improved expression and purification for this complex. Compared to wild-type OX 2 R, OX 2 R 1-406 displayed similar potencies for activation by TAK-925 and OxB ( Supplementary Fig. 11, Supplementary Table 2). The resulting construct was cloned into a modified pFastBac (ThermoFisher) baculovirus expression vector with the HA signal sequence followed by a FLAG tag at the N-terminus. We were unable to isolate a stable complex between OX 2 R and full-length wild-type G αq using the coexpression and purification strategy described below. Instead, an engineered G αq construct called mG αsqiN was made by modifying the chimeric G α protein 'mini-G s/ q 71' previously described 26 . Briefly, the linker GGSGGSGG was deleted and residues 1-27 at the N terminus was replaced by residues 1-30 of G αi1 . A second pFastBac baculovirus was made with this mini-G αqiN gene. An additional pFastBac-Dual baculovirus was made with human G β1 and human G λ2 genes. An 8xHis tag was placed at the N-terminus of G λ2 . OX 2 R 1-406 , mG αsqiN , G β1 , and His 8 -tagged G λ2 were co-expressed in Spodoptera frugiperda (Sf9) cells with the addition of all three baculoviruses (ratio of 1 OX 2 R 1-406 :1 mG αsqiN :1G β1 G λ2 ) to Sf9 cells at a density of 3 × 10 6 per ml, along with 1 μM TAK-925 added to the media during growth. After 48 h, cells were harvested and stored at −80°C.
Cloning and expression of the human OX 2 R-G i1 -scFv16 complex. The coding sequence of wild type human OX 2 R (residues 2-444) was synthesized and subcloned into pFastbac with an N-terminal FLAG tag followed by a fragment of β 2 AR N-terminal 1-24aa before OX 2 R, and a C-terminal 2xMBP-His 8 tag after OX 2 R. A TEV cleavage site was inserted between OX 2 R and the MBP tag. The prolactin precursor sequence was used as a signal peptide to increase protein expression. A dominant-negative bovine G αi1 construct called DNG αi1 was generated by sitedirected mutagenesis to incorporate mutations G203A 51 and A326S 52 to decrease the affinity of nucleotide binding and increase the stability of the G αβγ complex. All three G protein complex components, human G αi1 , human G β1 and human G γ2 , were cloned into pFastbac individually. For OX 2 R-G i1 , scFv16 was cloned into pFastbac with a GP67 signal peptide at the N-terminal and a TEV cleavage-His 8 tag at the C-terminus. Baculoviruses for OX 2 R, DNG αi1 , His 8 -tagged G β1 , G γ2 , and scFv16 were co-expressed in Sf9 cells. Cell cultures were grown to a density of 3.5 × 10 6 cells/mL and infected with all five baculoviruses at a ratio of 1:1:1:1:1. 48 h after infection, cells were harvested and stored at −80°C for further use.
Cloning, expression and purification of scFv16. For OX 2 R-mG sqiN , scFv16 was expressed and purified separately. A synthesized DNA fragment encoding scFv16 25 was cloned into pFastBac, with a melittin signal sequence at the N-terminus and a 10xHis tag at the C-terminus. The resulting construct was expressed in Sf9 cells. scFv16 was purified as previously described 53 . In brief, secreted scFv16 in the cell culture media was separated from Sf9 cells by centrifugation, and 10 mM Tris buffer pH 8.0 was added to balance the pH. Then 1 mM NiSO 4 and 5 mM CaCl 2 were added to quench chelating agents. The media containing scFv16 was loaded onto Ni-NTA affinity resin by gravity, and the column was washed with 20 column volumes of buffer consisting of 50 mM HEPES pH 7.4, 150 mM NaCl and 10 mM imidazole. Protein was eluted from the Ni-NTA resin with 10 column volumes of buffer consisting of 50 mM HEPES, 150 mM NaCl and 250 mM imidazole. The eluate was concentrated and run on a Superdex 200 gel filtration column. The peak corresponding to monomeric scFv16 was collected, concentrated and frozen until further use.
Purification of the human OX2R-G i1 -scFv16 complex. Cell pellets from 10L culture were thawed at room temperature and suspended in 20 mM HEPES pH 7.2, 50 mM NaCl, 5 mM MgCl 2 . Complex was formed on membranes with addition of 10 μM TAK-925 and Apyrase (25 mU/mL, NEB), and incubation for 1.5 h at room temperature. Cell membranes were collected by ultra-centrifugation at 100,000 × g for 35 min. The membranes were then re-suspended and solubilized in buffer containing 20 mM HEPES, pH 7.2, 100 mM NaCl, 10% glycerol, 0.5% (w/v) n-Dodecyl β-D-maltoside (DDM, Anatrace), 0.1% (w/v) cholesteryl hemisuccinate TRIS salt (CHS, Anatrace), 0.1%(w/v) digitonin (Sigma) for 3 h at 4°C. The supernatant was isolated after centrifugation at 100,000 × g for 45 min and then incubated overnight at 4°C with pre-equilibrated Flag G1 resin (Genscript). After batch binding, the resin with immobilized protein complex was manually loaded onto a gravity column and washed with 10 column volumes of 20 mM HEPES, pH 7.2, 100 mM NaCl, 0.1% digitonin (w/v), 10 μM TAK-925. Protein was treated with TEV protease and eluted with the same buffer supplemented with 200 μg/ml flag peptide. Elution was concentrated and loaded onto a Superdex 200 10/300 GL increase column (GE Healthcare) pre-equilibrated with buffer containing 20 mM HEPES, pH 7.2, 100 mM NaCl, 0.075% digitonin, and 10 μM TAK-925. The total yield of the complex was~5 mg and the eluted fractions of monomeric complex were collected and concentrated for cryo-EM experiments (Supplementary Fig. 1b). Note that a different detergent was used for this complex compared to OX 2 R-mG sqiN -scFv16 (digitonin versus LMNG) because these purifications were developed independently, and not due to any specific preference of each complex for a different detergent micelle.
Cryo-EM data acquisition for the OX 2 R-mG sqiN -scFv16 complex. Prior to freezing grids, the OX2R-mG sqiN -scFv16 complex was concentrated to 9.5 mg/ml. Cryo-EM grids were prepared by applying 3.5 μl of this sample to glow-discharged Quantifoil R1.2/1.3 300-mesh Au holey carbon grids (Quantifoil Micro Tools GmbH, Germany), blotted for 4.5 s under 100% humidity at 4°C and plunge frozen in liquid ethane cooled by liquid nitrogen using a Mark IV Vitrobot. SerialEM software was used for automated data collection. Images were recorded on a Titan Krios microscope (FEI) operated at 300 kV with a K3 direct electron detector (Gatan) in super-resolution correlated-double sampling counting mode using a slit width of 20 eV on a GIF-Quantum energy filter (Supplementary Fig. 12a). Images were recorded at a nominal magnification of ×81,000, corresponding to a pixel size of 1.08 Å, and a target defocus range from −1.6 to −2.6 μm. Each movie stack was dose-fractionated over 32 frames for a total of 9 s under a dose rate of 8 e − /pixel/ sec, resulting in a total dose of~64 e − / Å 2 .
Cryo-EM data acquisition for the OX 2 R-G i1 -scFv complex. For cryo-EM grid preparation, 3 μl purified OX 2 R-G i1 -scFv16 complex at a concentration of 8 mg/mL was applied to an EM grid (Quantifoil, 300 mesh Au R1.2/1.3, glow discharged for 30 sec using a Solarus II plasma cleaner (Gatan)) in a Vitrobot chamber (FEI Vitrobot Mark IV). Protein concentration was determined by absorbance at 280 nm using a Nanodrop 2000 Spectrophotometer (Thermo Fisher Scientific). The Vitrobot chamber was set to 95% humidity at 4°C. The sample was blotted for 2 s before plunge-freezing into liquid ethane. Cryo-EM movie stacks were collected on a Titan Krios microscope operated at 300 kV under EFTEM mode ( Supplementary Fig. 12b). Nanoprobe with 1 μm illumination area was used. Data were recorded on a post-GIF Gatan K2 summit camera at a nominal magnification of 130,000, using counting mode. Bioquantum energy filter was operated in the zero-energy-loss mode with an energy slit width of 20 eV. Data collection were performed using Leginon with one exposure per hole. The dose rate was~8.4 e − /Å 2 /sec. The total accumulative electron dose was~50 e − /Å 2 fractioned over 40 subframes with a total exposure time of 6 s. The target defocus range was set to −1.3 to −1.9 μm.
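As a simple cross-check of the acquisition parameters quoted above, the accumulated dose per Å² follows from the dose rate, exposure time and pixel size; the sketch below is illustrative arithmetic only (not part of any acquisition software) and reproduces the quoted values approximately.

```python
# Illustrative arithmetic only: accumulated electron dose per square angstrom
# from the dose rate, total exposure time and physical pixel size.
def total_dose_per_A2(dose_rate_e_per_px_s, exposure_s, pixel_A):
    """Dose rate in e-/pixel/s, exposure in s, pixel size in angstrom."""
    return dose_rate_e_per_px_s * exposure_s / pixel_A ** 2

# OX2R-mGsqiN dataset: 8 e-/pixel/s for 9 s at 1.08 A/pixel -> ~62 e-/A^2 (quoted as ~64)
print(total_dose_per_A2(8.0, 9.0, 1.08))

# OX2R-Gi1 dataset: dose rate quoted directly per area, ~8.4 e-/A^2/s for 6 s -> ~50 e-/A^2
print(8.4 * 6.0)
```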
Image processing of the OX 2 R-mG sqiN -scFv complex. Data were processed in Relion 3.0 54 following the same general protocol as previously described 53 . Dose-fractionated images were gain normalized, 2×2 Fourier binned, motion corrected, dose-weighted, and summed using MotionCor2 55 . Contrast transfer function parameters were estimated using GCTF 56 . Approximately one thousand particles were picked manually and subjected to 2D classification. Representative projections of the complex were selected as templates for automated particle picking from all images. 8,929,023 extracted particles were 4×4 binned and subjected to 2D classification. A total of 2,661,363 particles were finally selected for 3D classification using the initial model generated by Relion as a reference. After 3 rounds of 3D classification, the classes showing good secondary structure features were selected, combined and re-extracted at the original pixel size of 1.08 Å. After 3D refinement and postprocessing, the resulting 3D reconstruction from 155,322 particles yielded a map at 3.3 Å resolution using the gold-standard FSC criterion (the cryo-EM data-processing flowchart is shown in External Data Fig. 2). The local resolution map was calculated using Relion 3.0.
Image processing of the OX2R-G i1 -scFv complex. A total of 8453 movie stacks were collected. Each movie stack was aligned and dose-weighted at a pixel size of 1.04 Å using MotionCor2 55 . CTF parameters were determined using CTFFIND4 57 . A total of 2,735,296 particles were auto-picked using the template picker and extracted with a box size of 256×256 pixels in cryoSPARC 58 . All particles were subjected to two rounds of reference-free 2D classification. The following 2D classifications, 3D classifications and refinements were all performed in cryoSPARC. 677,124 particles were selected after two rounds of 2D classification based on the presence of intact complex. This particle set was used for Ab-Initio reconstruction in four classes, which were then used as 3D volume templates for heterogeneous refinement. 332,173 particles were used for the final non-uniform refinement. The global resolution of the final reconstruction is 3.13 Å (the cryo-EM data-processing flowchart is shown in External Data Fig. 3). The local resolution map was calculated using cryoSPARC. Surface coloring of the density map was performed using UCSF Chimera 59 .
Model building and refinement of the OX2R-mG sqiN -scFv complex. The active structure of μOR (PDB: 6DDF) as well as structures of mini-G αq (PDB: 6WHA), G β1γ2 and scFv16 (PDB: 6VMS) were used as initial models for model rebuilding and refinement. A polyalanine model was made from the μOR structure and, together with the structures of mini-G αq , G β1γ2 , and scFv16, the components were docked into the cryo-EM density map in Chimera. The resulting model was subjected to autobuilding in Buccaneer 60 , iterative building in Coot 61 and refinement in Phenix.real_space_refine 62 . Initial coordinates and refinement parameters for the ligand TAK-925 were prepared with the PRODRG web server (http://davapc1.bioch.dundee.ac.uk/cgi-bin/prodrg). MolProbity was used to evaluate the final structures. In the Ramachandran plot, 97.6% and 2.4% of residues were in favored and allowed regions, respectively. Statistics for data collection and refinement are included in Supplementary Table 1.

Model building and refinement of the OX2R-G i1 complex. The structures of OX 2 R (PDB: 5WQC) and G i -scFv16 (PDB: 6DDE) were individually placed and rigid-body fitted into the cryo-EM map using Chimera. Manual inspection of the model was performed in Coot 61 , interspersed with restrained real-space refinement using Phenix.real_space_refine 62 . The ligand was placed by hand and then refined in Phenix. All regions of the model were checked thoroughly for register correctness and fit to density during refinement. Statistics for data collection and refinement are included in Supplementary Table 1. The cryo-EM density map has been deposited in the Electron Microscopy Data Bank under accession code EMD-25389. Atomic coordinates have been deposited in the PDB under accession code 7SQO. Structural figures were made using Pymol and UCSF Chimera.
Quantum chemistry calculation. Northwest Computational Chemistry Package (NWChem) 6.8.1 63 was used to perform all electronic structure calculations. All geometries were optimized at the B3LYP/6-31G* level of the Density-functional theory (DFT) in gas phase first and then with the solvation model based on density (SMD) model 64 in aqueous solution. Vibrational frequencies were computed in each case to confirm that each structure was a local minimum on the potential energy surface and to compute thermodynamic quantities. The standard Eckart projection algorithm, as implemented in NWChem, was applied to project out the translations and rotations of the nuclear hessian. Based on the frequencies obtained from the projected hessian, the zero point energy for the molecular system was calculated.
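For reference, the zero point energy mentioned at the end of this paragraph follows from the harmonic frequencies of the projected hessian in the usual way (a standard relation, not specific to this paper):

$$ E_{\mathrm{ZPE}} = \frac{1}{2} \sum_{i=1}^{3N-6} h\,\nu_i , $$

where the sum runs over the vibrational normal modes that remain after the Eckart projection of translations and rotations.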
Inositol phosphate accumulation assay for G q signaling. The IP assay was adapted from a prior precedent 65 . HEK293 cells were maintained in DMEM-high glucose (Millipore Sigma) supplemented with 5% fetal bovine serum (Corning) and penicillin-streptomycin (Millipore Sigma). Receptor constructs used were the full-length wild-type (or mutant) human OX 1 R and OX 2 R sequences cloned into pCDNA3. To enhance the E max in the IP assay, a plasmid encoding the full-length human G αq subunit 66 was co-transfected into HEK293 cells with plasmids encoding orexin receptor constructs using Lipofectamine 3000 (Thermo Fisher). 24 h after transfection, the cells were plated on poly-D-lysine coated (Millipore Sigma) tissue culture-treated 96 well white plates with clear bottoms (Perkin Elmer) at 30,000 cells per well in inositol-free DMEM (MP Biomedicals) supplemented with 5% fetal bovine serum, 4 mM L-glutamine (Millipore Sigma), penicillin-streptomycin, and about 5 μCi (5 μl) per ml myo-[2-3H] inositol (Perkin Elmer). The following day, the media was replaced with agonists that were diluted in HBSS (Millipore Sigma) containing 10 mM lithium chloride (Millipore Sigma) and incubated at 37°C for 45 min. Then agonists were removed and cells were lysed on ice for 30 min with 10 mM formic acid (Millipore Sigma). 1.25 mg of polylysine coated YSi SPA beads (Perkin Elmer) was added to each well. The plates were mixed on an orbital shaker for 30 min. The plates were read on a MicroBeta scintillation counter (Perkin Elmer) the following day. IP accumulation assays on different constructs were performed in n ≥ 3 independent experiments (each done in duplicate), where n is shown in Supplementary Table 2. Cpm data were normalized to the OX 2 R WT E max , and dose responses were fitted to the three-parameter model 'log(agonist) vs response' in GraphPad Prism 9 (GraphPad Software). Fitted pEC 50 values were analyzed for statistical significance compared to OX 2 R WT (with TAK-925 or orexin B) using one-way ANOVA followed by Dunnett's test. All pharmacological parameters are displayed in Supplementary Table 2.
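The 'log(agonist) vs response' model used above is a three-parameter logistic with unit Hill slope. A minimal sketch of the same fit outside GraphPad Prism follows; the data and starting values are hypothetical, and only the functional form is taken from the text.

```python
# Minimal sketch of a three-parameter 'log(agonist) vs response' fit
# (bottom, top, logEC50; Hill slope fixed to 1), as used for the IP data.
import numpy as np
from scipy.optimize import curve_fit

def three_param_logistic(log_conc, bottom, top, log_ec50):
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_ec50 - log_conc))

# Hypothetical normalized IP accumulation data (% of OX2R WT Emax)
log_conc = np.array([-10, -9, -8, -7, -6, -5], dtype=float)  # log10 [agonist], M
response = np.array([3.0, 8.0, 30.0, 75.0, 96.0, 100.0])

popt, pcov = curve_fit(three_param_logistic, log_conc, response,
                       p0=[0.0, 100.0, -7.5])
bottom, top, log_ec50 = popt
print(f"pEC50 = {-log_ec50:.2f}")  # pEC50 = -log10(EC50)
```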
β-arrestin PathHunter assay. β-Arrestin recruitment activity was measured using the PathHunter system (DiscoveRX) according to the manufacturer's instructions. Receptor constructs used were the full-length wild-type (or mutant) human OX 2 R sequences cloned into pCMV-ProLink (DiscoveRX). CHO-K1 cells stably expressing EA β-arrestin2 (DiscoveRX) were plated on 384 well white plates (Thermo Fisher) at 4000 cells/well in HamF12 (Wako Pure Chemical Industries) supplemented with 10% fetal bovine serum (Corning) and penicillin-streptomycin (Wako Pure Chemical Industries) and incubated overnight at 37°C under 5% CO 2 . Cells were transfected with plasmids encoding wild type or each mutant ProLink-tagged human OX 2 R using FuGENE HD (Promega). The following day, the media was replaced with HBSS (GIBCO) containing 20 mM HEPES (GIBCO) and 0.1% BSA (Wako Pure Chemical Industries), and treated with TAK-925 for 2 h at 37°C. Detection reagent was added and incubated for 1 h at room temperature. Luminescent signal was detected using an EnVision plate reader (PerkinElmer). β-arrestin assays on different constructs were performed in two or three independent experiments (each done in quadruplicate). Dose response curves (shown in Supplementary Fig. 8) were fitted to the three-parameter model 'log(agonist) vs response' in GraphPad Prism to determine pEC 50 values. Fitted pEC 50 values were analyzed for statistical significance compared to OX 2 R WT (with TAK-925 or orexin B) using one-way ANOVA followed by Dunnett's test.
Inhibition of forskolin-stimulated cAMP production. To measure G i signaling 67 , HEK293 cells were transfected with the pGloSensor™−22F cAMP plasmid (Promega) and a CMV expression plasmid encoding human OX 2 R. To enhance the signal for the cAMP assay, we co-transfected a G i expression plasmid (CMV-driven) in which the full-length human G αi1 subunit sequence was cloned into pcDNA3. The following day, cells were harvested and washed with assay buffer, HBSS with 20 mM HEPES pH 7.4 (Millipore Sigma). Cells were suspended in assay buffer containing 0.5 mg/ml luciferin (Gold Biotechnology) and plated on 96 well tissue culture-treated white plates with opaque bottoms (Thermo Fisher). The cells were incubated at 37°C for 90 min. Forskolin (Millipore Sigma) diluted in assay buffer was added to wells for a final concentration of 5 mM. Luminescence was read at room temperature repeatedly in a CLARIOstar microplate reader (BMG Labtech) until the signal stopped increasing. Agonist diluted in assay buffer was added from a 7X stock and again the plates were read repeatedly until the signal was stable. Assays for either TAK-925 or orexin B were performed in 3 independent experiments (each done in duplicate). cAMP luminescence data were plotted as a fraction of the maximal signal with forskolin, and dose responses were fitted to the three-parameter model 'log(inhibitor) vs response' in GraphPad Prism 9 (GraphPad Software). Both TAK-925 and orexin B (Fig. 5d) displayed only minimal diminution of the cAMP signal, comparable to cells without transfected OX 2 R. As a positive control, cells were transfected with pGloSensor™−22F cAMP, D2 dopamine receptor, and G αi1 plasmids, and tested for their response to dopamine (Millipore Sigma) after forskolin treatment. The measured IC 50 for dopamine was 4.9 ± 1.1 nM.
Enzyme-linked immunosorbent assay (ELISA) for cell surface expression. The cell surface expression levels of wild-type and mutant human OX 1 R and OX 2 R constructs were quantified by an enzyme-linked immunosorbent assay (ELISA). As above in IP accumulation and cAMP assays, wild-type and mutant human OX 1 R and OX 2 R constructs were cloned into pcDNA3. HEK293 cells were transfected with pCDNA3 expression plasmids encoding receptor constructs and full-length human G αq subunit, using Lipofectamine 3000 (Thermo Fisher). 24 h after transfection, the cells were plated on poly-D-lysine coated (Millipore Sigma) tissue culture-treated 24 well clear plates (Corning) at 200,000 cells per well in DMEMhigh glucose (Millipore Sigma) supplemented with 5% fetal bovine serum (Corning) and penicillin-streptomycin (Millipore Sigma). After 24 h at 37°C w/5% CO 2 , the media was aspirated and cells were washed with 200 μL/well of TBS buffer twice. 400 μL/well of 4% paraformaldehyde (PFA) were then added for fixation of cells, and cells were incubated for 30 min on ice. Cells were washed with 200 μL/ well of TBS buffer three times, followed by addition of 1% BSA. Incubation proceeded for 1 h at room temperature. After 1% BSA in TBS was aspirated, 200 μL/ well of 9.7 μg/ml of M1-Flag antibody (Sigma) in TBS w/1%BSA was added and cells were incubated at room temperature for 1 h. Cells were then washed with 200 μL/well of TBS w/1%BSA buffer three times, followed by addition of HRPcoupled secondary antibody with 1:2000 dilution in TBS w/1%BSA. Incubation proceeded for 1 h at room temperature. After cells were washed with 200 μL/well of TBS w/1%BSA buffer three times, 200 μL/well of TMB-ELISA (Thermo Fisher) was added. After a short time of incubation, 100 μL/well of colored solution was transferred to 96 well clear plates (Corning) containing 100 μL/well of 1 M H 2 SO 4 to stop reaction. Absorbance produced by HRP activity was immediately measured at 450 nm using a CLARIOstar microplate reader (BMG LABTECH). After measuring HRP activity, cells were washed with 200 μL/well of TBS twice and incubated with 200 μL/well of 0.2% (w/v) Janus green for 30 min. To remove extra Janus green, cells were washed with water three times, followed by addition of 800 μL/well of 0.5 M HCl. 200 μL/well of colored solution was transferred to the same 96 well clear plates (Corning). Absorbance at 595 nm was recorded by a CLARIOstar microplate reader (BMG LABTECH). For each well condition, the normalized cell surface expression was determined by dividing absorbance at 450 nm by absorbance at 595 nm. Experiments were repeated three times. The data were analyzed by one-way ANOVA (and Nonparametric or Mixed) and plotted as column with scattered points using GraphPad Prism 9 (GraphPad Software).
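The surface-expression normalization and one-way ANOVA described above amount to a short calculation; a minimal sketch with hypothetical absorbance values (all numbers and construct names are illustrative assumptions) is:

```python
# Illustrative sketch of the ELISA analysis: normalize receptor signal (A450)
# by cell amount (Janus green, A595), then compare constructs by one-way ANOVA.
import numpy as np
from scipy.stats import f_oneway

# Hypothetical triplicate absorbance readings for two constructs
a450 = {"WT":     np.array([0.82, 0.85, 0.80]),
        "mutant": np.array([0.40, 0.38, 0.45])}
a595 = {"WT":     np.array([0.51, 0.52, 0.50]),
        "mutant": np.array([0.49, 0.50, 0.52])}

# Normalized cell surface expression = A450 / A595 for each well
normalized = {k: a450[k] / a595[k] for k in a450}
print({k: v.mean() for k, v in normalized.items()})

# One-way ANOVA across constructs on the normalized expression values
f_stat, p_value = f_oneway(normalized["WT"], normalized["mutant"])
print(f_stat, p_value)
```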
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The structural data generated in this study have been deposited in the Protein Data Bank (PDB) under accession codes 7SQO and 7SR8, and cryo-EM maps have been deposited in the Electron Microscopy Data Bank (EMDB) under accession codes EMD-25389 and EMD-25399. The pharmacological data generated in this study are compiled in the Source Data file provided with this paper. Source data are provided with this paper. | 12,223.4 | 2022-05-25T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Mathematical model for the evaluation of risk of emergency situations at a dangerous technical object based on artificial neural networks
This work adapts artificial neural networks for use in modern security systems of potentially dangerous technical objects (high-rise buildings) as tools for risk assessment, forecasting and management decision support. The main scientific results of the study are: a mathematical model for the risk assessment of man-made emergencies based on artificial neural networks; an adaptation of this model to the integral (cumulative) model of the development of a man-made emergency, namely fire; a risk assessment technique for man-made emergencies based on artificial neural networks; and a specific methodology for man-made fire risk assessment using artificial neural networks.
Introduction
Modern methods of risk assessment are, in essence, the set of all available methods for the assessment, analysis and protection of a particular object. Dozens of such methods are known today, but not all of them have found practical application; many remain hypothetical models of security. Moreover, not all existing knowledge is actually applied in this field. For example, a great impetus to the development of security management systems was the emergence of risk assessment based on probability theory [1,2]. One may therefore conclude that any promising direction, even from a neighbouring field, can stimulate the further development of security theory. Evidence of this is the extensive integration of advanced ideas into safety practice: modern integrated security systems [3] built on hardware-software complexes, and the application of mathematical analysis in reliability theory. Despite the large number of methods, the quality of risk assessments is not yet at a sufficient level, which motivates the consideration of innovative methods in this area. One such method is the adaptation of artificial neural networks (hereinafter ANN).
The aim of this work is to improve the quality of risk assessment of emergency situations at hazardous facilities through the adaptation of artificial neural networks (ANN) to modern security systems. To achieve this aim, the following tasks were set: an analysis of the concepts of artificial neural networks and their architectures, and the adaptation of a neural network, by applying an integral fire development model, to the evaluation of the development of a man-made fire emergency.
Mathematical model
Artificial neural networks, in the general understanding, constitute a mathematical model, together with its software or hardware implementation, built on the principles of organization and functioning of biological neural networks, i.e. the networks of nerve cells of a living organism [4]. A generalized model of the biological neuron was introduced in 1943 by McCulloch and Pitts [4]; it is presented in Fig. 1, with its analytical form given in Eq. (1).
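A standard analytical form of the artificial neuron that Eq. (1) refers to is the following sketch (the symbols y, x_i, w_i, b and f are our notation):

$$ y = f\!\left(\sum_{i=1}^{n} w_i x_i + b\right), $$

where x_i are the input signals, w_i the synaptic weights, b a bias term and f the activation function.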
Here the inputs are numerical features and the output is a single value. The figure shows that the neuron acts as a "black box": several parameters enter and only one leaves, provided that the neuron is activated according to a specific activation function. Since a neural network consists of more than one neuron, the network itself is a superposition of individual neurons. This idea is reflected first in the perceptron and then in layered (multi-layer) neural networks, Fig. 2. To illustrate the use of an ANN, consider the following situation: inside a complex dangerous technical object, such as a high-rise building or a floating nuclear power plant, an explosion and fire occur, giving rise to fire and the hazardous factors of fire (hereinafter FHF). The problem is to adequately predict the development of the fire and its FHF and then to take a decision to manage the system. In general, this question already has solutions in the form of modern security system installations (hereinafter SS): automatic fire extinguishing systems, preliminary modelling of fire development with a view to analysing the spread of the FHF, etc. [5]. This work, however, considers a special case of this adverse event: the design, configuration and deployment of an ANN within the hardware-software complex of a modern SS. In this scenario, the neural network is based on the integral mathematical model of gas exchange during a fire in a building [5]. The basis of this model is formed by the expression for the gas flow between rooms (1), the equations of mass balance and of optical smoke density (2), (3), the energy conservation equation (4), the equation determining the gas temperature and the concentration of combustion products in the building premises (5), and the ideal gas law (6), where: G ji is the gas flow through a doorway between two adjacent rooms j and i, kg/s; the discharge coefficient of the aperture is 0.8 for closed openings and 0.64 for open ones; F is the cross-sectional area of the doorway, m 2 ; the density of the gases passing through the doorway is in kg/m 3 ; and ΔP ji is the average difference of total pressure between rooms j and i, Pa.
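For orientation, in integral (zone) models of fire gas exchange the flow through an opening is conventionally written in orifice form; a sketch consistent with the variable list above (ξ and ρ denote the discharge coefficient and gas density, our notation) is:

$$ G_{ji} = \xi\, F\, \sqrt{2\, \rho\, \Delta P_{ji}} . $$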
where: V j is the room volume, m 3 ; t is time, s; G k is the gas flow rate entering the room, kg/s; G i is the gas flow rate leaving the room, kg/s; and the burnout rate of the fire load is given in kg/s; and where C v , C p are the specific isochoric and isobaric heat capacities, kJ/(kg·K);
where: the first two quantities are the optical smoke densities in rooms i and j; D m is the smoke-forming ability of the fire load; Q j is the amount of heat sources (and sinks) in the volume of room j, accounting for the heat absorbed by equipment and lost through the enclosing structures; next is the reduced heat transfer coefficient; T 0 is the initial indoor temperature; and F j ст is the surface area of the enclosing structures in room j. When adapting the integral mathematical model [5] to artificial neural networks, two kinds of parameters can be distinguished in the above expressions: constant parameters (room volumes, initial temperature, initial pressure, specific heat capacities, etc.) and calculated parameters (the amount of heat in a room, various coefficients, etc.). For the neural network, both kinds serve as initial parameters from which the summary results required by the user for a management decision are found. Applying these parameters to a multi-layer neural network (Fig. 2), one can propose a mathematical model of the input and output parameters of the ANN, given in Table 1; its rows include, for example, the concentration of the individual components of the gas mixture, X Lj(n) , kg/kg, and the optical smoke density in rooms i and j (measured by instruments). On the basis of the intervals of these values, one can perform the initial configuration of the ANN and, further, its training, with a view to forecasting the FHF and, on this basis, to taking a management decision. Training of the neural network consists of the following aspects: the target data array (Fig. 2) is a collection of numbers on whose basis managerial decisions may later be taken (intervals of values of the input parameters X 1 -X n ); once the output parameters and their intervals are known, the ANN can be trained. Machine learning takes place at this stage: the user specifies the input parameters as well as the expected outputs, and training automatically selects the weights w and configures the dependence between the input values and the final intervals. A sketch of such a forecasting network is given below.
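The following minimal sketch (not from the original paper; the feature names, synthetic data and architecture are illustrative assumptions) trains a small multi-layer network to map integral fire-model inputs to forecast quantities such as temperature rise and smoke density:

```python
# Minimal sketch (illustrative only): an MLP that learns a mapping from
# integral fire-model inputs to forecast quantities, as described in the text.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training intervals for the input parameters X_1..X_n:
# [room volume (m^3), initial temperature (K), burnout rate (kg/s), opening area (m^2)]
X = rng.uniform(low=[50.0, 285.0, 0.005, 0.5],
                high=[500.0, 300.0, 0.05, 4.0],
                size=(2000, 4))

# Placeholder targets standing in for the integral-model outputs
# (gas temperature rise and optical smoke density); in practice these
# would come from solving equations (1)-(6) for each input sample.
temp_rise = 800.0 * X[:, 2] / (X[:, 0] * 1e-3) + rng.normal(0, 5, 2000)
smoke = 50.0 * X[:, 2] * X[:, 0] ** -0.5 + rng.normal(0, 0.5, 2000)
y = np.column_stack([temp_rise, smoke])

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(scaler.transform(X), y)

# Forecast FHF for a new room configuration; the result would feed the
# decision stage described in the Results section.
x_new = scaler.transform([[200.0, 293.0, 0.02, 2.0]])
print(net.predict(x_new))
```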
Results
It is worth noting that learning occurs over several iterations, until the deviation of the results becomes acceptable; as a result of the above operations, the neural network is formed as a forecasting tool. To solve the management decision problem, an additional ANN is built, which is trained in the same way as the forecasting network, only without the preliminary calculations. The output of this second neural network is the managerial decision itself. As an example, the following picture emerges: the output data array of the previous (forecasting) neural network becomes the input for the new network; it is known that certain parameter values correspond to a certain result (a favourable or an adverse event), so the output is determined by the ANN. In other words, this operation produces an automatic decision and provides it to the user of the security system. As a result of the training, the system will return a decision in accordance with [6] (in the context of the considered emergency, a decision based on providing the user with information of a reasonable degree of accuracy, for example about the occurrence of a fire and its hazardous factors). Thus, in the course of this work, artificial neural networks were adapted for use in modern security systems as tools for assessment, forecasting and management decision support.
Conclusions
The study obtained the following main scientific results: a mathematical model for the risk assessment of man-made emergencies based on artificial neural networks; an adaptation of this model to the integral (cumulative) model of the development of a man-made fire emergency; a risk assessment technique for man-made emergencies based on artificial neural networks; and a specific methodology for man-made fire risk assessment using artificial neural networks.
Notation for the energy balance (continued): T i and T j are the gas temperatures in rooms i and j, K; Q Г is the amount of heat generated indoors by combustion, kW; and Q w is the heat flow absorbed by the structures and emitted through the openings, kW.
Table 1. Mathematical model of the input and output parameters of the ANN, adapted to the integral model of fire development at a dangerous technical object. | 2,048.2 | 2018-01-01T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Quasi-matter bounce and inflation in the light of the CSL model
The continuous spontaneous localization (CSL) model has been proposed as a possible solution to the quantum measurement problem by modifying the Schrödinger equation. In this work, we apply the CSL model to two cosmological models of the early Universe: the matter bounce scenario and slow roll inflation. In particular, we focus on the generation of the classical primordial inhomogeneities and anisotropies that arise from the dynamical evolution, provided by the CSL mechanism, of the quantum state associated to the quantum fields. In each case, we obtained a prediction for the shape and the parameters characterizing the primordial spectra (scalar and tensor), i.e. the amplitude, the spectral index and the tensor-to-scalar ratio. We found that there exist CSL parameter values, allowed by other non-cosmological experiments, for which our predictions for the angular power spectrum of the CMB temperature anisotropy are consistent with the best fit canonical model to the latest data released by the Planck Collaboration.
Introduction
After approximately three decades since the cosmological inflationary paradigm was conceived [1][2][3][4], all of its generic predictions have withstood the confrontation with observational data, in particular, those coming from the Cosmic Microwave Background (CMB) radiation [5][6][7]. That has led a large group of cosmologists to consider inflation as a well established theory of the early Universe. Inflation was originally proposed to provide a solution to the puzzles of the hot Big Bang theory (e.g. the horizon and flatness problems). However, the modern success of inflation is that, allegedly, it can offer us an explanation about the origin of the primordial inhomogeneities [8][9][10][11]. The standard argument is also rather pictorial: the quantum fluctuations of the vacuum associated to the inflaton are stretched out to cosmological scales due to the accelerated expansion of the spacetime; those fluctuations are considered the seeds of all large scale structures observed in the Universe. Furthermore, in Ref. [12] the detectability of possible traces of the quantum nature concerning the primordial perturbations was investigated.
On the other hand, proponents of alternative scenarios to inflation argue that even if it is the most fashionable model of the early Universe, that does not mean it is necessarily true [13,14]. Furthermore, another feature that would make alternative models worthwhile of study is that they might avoid some long standing puzzles of the inflationary paradigm. Among those issues, we can mention: the subject of eternal inflation, a feature that is present in almost every model of inflation [15] and which also leads to the controversial topic of the multiverse; the initial singularity problem and the trans-Planckian problem for primordial perturbations [16], which are related by the fact that one is interpolating the solutions provided by General Relativity in regimes where it may no longer be valid; and finally, it has been argued that the potentials associated to the inflaton, that best fit the latest observed data, need to be fine-tuned [17,18]. Although the aforementioned problems are not considered real problems by some scientists [19,20], others seem to disagree [18,21]. However, we think that if other alternative models can reproduce the main results linked to inflation, we should make use of the observational data available to test them.
One of the alternative models to inflation that seems to be consistent with the latest data is the so called matter bounce scenario (MBS) [21][22][23][24][25][26][27][28]. In this cosmological model, the initial singularity of the standard model is replaced by a nonsingular bounce. That is, instead of an ever-expanding Universe, it assumes an early contracting matter-dominated Uni-verse, which continues to evolve towards a bouncing phase and, later, enters into the expanding-phase of standard cosmology. The Universe described by the MBS relies on a single scalar field satisfying an equation of state that mimics that of a dust-like fluid. Additionally, in order to describe successfully a bouncing phase with a single scalar field, one needs to use cosmologies beyond the realm of general relativity, such as, loop quantum cosmologies, teleparallel F(T ) gravity or F(R) gravity. Proponents of the MBS claim that the potential associated to the scalar field is less fine-tuned than that of inflation, and also solves the historical problem of requiring very special initial conditions for the Big Bang model [22,23], which originally motivated the development of the inflationary framework. However, the MBS is not exactly problem free. A complete assessment of the present conceptual issues is provided in Ref. [23]. In spite of not being completely finished from a theoretical point of view, the MBS is quite simple in its treatment of the primordial perturbations. That makes it an interesting case of study for the purpose of this article. In particular, the generation of the primordial perturbations is depicted during the contracting phase, i.e. in a regime where gravity is well described by general relativity, and the perturbations correspond to inhomogeneities of a single scalar field.
In addition to the prior puzzles and successes of inflation and the MBS, there remains an important question: what is the precise physical mechanism that converts quantum fluctuations of the vacuum into classical perturbations of the spacetime? This question has been the subject of numerous works in the past and the consensus seems to favor the decoherence framework [29][30][31][32][33]. 1 Nevertheless, decoherence cannot address that question by itself. 2 In other words, even if one would choose (or not) to embrace the decoherence program, a particular interpretation of Quantum Mechanics must be selected (implicitly or explicitly). The Copenhagen −orthodox− interpretation requires to identify a notion of observer that performs a measurement on the system; which, in the decoherence framework, is equivalent to identify the unobservables or external degrees of freedom of the system. It is not clear how to do such identifications if the system is the early Universe. Other interpretations such as many-worlds, consistent histories and hidden variables formulations, might be adopted with varying degrees of success (see for instance [38][39][40]).
In the present article, in order to address the quantum to classical transition of the primordial perturbations, we will choose to work with the continuous spontaneous localization (CSL) model. The CSL model belongs to a large class of mod-els known as objective dynamical reduction models or simply called collapse models. Collapse models attempt to provide a solution to the measurement problem of Quantum Mechanics [41][42][43][44][45]. The proponents of these models state that the measurement problem originates from the linear character of the quantum dynamics encoded in the Schrödinger equation. The common idea shared in these collapse models is to introduce some nonlinear stochastic corrections to the Schrödinger equation that breaks its linearity. According to the collapse models, a noise field couples nonlinearly with the system (usually with the spatial degree of freedom of the system), inducing a spontaneous random localization of the wave function in a sufficiently small region of the space. Suitably chosen collapse parameters make sure that micro-systems evolve essentially (but not exactly) following the dynamics provided by the Schrödinger equation, while macro-systems are extremely sensible to the nonlinear effects resulting in a perfectly localization of the wave function. Furthermore, there is no need to mention or to introduce a notion of an observer/measurement device as in the Copenhagen interpretation, which is a desired feature in the context of the early Universe and cosmology in general.
The CSL model has been applied before to the inflationary Universe in an attempt to explain the quantum to classical transition of the primordial perturbations [46][47][48][49][50]. Also, recently a new effective collapse mechanism, independent of the CSL model, has been proposed to deal with the measurement problem during the inflationary era [51]. However, among those works, the key role played by the collapse mechanism varies and also yields different predictions for the primordial power spectrum, some of which might be consistent with the observational data. In the present article, we will subscribe to the conceptual point of view first presented in [47,52], which was developed within the semiclassical gravity framework, and latter in [50,53] was extended to the standard quantization procedure of the primordial perturbations using the so called Mukhanov-Sasaki variable [8,54]. The main role that we advocate for the dynamical reduction mechanism of the state vector, modeled in this paper by the CSL model, is to directly generate the primordial curvature perturbations. Specifically, the initial state of the quantum field-the vacuum state-evolves dynamically according to the modified Schrödinger equation provided by the CSL model. This evolution leads to a final state that does not share the initial symmetry of the vacuum, i.e. it is not homogeneous and isotropic. 3 In this way, the collapse mechanism generates the inhomogeneities and anisotropies of the matter fields. These asymmetries are codified in the evolved quantum state and, thus, are responsible for generating the perturbations of the spacetime. 4 Note that the previous prescription, regarding our approach to address the birth of the primordial perturbations, does not require the inclusion of an exponential expansion phase in the Universe that "stretches out" the quantum fluctuations of the vacuum (or the squeezing of the field variables as usually argued). Therefore, in principle, it should be possible to extend our picture to alternative scenarios dealing with the origin of the cosmological perturbations. Moreover, since the cosmic observations are well constrained, it should also be feasible to test the predictions that result from applying our framework in those alternative cosmological models. In the present article, we focus on the implementation of the CSL model within the framework of the MBS and, in parallel, we present the same appliance of the CSL model to the slow roll inflationary model of the early Universe. In this way, we can appreciate more clearly where the CSL model enters into the picture; particularly at the moment when computing the theoretical predictions. The main motivation behind the present work is that if the CSL model can be truly considered as a physical model of the quantum world, which also avoids the standard quantum measurement problem, then it should also be possible to use it in different contexts from the traditional laboratory settings. The cosmological context provides a rich avenue to explore such foundational issues and, more important, there exist sufficient precise data to test the initial hypotheses. As a consequence, we will analyze the predictions resulting from implementing the CSL model in the MBS and in the inflationary model of the early Universe, and we will compare the corresponding results with the one provided by the best fit standard cosmological model. 
Additionally, we will focus on the range of values allowed for the parameters of the CSL model, experimentally tested [55,56] in non-cosmological frameworks.
The paper is organized as follows: in Sect. 2, we provide a very brief synopsis of the main features of the CSL model, with particular emphasis on those that will be useful for the next sections. In Sect. 3, we present the characterization of the primordial perturbations within the two cosmological models that we are considering, i.e. the MBS and standard slow roll inflation. In Sect. 4, we show the connection between the observational quantities and the theoretical predictions that result from adopting our conceptual point of view concerning the CSL model. In Sect. 5, we explicitly show the implementation of the CSL model to the MBS and inflation, and we also present the predictions for the primordial power spectra (scalar and tensor) in each case. In Sect. 6, we discuss the implications of the results obtained; additionally, we com-pare the predicted scalar power spectra with the standard one. In Sect. 7, we analyze the viability of the CSL model using the data extracted from the CMB when considering the best fit cosmological model. Finally, in Sect. 8, we end with our conclusions. We include an Appendix containing the computational details that led to the results presented in Sect. 5.
A concise synopsis of the CSL model
In this section, we provide a brief summary of the relevant features of the CSL model; for a detailed review, we refer the reader to Refs. [43,44].
In the CSL model, the modification of Schrödinger's equation induces a collapse of the wave function towards one of the possible eigenstates of an operator Θ̂, called the collapse operator, with a certain rate λ. The self-induced collapse is due to the interaction of the system with a background noise W(t), which can be considered a continuous-time stochastic process of the Wiener kind. The modified Schrödinger equation drives the time evolution of an initial state according to Eq. (1), with T̂ the time-ordering operator; the probability associated with a particular realization of W(t) is given by Eq. (2). The norm of the state |ψ, t⟩ evolves dynamically, and Eq. (2) implies that the most probable state will be the one with the largest norm. From Eqs. (1) and (2), the evolution equation of the density matrix operator ρ̂, Eq. (3), can be derived. The density matrix operator can be used to obtain the ensemble average of the expectation value of an operator, ⟨Ô⟩ = Tr[Ô ρ̂]; hence, from Eq. (3), the evolution of ⟨Ô⟩ follows as in Eq. (4). The average is over possible realizations of the noise W(t), each realization corresponding to a single outcome of the final state |ψ, t⟩. One of the most important features of collapse models is the so-called amplification mechanism: assuming that the reduction (collapse) rates for the M constituents of a macroscopic object are equal (λ i = λ), it can be proved that the reduction rate for the center of mass of an M-particle system is amplified by a factor of M with respect to that of a single constituent [41,57]; in other words, λ macro = Mλ.
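For orientation, the standard single-operator CSL expressions that Eqs. (1)-(4) refer to can be sketched as follows (our reconstruction, with ħ = 1; the original normalization and notation may differ):

$$ |\psi, t\rangle_W = \hat{T} \exp\left\{ -\int_0^t dt' \left[ i\hat{H} + \frac{1}{4\lambda}\left( W(t') - 2\lambda\hat{\Theta} \right)^2 \right] \right\} |\psi, 0\rangle, $$
$$ P[W]\,\mathcal{D}W = \langle \psi, t | \psi, t \rangle \prod_{t_i} \frac{dW(t_i)}{\sqrt{2\pi\lambda/dt}}, $$
$$ \frac{d\hat{\rho}}{dt} = -i[\hat{H}, \hat{\rho}] - \frac{\lambda}{2}\,[\hat{\Theta}, [\hat{\Theta}, \hat{\rho}]], \qquad \frac{d}{dt}\overline{\langle \hat{O} \rangle} = -i\,\overline{\langle [\hat{O}, \hat{H}] \rangle} - \frac{\lambda}{2}\, \overline{\langle [\hat{\Theta}, [\hat{\Theta}, \hat{O}]] \rangle}. $$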
The parameter λ sets the strength of the collapse process. In the original model, proposed by Ghirardi-Rimini-Weber (GRW), the authors suggested a value of λ GRW ≃ 10 −16 s −1 for r C ≃ 100 nm. However, Adler suggested a greater value λ Adler ≃ 10 −8 s −1 for r C ≃ 100 nm [58] (the parameter r C is called the correlation length of the noise and provides a measure of the spatial resolution of the collapse [41,43,57]). Recent experiments have been devised to set bounds on the parameter λ [59,60]. Furthermore, it is claimed that matter-wave interferometry provides the most generic way to experimentally test the collapse models [55,56]. Those results suggest that the range between λ GRW and λ Adler is still viable for some variations of the original CSL model (e.g. by considering non-white noise).
Consequently, the main characteristics of the CSL model are: (1) The modification to Schrödinger's equation is nonlinear and leads to a breakdown of the superposition principle for macroscopic objects; (2) The random nature of Quantum Mechanics is concealed in the noise W(t) and is consistent with Born's rule; (3) An amplification mechanism exists, through the parameter λ which is related to the strength of the collapse. This strength is weak for microscopic objects and strong for macroscopic bodies.
Another main aspect of the collapse models is that the collapse mechanism injects energy into the system. In fact, previous works have performed a preliminary analysis using cosmological data to set bounds on the value of λ [61]. The energy increase is minimal, e.g. for a particle of mass m = 10 −23 g, one obtains δ E/t 10 −25 eV s −1 [43]. In other words, an increase of 10 −8 eV will take 10 10 years. However, even if the energy increase can be ignored at the phenomenological level, a more realistic model should remove this issue.
Moreover, the increase of energy in the collapse models leads to difficulties when trying to formulate relativistic collapse models. Additionally, the collapse mechanism occurs in such a way that is nonlocal. This implies that the collapse of the wave function must be instantaneous or superluminal (but the nonlocal features cannot be exploited to send signals at superluminal speed). Also, the nonlocality is necessary to ensure that the models are consistent with the violation of Bell's inequalities. Several relativistic models have been proposed so far [62][63][64], none of which can be considered completely finished. In spite of the lack of a relativistic collapse model, we will apply the CSL model to the primordial Universe, i.e. to inflation and the MBS, but in order to provide a more detailed picture, we need first to establish the mathematical framework of the primordial Universe in the two approaches considered in this work.
Two approaches: accelerated expansion or quasi-matter contraction
This section presents the details of the two cosmological approaches, describing the dynamics of the Universe, that we will be considering in the rest of the manuscript. In particular, we are going to work with the following two scenarios: 1. An accelerated expansion of the early Universe given by the simplest inflationary model, that is, a single scalar field in the slow roll approximation with canonical kinetic term. Since such a model is probably very well known for most readers, we will not dwell into much detail here. 2. The MBS [22][23][24][25][26], a cosmological model in which the Universe undertakes a quasi-matter contracting phase, then experiences a non-singular bounce and finally enters into the standard cosmological expansion. Since in this model the primordial perturbations are born during the contracting stage of the Universe, we will focus exclusively on that cosmic stage. We will refer to such a stage as the quasi-matter contracting Universe (QMCU).
The background
The inflationary Universe and the QMCU are both described by Einstein equations G ab = 8π GT ab (c = 1), while the matter fields are characterized by a single scalar field. In the case of inflation the scalar field is the inflaton φ, and in the QMCU the scalar field will be denoted by ϕ.
As mentioned earlier, for inflation, we will consider standard slow roll inflation. In that case, the background spacetime is described by a quasi-de Sitter Universe, characterized by H −1/[η(1 − )], with H ≡ a /a the conformal expansion rate, a being the scale factor and the slow roll parameter is defined as ≡ 1 − H /H 2 ; a prime denotes partial derivative with respect to conformal time η. The energy density of the Universe is dominated by the potential of the inflaton V , and during slow roll inflation the condition 2 1 is satisfied, with M 2 P ≡ (8π G) −1 the reduced Planck mass. Since we will work in a full quaside Sitter expansion, another useful parameter to characterize slow roll inflation is the second slow roll parameter, i.e. δ ≡ − /2H 1. In the case of the QMCU, the starting point is also a flat FLRW geometry that leads to the Friedmann and conservation equations. The field ϕ is separated into an homogeneous part ϕ 0 (η) plus small inhomogeneities δϕ(x, η). The homogeneous part satisfies where W is the potential associated to the field ϕ.
In the QMCU, it is assumed that the equation of state associated to the scalar field almost mimics that of ordinary (pressureless) matter, i.e. P = ωρ with |ω| ≪ 1; the latter implies ϕ′ 0 2 ≃ 2a 2 W. Consequently, the scale factor (in conformal time) evolves as a(η) ∝ η 2 .
The quasi-matter contraction is characterized by a small parameter ε̄, with |ε̄| ≪ 1, which plays the same role as the slow roll parameter in inflation; ε̄ is defined in terms of the deviation of the background from an exact matter-like contraction (see e.g. [22]). The case ε̄ = 0 corresponds to an exact matter dominated contracting phase (note that ε̄ = ω). Furthermore, for the sake of completeness, we introduce another parameter δ 2 such that |δ 2 | ≪ |ε̄|. The parameter δ 2 is the analog of the δ parameter of slow roll inflation, and it is related to the running of the spectral index in the QMCU model (see e.g. [22]).
As is well known, it is not straightforward to accomplish a non-singular bounce within the framework of General Relativity by considering a single canonical scalar field, since the null energy condition (NEC) is violated (see for instance [14,23]). As a consequence, one possible option is to work with cosmologies within the context of modified gravity theories. In the case of the QMCU presented in [22,23], the authors worked within the framework of holonomy corrected loop quantum cosmology and teleparallel F(T ) gravity.
It is also important to note that even if a non-singular bounce cannot be achieved within general relativity, the origin of the primordial perturbations is assumed to take place during the contracting (pre-bounce) phase of the Universe, where the curvature and energy scales are low enough to be described by General Relativity. On the other hand, one must present the conditions that need to be fulfilled such that the shape of the primordial spectrum, associated to the perturbations, remains practically unchanged when passing through the bounce. We will discuss this subject in more detail in the next section.
Perturbations
In the inflationary Universe and in the QMCU, one can separate the scalar field into a homogeneous part plus small inhomogeneous perturbations. Moreover, the metric associated to the spacetime, in both cases, is described by a FLRW background metric plus perturbations, which are classified as scalar, vector and tensor types (in this paper we will not consider vector perturbations). One useful quantity to describe the scalar (and also the tensor) perturbations is the so called Mukhanov-Sasaki (MS) variable. During inflation, the MS variable is defined in Eq. (8) in terms of the field perturbation δφ and the gauge invariant quantity known as the Bardeen potential [65], which, in the longitudinal gauge, corresponds to the curvature perturbation. A similar expression to Eq. (8) can be used in the QMCU by replacing the fields φ 0 and δφ with ϕ 0 and δϕ, respectively. The advantage of relying on the MS variable is that, when expanding the action of a scalar field minimally coupled to gravity to second order in the scalar perturbations, one obtains the quadratic action δ (2) S of Eq. (9), with v k the Fourier modes associated to the MS variable, z = aφ′ 0 /H during inflation, and z = aϕ′ 0 /H when considering the QMCU. However, it is important to note that during the bouncing phase the action given by the Lagrangian in Eq. (9) remains the same, but the expression for z changes (see Ref. [25] for an explicit calculation within F(T ) theories). On the other hand, during the contraction phase the quantity z″/z can be written explicitly in terms of the QMCU parameter ε̄, Eq. (10), and in a similar fashion using the slow roll inflation parameters, Eq. (11); a sketch of these standard expressions is given below.
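The conventional forms of the quantities entering Eqs. (8)-(11) can be sketched as follows (our reconstruction; the exact first-order coefficients should be checked against the original and Ref. [22]):

$$ v = a\left( \delta\phi + \frac{\phi_0'}{H}\,\Phi \right), \qquad \delta^{(2)} S = \int d\eta \sum_{\vec k} \frac{1}{2}\left[ |v_{\vec k}'|^2 - k^2 |v_{\vec k}|^2 + \frac{z''}{z}\, |v_{\vec k}|^2 \right], $$

with Φ the Bardeen potential. At leading order in the small parameters, $z''/z \simeq 2/\eta^2$ in both scenarios, with first-order corrections proportional to $\bar\epsilon$ in the QMCU and to $\epsilon$ and $\delta$ in slow roll inflation, schematically $z''/z \simeq [2 + \mathcal{O}(\bar\epsilon)]/\eta^2$ and $z''/z \simeq [2 + \mathcal{O}(\epsilon, \delta)]/\eta^2$.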
The CSL model is based on a nonlinear modification to the Schrödinger equation; consequently, it will be advantageous to perform the quantization of the perturbations in the Schrödinger picture, where the relevant physical objects are the Hamiltonian and the wave functional. The Hamiltonian associated to L in Eq. (9) can be written, as in Eq. (12), as a sum of independent contributions from each mode and from the real and imaginary parts of the canonical variables,
where the indexes R, I denote the real and imaginary parts of v k and p k . The canonically conjugated momentum associated to v k is p k = ∂L/∂v′ k . We promote v k and p k to quantum operators by imposing canonical commutation relations, and the wave functional characterizes the state of the system (a sketch of these standard expressions is given below). Furthermore, in Fourier space, the wave functional can be factorized into mode components. From now on, we will deal with each mode separately. Henceforth, each mode of the wave functional, associated to the real and imaginary parts of the canonical variables, satisfies the Schrödinger equation with the Hamiltonian provided by Eq. (12). Note that one can also choose to work with the wave functional in the momentum representation.
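A conventional sketch of the mode Hamiltonian, conjugate momentum and commutation relations referred to above is the following (one common convention, obtained after an integration by parts of the Lagrangian; the original signs may differ):

$$ H_{\vec k}^{R,I} = \frac{1}{2}\,\left(p_{\vec k}^{R,I}\right)^2 + \frac{k^2}{2}\,\left(v_{\vec k}^{R,I}\right)^2 + \frac{z'}{z}\, v_{\vec k}^{R,I}\, p_{\vec k}^{R,I}, \qquad p_{\vec k} = v_{\vec k}' - \frac{z'}{z}\, v_{\vec k}, $$

with canonical commutators $[\hat v_{\vec k}^{R,I}, \hat p_{\vec k'}^{R,I}] = i\,\delta(\vec k - \vec k')$ and the wave functional factorized as $\Psi[v] = \prod_{\vec k} \Psi_{\vec k}^{R}(v_{\vec k}^{R})\, \Psi_{\vec k}^{I}(v_{\vec k}^{I})$.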
The standard assumption is that, at an early conformal time τ → −∞, the modes are in their adiabatic ground state, which is a Gaussian centered at zero with a certain spread. This applies to both the inflationary Universe and the QMCU. This ground state is commonly referred to as the Bunch-Davies vacuum. Thus, the conformal time η is in the range [τ, 0 − ).
Given that the initial quantum state is Gaussian, its shape will be preserved during the evolution. The explicit expression of the Gaussian state can be written in the field representation and, equivalently, in the momentum representation (a sketch of such a parameterization is given below). Therefore, the wave functional evolves according to the Schrödinger equation, with initial conditions corresponding to the Bunch-Davies vacuum, which is perfectly homogeneous and isotropic in the sense of a vacuum state in quantum field theory. The fact that we introduce the wave functional in both the field and the momentum representations is related to the choice of the collapse operator in the CSL model: since there is no physical reason to choose one over the other, both choices are equally acceptable (at least from the phenomenological point of view). In the next section, we will show how to extract, from the theory, the physical quantities to be compared with the observations.
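A generic Gaussian parameterization of the kind referred to above is the following sketch (our notation; the functions A_k, B_k, C_k and their Bunch-Davies initial values are fixed in the original):

$$ \Psi^{R,I}(\eta, v_{\vec k}) = \exp\!\left[ -A_k(\eta)\, \left(v_{\vec k}^{R,I}\right)^2 + B_k(\eta)\, v_{\vec k}^{R,I} + C_k(\eta) \right], $$
$$ \tilde{\Psi}^{R,I}(\eta, p_{\vec k}) = \exp\!\left[ -\tilde A_k(\eta)\, \left(p_{\vec k}^{R,I}\right)^2 + \tilde B_k(\eta)\, p_{\vec k}^{R,I} + \tilde C_k(\eta) \right], $$

with the Bunch-Davies vacuum corresponding to a Gaussian centered at zero whose initial width is fixed by the mode's ground state.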
Theoretical predictions and observational quantities
We begin this section by making some key remarks about the conceptual aspects of our approach and, then, we proceed to identify the relevant physical quantities that will be related with the observed data. We encourage the reader to consult Refs. [34,35,52] for a complete discussion regarding our full picture of the role played by the dynamical reduction of wave function in the cosmological setting. As a matter of fact, the relation between the observables and the predictions from the theory, using the Mukhanov-Sasaki variable during inflation and the CSL model, has been previously exposed in [50]; however, in this section we reproduce the key arguments of such a reference to make the present paper as self-contained as possible. Thus, there is no original work in the following of this section.
The main role for invoking the collapse of the wave function is to find a physical mechanism for breaking the initial homogeneity and isotropy associated to both, the quantum state and the spacetime. More specifically, we assume that a nonlinear modification to the Schrödinger equation, which in the present work is provided by the CSL model, can break the homogeneity and isotropy associated to the vacuum state and, in turn, it can generate the metric perturbations, which correspond to the primordial curvature perturbation.
Note that in the literature one can find statements suggesting that the vacuum fluctuations somehow become classical when the proper wavelength associated with the perturbations becomes larger than the Hubble radius [29,66]. Nevertheless, there is nothing in the dynamics governed by the traditional Schrödinger equation that can change the symmetry of the vacuum state, the symmetry being homogeneity and isotropy. Quantum theory teaches us that the symmetries of a physical system must be encoded in its quantum state; as a consequence, if the quantum state is perfectly symmetric, there is no clear way to describe the inhomogeneities and anisotropies of the spacetime at the quantum level. If the quantum state of the system is perfectly symmetric, then its classical description must also be exactly symmetric. Thus, a proper explanation of the emergence of the primordial inhomogeneities and anisotropies in the Universe is lacking. That is why some non-standard interpretations of Quantum Mechanics that rely solely on the Schrödinger equation (e.g. many-worlds, consistent histories, etc.) cannot provide a satisfactory answer to the problem at hand. It is important to note that the previous discussion applies to both cosmological models, the QMCU and inflation.
The modified Schrödinger equation given by the CSL model can successfully change the symmetries of the vacuum state and, at the same time, be responsible for the birth of the primordial curvature perturbation.
Specifically, in the comoving gauge, the curvature perturbation is characterized by R. The question that arises now is: how can one relate the quantum objects v̂(x, η) and R̂(x, η)? Furthermore, one may wonder how to relate the physical observables, such as the temperature anisotropies of the CMB, to the quantum objects that emerge from the quantum theory. The traditional answer relies on the quantum correlation functions, in particular the two-point quantum correlation function ⟨0|R̂(x, η)R̂(x′, η)|0⟩ and its relation to the two-point angular correlation function of the temperature anisotropies δT/T_0(n̂_1) δT/T_0(n̂_2), where the bar denotes an average over different directions in the celestial sky and n̂_1 and n̂_2 are two unit vectors denoting particular directions. We do not find this answer completely satisfactory; for a detailed explanation we invite the reader to consult Refs. [34,35].
In order to illustrate our approach, we begin by focusing on the temperature anisotropies of the CMB observed today and their relation to the classical comoving curvature perturbation encoded in the quantity R; such a relation is approximately given by Eq. (18) (valid for large angular scales). On the other hand, the observational data are described in terms of the coefficients a_lm of the multipolar series expansion; here θ and ϕ are the coordinates on the celestial two-sphere, and Y_lm(θ, ϕ) are the spherical harmonics. Given Eq. (18), the coefficients a_lm can be further re-expressed in terms of the Fourier modes associated with R, i.e.
where R D is the comoving radius of the last scattering surface and j l (k R D ) the spherical Bessel function of order l of the first kind.
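For reference, the standard large-angle expression for the a_lm in terms of the Fourier modes of R has the schematic form (the overall numerical factor is convention dependent and is assumed here, not taken from the paper):

$$a_{lm} \propto \int d^{3}k\; j_{l}(k R_{D})\, Y_{lm}^{*}(\hat{k})\, \mathcal{R}_{\mathbf{k}},$$

which is how the comoving radius of the last scattering surface R_D and the spherical Bessel functions j_l enter the coefficients a_lm.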
Finally, we can include the effects of late-time physics that give rise to the so-called acoustic peaks. These effects are encoded in the transfer functions Δ_l(k), and thus the coefficients a_lm are given by Eq. (21), where R_k is the primordial comoving curvature perturbation. Note also that for large angular scales Δ_l(k) → j_l(k R_D). The next step is to relate R_k to the quantum operator R̂_k. Clearly, if one computes the vacuum expectation value ⟨0|R̂_k|0⟩ and sets it exactly equal to R_k, then one obtains precisely zero, while it is clear that for any given l, m the measured value of the quantity a_lm is not zero. As a matter of fact, the standard argument is that it is not the quantity a_lm itself that vanishes but rather its average. However, the notion of average is subtle, since in the CMB one has an average over different directions in the sky, while the average that one normally associates with the quantum expectation value of an operator refers to an average over possible outcomes of repeated measurements of an observable associated with an operator in the Hilbert space of the system (and it is evident that concepts such as measurements, observers, etc. are not well defined in the early Universe).
On the other hand, we will assume that the quantity R_k, i.e. the classical value associated with the Fourier mode of the comoving curvature perturbation R(x, η), is an adequate description if the quantum state associated with each mode is sharply peaked around some particular value. In consequence, the classical value corresponds to the expectation value of R̂_k in that particular "peaked" state [53]. In other words, our assumption is that the CSL mechanism will lead to a final state such that the relation R_k = ⟨R̂_k⟩ holds. Therefore, in our approach, the coefficients a_lm in Eq. (21) will be given by Eq. (23), where |Ψ⟩ corresponds to the state evolved according to the non-unitary modification of the Schrödinger equation provided by the CSL mechanism (see Refs. [46,48,49] for other ways to relate R_k and R̂_k using the CSL model, and [50] for a discussion of those approaches). Note also that |Ψ⟩ does not share the same symmetries as the vacuum state, i.e. the inhomogeneity and anisotropy of the system are encoded in the quantum state |Ψ⟩. Furthermore, Eq. (23) shows how the expectation value of the quantum field R̂_k in the state |Ψ⟩ acts as a source for the coefficients a_lm. A well-known observational quantity is the angular power spectrum C_l. We will assume that we can identify the observed value |a_lm|² with the most likely value |a_lm|²_ML obtained from the theory and, in turn, assume that the most likely value coincides approximately with the average of |a_lm|². This average is over possible realizations or outcomes of the state |Ψ⟩ that results from the CSL evolution. Thus, the observed C_l^obs approximately coincides with the theoretical prediction C_l given in terms of the average of |a_lm|².
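The angular power spectrum referred to above is presumably defined in the conventional way; this standard definition is supplied here since the displayed equation is not reproduced in the excerpt:

$$C_{l} = \frac{1}{2l+1}\sum_{m=-l}^{l} |a_{lm}|^{2}.$$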
Using Eq. (23), the theoretical prediction for the angular power spectrum is Moreover, if the CSL evolution is such that there is no correlation between modes (which can be justified by the fact that we are working at linear order in cosmological perturbation theory), then whereR R,I k denotes the real and imaginary part of the field R k (also, we assume that there is no correlation betweenR R k andR I k ). Therefore, Performing the integral over the angular part of k and summing over m, we obtain On the other hand, the standard relation between the primordial power spectrum and the C l is given by where P s (k) is the dimensionless scalar power spectrum defined as As a consequence, Eqs. (29) and (30) imply that the power spectrum in our approach is given by Note that the definition of the power spectrum, Eq. (31), is the canonical definition when dealing with classical random fields, where the average is over possible realizations of the random fields. In cosmology, the usual identification of the two-point quantum correlation function 0|R kRk |0 with R k R k is subtle and concepts such as ergodicity, decoherence and squeezing of the vacuum state are normally invoked.
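Up to convention-dependent numerical factors (assumed here; Eqs. (29)-(31) of the paper fix them), the standard relation between the angular spectrum and the primordial spectrum reads

$$C_{l} \propto \int_{0}^{\infty}\frac{dk}{k}\,\Delta_{l}^{2}(k)\,P_{s}(k), \qquad P_{s}(k) \equiv \frac{k^{3}}{2\pi^{2}}\,\overline{|\mathcal{R}_{k}|^{2}},$$

where the overline denotes the average over realizations mentioned in the text.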
Thus, in terms of the MS variable, the scalar power spectrum in our approach is given by Eq. (33), which is the key result of this section. It shows explicitly how to relate the quantities obtained from the quantum theory to the observed temperature anisotropies of the CMB, and it also exhibits the difference between our approach and the traditional one.
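Schematically (the exact numerical prefactor belongs to Eq. (33) and is not reproduced here), the structure of this result, using the standard relation v = zR, is

$$P_{s}(k) \propto \frac{k^{3}}{z^{2}}\left(\langle \hat v_{k}^{R}\rangle^{2} + \langle \hat v_{k}^{I}\rangle^{2}\right),$$

i.e. the spectrum is sourced by the expectation values of the MS field in the CSL-evolved state rather than by the vacuum two-point function.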
The CSL model in quasi-matter contraction and inflation
In this section, we will focus on the specific details of implementing the CSL model in the QMCU and the inflationary Universe; the main goal will be to obtain a prediction for the power spectra. We begin by noting that, in Eq. (33), the predictions related to the observational data are the expectation values ⟨v̂_k^{R,I}⟩. Therefore, we will apply the CSL model to each mode of the field and to its real and imaginary parts. As a consequence, we will assume that the evolution of the state vector characterizing each mode of the field, written in conformal time, is given by Eq. (34), with Ĥ_k^{R,I} given in (12). Note that the Hamiltonian Ĥ_k^{R,I} depends on the field v̂_k^{R,I}, which is defined in terms of the inflaton perturbations, but it can also be defined analogously using the perturbations of the scalar field associated with the QMCU [one has to take into account the change in z(η)]. Furthermore, we will consider that Eq. (4) can be extrapolated to a generic quantum mode F̂_k, that is, the real and imaginary parts of the mode F̂_k satisfy Eq. (35). At this point, we have to make a choice regarding the collapse operator Θ̂_k^{R,I}. At first sight, the natural candidate is the MS variable, namely Θ̂_k^{R,I} = v̂_k^{R,I}. Nevertheless, we think that, in the absence of a full relativistic CSL model, there is no a priori choice and, thus, the canonically conjugate momentum p̂_k^{R,I} can also be considered as the collapse operator. In fact, in Ref. [50], we have shown that, in the framework of the inflationary Universe, the momentum operator can be used as the collapse operator given that, in the longitudinal gauge, the momentum operator is directly related to the curvature perturbation.
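For orientation, the non-unitary evolution referred to as Eq. (34) is, in the CSL literature followed here (e.g. Refs. [47,50]), usually written in the following schematic form (assumed here, with W(η) the noise function and T̂ the time-ordering operator):

$$|\Psi_{k}^{R,I},\eta\rangle = \hat{T}\exp\left\{-\int_{\tau}^{\eta} d\eta'\left[\,i\hat H_{k}^{R,I} + \frac{1}{4\lambda_{k}}\Bigl(W(\eta') - 2\lambda_{k}\hat\Theta_{k}^{R,I}\Bigr)^{2}\right]\right\}|\Psi_{k}^{R,I},\tau\rangle,$$

where the physical probability of a given noise realization W(η) is the one referred to as Eq. (2).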
Thus, we are going to consider four different cases: We stress that only the third case, that is, the implementation of the CSL model within the inflationary framework using the fieldp R,I k as the collapse operator, was first developed in Ref. [50]. Nevertheless, we are including it in the present work for the sake of completeness. Note however that the analysis in Ref. [50] was done in the longitudinal gauge. In the present paper, we will work in the comoving gauge in all the four cases. The analysis of the three remaining cases, and in particular the implementation of the CSL model during a contracting phase of the early Universe, are presented here for the first time.
For each of these four cases, we will obtain the scalar (and tensor) power spectrum.
Furthermore, the calculation of the object |v R k | 2 is identical to |v I k | 2 . Consequently, we will omit from now on the indexes R, I unless it creates confusion.
Using the Gaussian wave functions in the field representation, Eq. (16), and the probability associated with W(η) in Eq. (2), it can be shown that ⟨v̂_k⟩² takes the form of Eq. (36) [47]. The quantity (4Re[A(η)])⁻¹ is the variance of the field variable v̂_k; it is also the width of each wave packet in Fourier space. In a similar manner, using the Gaussian wave function in the momentum representation, Eq. (17), along with Eq. (2), one obtains Eq. (37). For cases (i) and (ii), it is convenient to work with Eq. (36), and for cases (iii) and (iv) with Eq. (37). Thus, to calculate ⟨v̂_k⟩², we only need to find the two terms on the right-hand side of (36) or (37), respectively. The second term on the right-hand side of both equations can be found from the CSL evolution equation, Eq. (34), while the first one follows from Eq. (35) with the wave function in the corresponding representation. Also, in Eqs. (36) and (37), we consider the regime −kη → 0, which corresponds to the range of observational interest, that is, the regime in which the modes are larger than the Hubble radius.
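In explicit form (this reconstruction follows the expression that reappears as Eq. (A.12) in the appendix), Eq. (36) has the structure

$$\langle \hat v_{k}\rangle^{2} = \langle \hat v_{k}^{2}\rangle - \frac{1}{4\,\mathrm{Re}[A_{k}(\eta)]},$$

and Eq. (37) is the analogous relation written with the width of the momentum-representation Gaussian.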
Once we have computed Eqs. (36) and (37), in the corresponding case, we can substitute it into Eq. (33) to give a specific prediction for the scalar power spectrum. The actual calculations are long, so we have included them in Appendix A for the interested reader. In the following, we will show only the main results.
In case (i), our predicted scalar power spectrum during inflation is (at the lowest order in the slow roll parameter): with ν s ≡ 3/2 + 2 − δ and For case (ii), we have with μ s ≡ 3/2 − 6¯ and In both cases, (i) and (ii), we have also defined: The calculations for obtaining the tensor power spectra are very similar to the one used to obtain the scalar ones (see Appendix A for further details). In case (i), the formula obtained for the tensor power spectrum is where ν t ≡ 3/2 + . Therefore, the tensor-to-scalar ratio r ≡ P t (k)/P s (k), at the lowest order in the slow roll parameter, is given by which is exactly the same prediction as in the standard inflationary slow roll scenario. Meanwhile, in case (ii), the tensor power spectrum is where μ t ≡ 3/2 − 6 = μ s . Also, for very low energy densities and curvatures, z T = a (see Ref. [22]). The tensorto-scalar ratio is given by which is also the same as the one presented in Refs. [22,23]. Note that we have evaluated the upper limit of the integrals at η = ∞. The motivation is essentially the same as the one given in Refs. [22,23]. That is, one evaluates the scalar and power spectra at very late times corresponding to when the mode "re-enters the horizon", or more precisely when k = |a H| during the expanding (post-bounce) phase. The previously presented cases (i) and (ii) correspond to selectingv R,I k as the collapse operator. Next, we focus on the results for cases (iii) and (iv), which correspond to choosê p R,I k as the collapse operator. For case (iii), we obtain: where we have defined ν s ≡ 1/2 + 2 − δ and In case (iv), the corresponding expression results with the definitions μ s ≡ 3/2 − 6 and The constants c 1 , c 2 and c 3 are shown in Appendix A. In both cases, (iii) and (iv), we have the following definitions The predictions for the tensor-to-scalar ratios are exactly the same as the ones presented in cases (i) and (ii) (see Appendix A).
We end this section by summarizing the main results. We have applied the CSL model to the inflationary Universe and to the QMCU. Moreover, in order to employ the CSL model, we need to choose the collapse operator. We have chosen to work withv R,I k andp R,I k as the collapse operators. Henceforth, we have obtained the scalar power spectra in four different cases Eqs. (38), (40), (47) and (49). On the other hand, introducing the CSL mechanism does not affect the tensor-toscalar ratio r . Specifically, if one works within the standard inflationary scenario, then the prediction for r is equal to the standard one given by slow roll inflation; meanwhile, if one adopts the QMCU framework, then the predictions are equal to the ones presented in Refs. [22,23].
Discussion on the CSL inspired power spectra
In this section, we will discuss the implications of the results obtained in the previous section. In particular, we will compare our predicted scalar power spectra with the standard one.
The scalar power spectrum predicted by slow roll inflation is traditionally expressed as [36,67] where k 0 is a pivot scale, and the amplitude A s and the spectral index n s are given by On the other hand, we have four different expressions for the scalar power spectrum, corresponding to the four cases mentioned at the beginning of Sect. 5. In the following, we will analyze each one of them, but first we will make a few observations regarding the parameter λ k .
The dependence on k in the parameter λ k encodes the "amplification mechanism", which is characteristic of dynamical reduction models Refs. [48,49]. One possible way to determine the exact dependence on k, and perhaps the simplest, is by dimensional analysis. That is, the main evolution equations are given in Eqs. (34) and (35); consequently, in order for those equations to be dimensionally consistent, the fundamental dimensions of λ k change depending on the fundamental units associated to the collapse operatorΘ R,I k . Moreover, we expect that λ k is directly related to λ, i.e. the CSL parameter, which clearly must be the same in all physical situations (cosmological or otherwise). Moreover, taking into account that we are working in units in whichh = c = 1, the fundamental dimension of λ is [Length] −1 .
Thus, in the case where the collapse operator is chosen to bev R,I k , the most natural expression of λ k , which is consistent with the dimensions of all terms involving the dynamical equations, is And in the case where the selected collapse operator isp R,I k , such an expression is where λ is the CSL parameter, with the same numerical value in all cases. From now on, we will assume that λ k takes the form of Eqs. (54) and (55) depending on the chosen operator acting as the collapse operator.
The CSL power spectra during inflation
Let us begin the discussion by working within the framework of the inflationary Universe, analyzing cases (i) and (iii). The scalar power spectrum given in Eq. (38), corresponding to case (i), can be written in a similar form to the one showed in Eq. (52). As usual, the power spectrum can be evaluated at the conformal time where the pivot scale "crosses the horizon"; or more precisely, when −k 0 η = 1 (i.e. k 0 = a H) during the inflationary epoch. Furthermore, the different coefficients that multiply each term of the function F 1 (λ k , ν s ) involve the quantity ν s . For these terms, we can approximate ν s 3/2 without loss of generality. However, note that such approximation cannot be done to the powers of k involving ν s because these are directly related to the scalar spectral index n s , for which the value n s = 1 is ruled out. Furthermore, in order to provide a suitable normalization for the CSL power spectra, we multiply and divide by the quantity λ|τ |. Thus, the power spectrum in Eq. (38) can be rewritten as: where and C 1 (k) ≡ F 1 (λ k = λk, ν s 3/2)/λ|τ |; that is, [expressions for ζ k and θ k are given in Eq. (42) with λ k = λk].
Within the inflationary framework, and with the same arguments followed to arrive to Eq. (56), we can write the power spectrum Eq. (47), corresponding to case (iii), in the following form: where A s and n s are the same as in Eq. (57), and C 3 (k) ≡ F 3 (λ k = λ/k, ν s 1/2)/λ|τ |. Thus, [expressions forζ k andθ k are given in Eq. (51) with λ k = λ/k] Let us make some remarks. Notice that the scalar index predicted by the CSL power spectra is exactly the same as the standard one from slow roll inflation, but the amplitude is slightly different. The difference between the standard amplitude and the one using the CSL model is a factor of λ|τ |/2 [see Eqs. (53) and (57)]. The reason for the factor 1/2 can be traced back to Eq. (33), since in our approach the power spectrum receives an equal contribution from the expectation values v R k 2 and v I k 2 . However, the factor 1/2 will not have any important observational consequences. On the other hand, the factor λ|τ |, which comes from the normalization of C 1 (k) and C 3 (k), does modify the standard predicted amplitude. A quantitative analysis will be done in the next section.
A second remark has to do with the following. It is well known that there is a minimum number of e-foldings for inflation related to the solution of the "horizon problem", and this minimum number depends on the characteristic energy of inflation. A shared characteristic of the functions C 1 (k) and C 3 (k) is that they include the quantity τ , which represents the conformal time at the beginning of inflation. This quantity depends on the energy scale at which inflation ends, which is associated to the inflaton potential V at that time, and the number of e-foldings corresponding to the total duration of inflation.
Third, note that another important feature of the CSL power spectra in inflation is that the function C_1(k), corresponding to the case in which the collapse operator is v̂_k^{R,I}, depends explicitly on the conformal time η, whilst the function C_3(k), which corresponds to the case in which p̂_k^{R,I} is the collapse operator, does not exhibit such a time dependence. The time dependence of the power spectrum when the collapse operator is v̂_k^{R,I} has been noted previously by other authors [46,48]. Nevertheless, the exact form of their predicted power spectrum is different from the one shown here. As a matter of fact, that difference is illustrated by considering the limiting case λ_k = 0. In those works, for λ = 0 (i.e. standard Schrödinger evolution), the predicted power spectrum is the same as the traditional one. In contrast, in our approach, if λ_k = 0 then P_s(k) = 0, which is consistent with our point of view regarding the role played by the CSL model. In any case, even if the pictures used for the role of the CSL model differ between our work and [46,48], the time dependence of the power spectrum is shared.
In order to continue, we choose to evaluate the power spectrum (or equivalently the function C 1 (k)) at the conformal time when inflation ends, which we denote by η f . We think it is consistent with the previous calculations in which the power spectrum was obtained in the limit −kη → 0, which is satisfied by the value η f . The precise value of η f depends mainly on the characteristic energy scale of inflation and the number of e-foldings assumed for the full inflationary phase N ≡ ln[a(η f )/a(τ )].
Readers familiar with previous works can check that our expression for the scalar power spectrum, Eq. (59), which features the function C_3(k), is essentially the same as the one obtained in Ref. [50]. The difference is that in the present paper we chose to work in the comoving gauge (where R represents the curvature perturbation), whilst in the aforementioned reference we worked in the longitudinal gauge, where the Bardeen potential corresponds to the curvature perturbation. Therefore, we find it reassuring that, even having worked in different gauges, the expression for the power spectrum, when the collapse operator is the momentum associated with the Fourier mode of the MS variable, is the same, and it has the attractive feature that it does not depend on the conformal time. Figures 1 and 2 show different plots of the functions C_1(k) and C_3(k), respectively. In both cases, we have considered the value of the CSL parameter λ_GRW = 1.029 × 10⁻² Mpc⁻¹, which corresponds to a value favored by experimental data [55,56,59,60]. The various plots in each figure correspond to different values of the characteristic energy of inflation V^{1/4} and of the total number of e-foldings N that inflation is assumed to last, which also set the values of τ and η_f. The values of k considered correspond to those of observational interest, i.e. we consider k in the range from 10⁻⁶ to 10⁻¹ Mpc⁻¹.
As we can observe, the functions C_1(k) and C_3(k) exhibit an oscillatory behavior around unity. For increasing values of k, the oscillations decrease in amplitude. However, we note that even for decreasing values of k the functions C_1(k) and C_3(k) remain very close to 1. Consequently, for the chosen values of λ, V^{1/4} and N, we have C_1(k) ≈ C_3(k) ≈ 1. This means that the shape of the angular power spectrum C_l will not be very different from the standard one, but the amplitude could vary (a complete analysis will be presented in the next section).
Additionally, the fact that C 1 (k) depends on the conformal time does not seem to affect its behavior in a significant manner. In fact, it is closely similar to the one of C 3 (k), which does not depend on the conformal time. That means that the contribution from the time dependent term (i.e. the last term in Eq. (58)), to the total value of the function C 1 (k) is negligible when −kη → 0.
The CSL power spectra in the QMCU
We now switch the discussion to the framework of the QMCU, i.e. cases (ii) and (iv), which correspond to selecting v̂_k^{R,I} or p̂_k^{R,I} as the collapse operator, respectively.
The scalar power spectra given in both cases, i.e. Eqs. (40) and (49), can also be written in a manner similar to the standard spectrum Eq. (52). Once again, following Refs. [22,23], we choose to evaluate the spectrum at the conformal time where the pivot scale "reenters the horizon" k 0 = |a H|, which happens at late times during the expansion phase of the Universe (consequently the upper limit of the integral is evaluated at η → ∞). We approximate (for the same arguments as in the previous subsection) μ s 3/2 in the coefficients of the terms in expressions F 2 (λ k , μ s ) and F 4 (λ k , μ s ) (but not in the powers of k as these powers are directly related to the spectral index n s ). Additionally, the parameter λ k is assumed to be λ k = λk for case (ii) and λ k = λ/k in case (iv). Moreover, we multiply and divide by a factor of λ|τ | in order to properly normalize the expressions F 2 and F 4 .
Hence, the scalar power spectrum for case (ii), Eq. (40), will be written in the form of Eq. (61), with the amplitude and the spectral index n_s − 1 = 12ε̄ given in Eq. (62), and C_2(k) ≡ F_2(λ_k = λk, μ_s ≈ 3/2)/λ|τ|, whose explicit form is given in Eq. (63). On the other hand, the power spectrum for case (iv), Eq. (49), will be written in the form of Eq. (64), with A_s and n_s the same as in Eq. (62), and C_4(k) ≡ F_4(λ_k = λ/k, μ_s ≈ 3/2)/λ|τ|, given explicitly in Eq. (65). As in the case of the inflationary Universe, the predicted value of the scalar spectral index n_s is not affected by the CSL model; in fact, it has the same expression as in the original QMCU models presented in Refs. [22,23]. Nevertheless, as can be seen in Eq. (62), the amplitude of the spectrum is modified by an extra factor of λ|τ|/2 with respect to the original QMCU model. In this case, τ corresponds to the beginning of the quasi-matter dominated period. Regarding the amplitude of the spectrum in the QMCU model, when the background evolution is driven by a matter dominated Universe, it can be obtained analytically working within F(T) gravity or LQC. In the teleparallel F(T) case, the original amplitude, Eq. (67), involves the ratio ρ_c/ρ_P, while in the LQC case the original amplitude is given by Eq. (68), where ρ_P is the Planck energy density, C ≈ 0.9159 is Catalan's constant, and ρ_c is the critical density, which corresponds to the energy density at which the Universe bounces; both expressions for the amplitude can be consulted in Refs. [22,23]. Thus, in order to obtain an amplitude in both cases (the teleparallel gravity case and the LQC case) that is consistent with the one obtained from the CMB data (i.e. A_s ≈ 10⁻⁹), and taking into account the extra factor of λ|τ|/2 coming from the CSL model, the value of the energy density at the bouncing point must satisfy the corresponding bound. Generically λ|τ| ≫ 1; hence, the CSL model introduces an extra constraint on the QMCU, namely ρ_c ≪ ρ_P. In the next section we will perform a more quantitative analysis.
The functions C_2(k) and C_4(k) share a characteristic feature, namely they depend explicitly on −kη, which comes from a series expansion around −kη → 0. Consequently, we choose to evaluate them at η = η_f, corresponding to the end of the quasi-matter domination stage, or equivalently the onset of the bouncing phase. One can also define the total number of e-foldings N ≡ ln[a(τ)/a(η_f)] for the duration of the quasi-matter dominated phase; notice, however, that in this case, since there is no horizon problem, there is no minimum value of N. Another important aspect is that if λ = 0 then C_2(k) = C_4(k) = 0. In other words, if the evolution of the state vector is completely unitary, then there are no perturbations of the spacetime at all and the state vector remains perfectly symmetric, which is consistent with our conceptual framework.
In Figs. 3 and 4, we show different plots for the functions C 2 (k) and C 4 (k), respectively. In both cases, we have considered the value λ GRW = 1.029 × 10 −2 Mpc −1 . The various plots in each figure correspond to different values of τ and η f . The values of k considered correspond to the values of observational interest, hence, we consider k in the range from 10 −6 to 10 −1 Mpc −1 . As we can see, the functions C 2 (k) and C 4 (k) exhibit the same oscillatory behavior around the unity as its counterparts during inflation. Also, the amplitude of each oscillation decreases for increasing values of k.
We end this section with a few comments regarding the dependence of the power spectrum on η in cases (i), (ii) and (iv). In case (i), which corresponds to selecting the MS variable as the collapse operator during inflation, the term containing the η dependence is the last one of C_1(k) in Eq. (58). On the other hand, since the amplitude associated with the modes R_k is "frozen" on super-Hubble scales, the behavior of C_1(k) will not change for super-horizon modes. As a matter of fact, the plots in Fig. 1 show that C_1(k) is essentially a constant in the limit −kη → 0, which means that the term containing the η dependence is sub-dominant in that limit. In cases (ii) and (iv), corresponding to the framework of the QMCU, the behavior of the functions C_2(k) and C_4(k) is very similar to that of C_1(k) (see Figs. 3, 4). That is, they are practically constant in the limit −kη → 0, which means that the terms involving η, i.e. the last terms of Eqs. (63) and (65), are sub-dominant in the super-Hubble limit. Nevertheless, since in cases (ii) and (iv) the Universe approaches a non-singular bounce, it might be the case that, when the mode "re-enters the horizon" (k ≈ |aH|) during the bouncing phase, a modification of the dynamical evolution of the functions C_2(k) and C_4(k) would occur. However, if the duration of the bouncing phase is short enough, one could intuitively expect the spectrum to be left unchanged (although counterexamples exist in the literature [68]). Therefore, one could perform a full analysis of the CSL model during the bounce within the QMCU. Nonetheless, we will take a pragmatic approach and assume that the shape of the spectrum, provided by the functions C_2(k) and C_4(k), survives the bouncing phase; we will then use the observational data to further constrain or completely discard the predicted spectra. In case the predicted spectra are consistent with observations, one can proceed to perform the full-fledged analysis of implementing the CSL model in the QMCU during the bouncing phase and study the possible corrections that may arise from passing the perturbations through the bounce. This subject, however, will not be explored in the present paper.
In the next section, we will explore the implications of the predicted spectra using the observational data.
Effects on the CMB temperature spectrum and its implications on the cosmological parameters
The aim of this section is to analyze the viability of the CSL model by comparing the corresponding predictions with the ones coming from the best fit canonical model to the CMB data. In particular, we will focus on the power spectra obtained using the CSL model and its effect on the angular power spectrum. In order to perform our analysis, we start by setting the cosmological parameters of our fiducial model, which will be used as a reference to compare with the CSL inspired spectra. The fiducial cosmology will be the best fitting flat CDM model from Planck data, with the following cosmological Table 4 presented by the latest Planck Collaboration [5].
Furthermore, we recall that the primordial power spectrum and the angular power spectrum are related by Eq. (30), i.e.
Hence, we will use the CSL predicted power spectra P s (k) = A s (k/k 0 ) n s −1 C i (k) with i = 1, 2, 3, 4, which correspond to the four different cases that we have considered so far. We will focus first on the inflationary model of the early Universe and, then, on the QMCU model. Also, note that the fiducial model corresponds to: P s (k) = A s (k/k 0 ) n s −1 . The precise prediction for the angular power spectrum will be obtained by using the Boltzmann code CAMB [69], with the aforementioned cosmological parameters.
The angular power spectrum and the CSL model during inflation
During inflation, the CSL power spectra are characterized by the functions C_1(k) and C_3(k), Eqs. (58) and (60), with the standard spectral index n_s − 1 = −4ε + 2δ and the amplitude given in Eq. (71). The output of the CAMB code, that is, the temperature autocorrelation power spectrum of the fiducial model and the one provided by the CSL model during inflation, are indistinguishable; thus, we have decided not to show the plots. Instead, we present the relative difference, defined in Eq. (72). Figure 5 shows the relative difference between both predictions; on top we have chosen v̂_k^{R,I} as the collapse operator, while on the bottom we have chosen p̂_k^{R,I}. In both cases, we have set an energy scale of 10⁻⁵ M_P for the energy at which inflation ends, and a total amount of inflation corresponding to 65 e-foldings.
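A natural reading of the definition labeled (72), consistent with how S(l) is used below, is the fractional deviation of the CSL angular spectrum from the fiducial one (the precise form is an assumption, since the displayed equation is not reproduced in this excerpt):

$$S(l) \equiv \frac{\bigl|C_{l}^{\rm fiducial} - C_{l}^{\rm CSL}\bigr|}{C_{l}^{\rm fiducial}}.$$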
We observe that the relative difference between the fiducial spectrum and the one predicted using the CSL model with, for instance, λ_GRW is practically null (the highest difference is around 0.01%). This statement applies to both choices of the collapse operator and to the other λ values listed in Table 1 (not shown in the figure).
We have also checked that the essentially null relative difference between the fiducial model and the CSL model during inflation is also present in the E-mode polarization autocorrelation power spectrum C_l^EE and the temperature-polarization cross-correlation power spectrum C_l^TE. On the other hand, the amplitude of the power spectrum consistent with the CMB data is A_s ≈ 10⁻⁹ [5]. Hence, the amplitude obtained using the CSL model, as shown in Eq. (71), must satisfy this bound. Clearly, different values of λ will have an effect on the amplitude of the spectrum. Assuming that the pivot scale k_0 crosses the Hubble radius at an energy scale of V_0^{1/4} = 10⁻⁴ M_P (i.e. one order of magnitude above the presumed energy at which inflation ends), an estimate for ε can be calculated. The above equation then leads to the condition on λ|τ| (involving the factors 10⁻⁷ and 12π²) given in Eq. (74).
Table 1 shows the different values of ε obtained by considering several values of λ. Also, in the same table, we provide an estimate for the tensor-to-scalar ratio r (recall that the CSL model predicts the same relation as standard inflation, i.e. r = 16ε). From Table 1, it can be seen that only the value corresponding to λ_GRW is consistent with both the observed shape and the observed amplitude of the spectrum. In particular, assuming a characteristic energy scale of inflation of 10⁻⁴ M_P ≈ 10¹⁴ GeV, a total amount of inflation corresponding to N = 65, and the value λ_GRW, we obtain an angular spectrum with a shape and an amplitude that are indistinguishable from the fiducial model, which we know is consistent with the observational data. The amplitude of the spectrum for this particular set of values leads to estimates for the slow-roll parameter and the tensor-to-scalar ratio of ε ≈ 10⁻⁴ and r ≈ 10⁻², respectively. These values of ε and r are consistent with the latest results of the Planck Collaboration [6].
It is also instructive to mention that, if future observations confirm the results of the BICEP2 Collaboration [70], i.e. r ≈ 0.2, then the value of λ_GRW would no longer be compatible with those observations.

In the QMCU framework, the power spectra are characterized by the functions C_2(k) and C_4(k), i.e. Eqs. (63) and (65), respectively. The predicted scalar spectral index is n_s − 1 = 12ε̄, and the amplitude is given by Eq. (75). Notice that we have approximated the integral that appears in the amplitude, corresponding to Eq. (62), by ρ_c/ρ_P [see Eqs. (67) and (68)]. The output of the CAMB code, that is, the temperature autocorrelation power spectrum of the fiducial model and the one provided by the CSL model during the QMCU, are also indistinguishable. Thus, we again present the relative difference defined in Eq. (72), where now C_l^CSL corresponds to the angular power spectrum of the QMCU. Figure 6 shows the relative difference between both predictions; on top we have chosen v̂_k^{R,I} as the collapse operator, while on the bottom we have chosen p̂_k^{R,I}. In both cases, we have assumed a total duration of 50 e-foldings for the quasi-matter contracting phase and a conformal time |τ| ≈ 10⁸ Mpc corresponding to the beginning of the contracting stage. We show only the plot of S(l) corresponding to the value λ_Adler, merely as an illustrative example; the plots for the values λ_1, λ_2, λ_GRW follow exactly the same behavior as the one shown in Fig. 6.
We found no appreciable difference between the fiducial spectrum and the one provided by the CSL model for the four values of λ listed in Table 2; the highest relative difference is around 0.1%. This statement applies to both choices of the collapse operator (the other values of λ listed in Table 2 also achieve an excellent fit, not shown, but the analysis done in this work does not allow us to prefer one value over another). Finally, we have also checked that the essentially null relative difference between the fiducial model and the CSL model during the QMCU is also present in the E-mode polarization autocorrelation power spectrum C_l^EE and the temperature-polarization cross-correlation power spectrum C_l^TE. On the other hand, the amplitude of the power spectrum consistent with the CMB data is A_s ≈ 10⁻⁹ [5]. Therefore, the ratio ρ_c/ρ_P obtained using the CSL model, as shown in Eq. (75), must satisfy the corresponding bound; consequently, an estimate for the energy scale of the critical energy E_c can be obtained. In Table 2, we show the different values of κ_1 and κ_2 obtained using the chosen λ values, with |τ| = 1.15 × 10⁸ Mpc. We infer that the four values of λ considered are consistent with a critical energy scale in the range (10⁻³ M_P, 10⁻⁶ M_P).
It is worthwhile to mention that, in the QMCU, the spectral index n_s and the tensor-to-scalar ratio r are not related to each other as in the standard inflationary paradigm [see Eqs. (46) and (62)]. However, the spectral index, along with the running of the spectral index α_s ≡ dn_s/d ln k, are the two main parameters of the QMCU model used for comparison with the observational data [22-24]. The CSL model applied to the QMCU does not affect those parameters; the only observable affected by the CSL model is the amplitude of the spectrum (which is not related to the parameter r as in the standard spectrum). Consequently, in order to put an upper bound on the energy scale at which the bouncing phase begins, which would be equivalent to setting a constraint on the parameter λ, one should consider a specific theoretical model of the QMCU (i.e. choose a specific dynamics and a potential for the field ϕ). Therefore, in the QMCU, and with the same degree of accuracy as past works dealing with the same model, all four values of the CSL parameter λ considered here yield predictions consistent with the observational data; specifically, the predictions regarding the shape of the spectrum, the scalar spectral index, and the running of the spectral index.
On the other hand, note that the information that could discriminate among different values of λ is codified in the amplitude of the spectrum. The predicted amplitude of the spectrum depends on the critical energy density ρ c (the value of the energy density at the bouncing time), which is model dependent.
Conclusions
The CSL model is a physical mechanism that attempts to provide a solution to the measurement problem of Quantum Mechanics by modifying the Schrödinger equation. The CSL model can be referred to as an objective reduction mechanism, or "effective collapse" of the wave function, and one of its main elements is the collapse operator, i.e. the operator into whose eigenstates the collapse mechanism drives the evolved states. Also, in principle, it is possible to apply such a mechanism to any physical system.
In this work, we have applied the CSL model to the early Universe by considering two cosmological models: the matter bounce scenario (MBS) and standard slow roll inflation. Additionally, we have considered two different collapse schemes, one in which the field variable (given in terms of the Mukhanov-Sasaki variable) serves as the collapse operator, and another in which the collapse operator is the conjugate momentum.
In all cases, we have found a prediction for the primordial power spectrum, which is a function of the standard parameters of each cosmological model and also of the CSL parameter λ. Although the exact expressions for the primordial power spectra are different in each case, there are features that are essentially the same as their standard, non-collapse counterparts. Specifically, the predictions for the scalar spectral index and the tensor-to-scalar ratio are exactly the same as the ones given in the MBS and in slow roll inflation without collapse. On the other hand, in each case, the shape of the spectrum is modified by a function of the wave number k, associated with the modes of the field, and by the inclusion of the λ parameter. However, for a suitable choice of the parameters of the cosmological models, there is no significant change in the prediction for the CMB angular power spectrum (i.e. the C_l's) that can be distinguished from the canonical flat ΛCDM model.
Meanwhile, the prediction for the amplitude of the spectrum is modified directly by the parameter λ. We have empirically explored the range of values of λ, from the value originally suggested by Ghirardi-Rimini-Weber (GRW), λ_GRW ≈ 10⁻¹⁶ s⁻¹ [41], to the one given by Adler, λ_Adler ≈ 10⁻⁸ s⁻¹ [58]. In the case of slow roll inflation, we have found that, for a characteristic energy scale of 10¹⁴ GeV and a total amount of inflation of 65 e-folds, only the value suggested by GRW is compatible with the observational bound on the amplitude; values of λ greater than λ_GRW, e.g. λ_Adler, cannot be made compatible with the observed amplitude (because that would require values of the slow-roll parameter ε > 1). In the MBS case, we have found that the modification of the predicted amplitude of the spectrum, given by the λ parameter, causes the critical energy density ρ_c, i.e. the energy density at which the bouncing phase begins, to be several orders of magnitude smaller than the Planck energy density ρ_P. The precise number of orders of magnitude varies according to the value of λ. For instance, by assuming λ_GRW and a total amount of ∼50 e-folds for the matter dominated contracting phase, we have ρ_c ≈ 10⁻¹⁵ ρ_P. The latter relation is obtained by requiring compatibility between the predicted amplitude of the scalar power spectrum and the one from the Planck CMB data, A_s ∼ 10⁻⁹.
In conclusion, it was possible to incorporate the CSL model into the cosmological context once again, in particular when dealing with the quantum-to-classical transition of the primordial inhomogeneities. Moreover, it is remarkable that our implementation of the CSL model yields predictions that are also in agreement with experiments in the regimes so far investigated empirically. Those experiments involve values of the CSL parameter λ that have been tested in laboratory settings, quite removed from the cosmological framework. We acknowledge that, at this stage, the application of the CSL model to the early Universe, as done in this manuscript, can be seen as an ad hoc application. However, the fact that the predictions can be empirically tested makes us hopeful that future studies will overcome the perceived shortcomings.
(A. 4) In both cases, ζ k and θ k are given in Eq. (42). Next, we focus on the first term of Eq. (36), i.e. v 2 k . It will be useful to define the following quantities: Q ≡ v 2 k , R ≡ p 2 k and S ≡ p kvk +v kpk . (A.5) The equations of evolution for Q, R and S are obtained using Eq. (35), withΘ k =v k . That is, Therefore, we have a linear system of coupled differential equations, whose general solution is a particular solution to the system plus a solution to the homogeneous equation (with λ k = 0). After a long series of calculations we find: Using the above results, we can compute the quantity v k 2 using Eq. (36). In case (i), we substitute Eqs. (A.3) and (A.10) into Eq. (36), obtaining v k 2 = Q(η) − 1 4ReA k (η) π 2k 2 sin 2 (nπ) (A.12) With the expression in (A.12) at hand (which is valid for v R k 2 and v I k 2 ), and using Eq. (33), our predicted scalar power spectrum during inflation (at the lowest order in the slow roll parameter) is given in Eq. (38). (we have also used that during inflation z 2 (η) 2 M 2 P /(H 2 η 2 ) and m = n = 3/2 + 2 − δ).
Analogously (A.13) Hence, substituting the above expression in Eq. (33) yields: − λ k 2(μ s − 1)k 2 (−kη) −2μ s +2 + ζ 2μ s k π sin(π μ s + 2μ s θ k ) sin(π μ s )2 2μ s 2 (μ s ) −1 , (A.14) and μ s ≡ 3 2 − 6¯ , where we have used m = −n = 3/2 − 6¯ . As argued in Refs. [22,23], during the quasi-matter contracting phase z η 2 /(3 √ 3) and |a H| = −2/η, which implies that Eq. (A.14) can be written as in the final form of the power spectrum presented in Eq. (40) Let us focus now on the tensor modes. The action for the tensor perturbations is obtained from the Einstein-Hilbert action by expanding the tensor perturbations h i j (x, η) up to second-order [67]. The resulting action for the tensor field h i j (x, η) can be expressed in terms of its Fourier modes h i j (k, η) = h k (η)e i j (k), with e i j (k) representing a timeindependent polarization tensor. Performing the change of variable the action can be written as δ (2) Consequently, the replacement β →β, in the equations of the present subsection, allows us to obtain the tensor power spectra shown in Eqs. (43) and (45).
where c 1 ≡ 1 2(m + 1) , c 2 ≡ 1 2 2 (m + 1)(m + 2) , (A.23) 24) In both cases, the definitions of the quantitiesζ k andθ k are given in Eq. (51). Now, we have to obtain the first term of the right hand side of Eq. (37), that is, v 2 k . We will employ the same procedure as in the previous subsection. We use the previous definitions for the quantities Q(η),R(η) and S(η) Eqs. (A.5) and (35) but taking into account that Θ R,I k =p R,I k . Thus, the evolution equations are: Those equations are solved using the initial conditions provided by Q(τ ) = 1/(2k), R(τ ) = k/2 and S(τ ) = 0. We are mainly interested in the solution for Q(η) ≡ v 2 k [which is the first term on the right hand side of Eq. (37)]. Then, performing the series expansion to the lowest order around −kη → 0 yields Q(η) π 2k 2 sin 2 (nπ) k 2 − k 2 λ k τ 2 + mkλ k sin cos (A. 28) Since in this case m = −5/2 + 6 and n = 3/2 − 6 , and considering only the first dominant term in the expansion around −kη → 0, we finally obtain the expression for the power spectrum presented in Eq. (49). The procedure to obtain the tensor power spectra is analogous to the one outlined in the previous subsection, but clearly the difference is thatΘ R,I k =p R,I k . In the following, we will only present the results.
For case (iii), the tensor power spectrum is given by the corresponding expression with ν_t ≡ 1/2 + ε; consequently, the tensor-to-scalar ratio is r = 16ε, which is the same as the standard prediction of slow-roll inflation.
For case (iv), the formula for the tensor power spectrum is where μ t ≡ 3 2 − 6 = μ s . Therefore, the tensor-to-scalar ratio is exactly the same as the one shown in Eq. (46). | 18,116 | 2016-07-01T00:00:00.000 | [
"Physics"
] |
SONG AND VIDEO ANIMATION ON VIRUS: MULTIMEDIA TO INCREASE STUDENT'S LEARNING ACHIEVEMENT
This study aimed to improve student achievement through song-based multimedia and animated videos on virus material. The research is a Research and Development (R&D) study adapting the ADDIE model. The population was class X of SMA IT Baitul Qurro, South Tangerang; sampling used non-probability sampling with the saturated sampling technique, in which the entire population of 45 students served as the sample. Learning achievement was measured using instruments validated by experts (teachers and biology lecturers): essay questions (C1-C6) for the cognitive test, a questionnaire for the affective test, and observation sheets for the psychomotor test. The tests were conducted before (pretest) and after (posttest) learning. Data analysis used the average score, percentage, and N-gain, and the data obtained were analyzed with normality and homogeneity tests. The results indicate that song-based multimedia and animated videos can improve student achievement, with average N-gain values of 56.8% in the cognitive domain (quite effective), 57% in the affective domain (quite effective), and 75% in the psychomotor domain (quite effective).
INTRODUCTION
Learning achievement is evidence of student success in accepting, refusing, and processing all information and knowledge provided in the learning process, expressed in grades and report cards for each field of study (Hamdu & Agustina, 2011). Assessment of learning achievement can be done within a certain period, such as during daily tests, one quarter, or even after the end of the semester (Sirait, 2016). Learning achievement includes three main aspects, namely cognitive, affective, and psychomotor, so the success of learning achievement depends on the ability of students to fulfill these three aspects (Hamdu & Agustina, 2011). The success of learning achievement is not only determined by students but the participation of teachers as facilitators in conveying, facilitating, educating, supporting, and guiding students so that students' knowledge (cognitive), positive values (affective), and skills (psychomotor) increase and obtain good grades (Hamid et al., 2020).
In the current digital revolution era, education has prioritized the media as an intermediary of information to achieve exciting and fun learning goals. Teachers are required to reduce the lecture method and replace it with media. One learning media, which includes text, verbal, audio, and visual, is multimedia. Multimedia can be in the form of animated videos, music, or interactive multimedia. Multimedia in learning can facilitate and optimize learning outcomes (Nurseto, 2012).
Biology learning conveys many theories and concepts (Laila et al., 2018), and students have difficulty learning different biological materials at each grade level. In grade X, students have difficulty learning about viruses and bacteria (Firmanshah et al., 2020). The concepts of virus and bacteria material that students consider problematic include understanding the characteristics of viruses, differentiating the body structure of viruses from that of other organisms, synthesizing how viruses multiply, the roles of viruses, and how to protect themselves from the dangers of viruses such as influenza, AIDS, and swine flu (Khan & Read, 2018). Learning difficulties will undoubtedly affect the process and the results students obtain in learning. This is in line with Diki (2013), who argues that students who have difficulty learning biological concepts or materials will show lower enthusiasm and poorer learning outcomes. Difficulties in understanding the virus material also occur in class XI of SMA IT Baitul Qurro, South Tangerang: learning outcomes on virus material are 50% below the KKM, based on observations and interviews with the biology teacher.
One learning medium that can be used to make it easier for students to understand the material is animated video. Animated video media can be used as a learning tool at any time to convey specific learning objectives (Rahmayanti & Istianah, 2018). The use of animated videos in the learning process increases the interaction between teachers and students and produces an effective learning process: students are interested, enthusiastic, more active, and understand the lesson better when media are used. A search of about 1,000 articles using the Publish or Perish 7 application found no animated videos combined with songs, and such videos are not found on YouTube or other social media either; existing videos only use verbal explanations. It is still rare to find songs whose lyrics contain biology subject matter in collaboration with animated videos. This is the novelty of the media to be developed, namely song-based multimedia and animated videos on virus material.
The widespread use of media in biology can increase student achievement (Luh & Ekayani, 2021). For example, the use of computer-based multimedia can improve student learning achievement (Prastika et al., 2015), and student learning outcomes increase significantly after using animated videos in learning (Ponza et al., 2018). Student learning achievement is the result achieved by students across the cognitive, affective, and psychomotor domains, and it can be measured using tests or other relevant instruments (Rosyid et al., 2019). The development of song-based multimedia and animated videos is therefore expected to improve student achievement. For the above reasons, this research aims to develop multimedia based on songs and animated videos on virus material to improve student achievement.
METHOD
The type of research used is research and development (R&D) with the ADDIE model adapted from Dick and Carey (1999). The population of this study was students of class X of SMA IT Baitul Qurro, South Tangerang City, in the 2021/2022 academic year. Sampling used non-probability sampling with the saturated sampling technique. Saturated sampling takes the entire population as the sample and is often used in research with a relatively small population (Sugiyono, 2016); thus, the entire population of 45 students was used as the sample.
The instrument used to validate the multimedia by experts (lecturers) and students was a Likert-scale questionnaire focusing on four aspects: material, illustration, display quality, and attractiveness. Learning achievement was measured using several instruments that had first gone through a validation stage with experts (biology teachers and lecturers), who concluded that the questions were valid and feasible to use after several revisions. The cognitive domain uses an essay test of 20 items that adopts the cognitive taxonomy of Bloom and Anderson, covering C1-C6 with the indicators remembering, understanding, applying, analyzing, evaluating, and creating. The affective domain uses a questionnaire of 7 items covering four dimensions: acceptance, responding, appreciation, and internalization. The psychomotor domain uses an observation sheet covering two dimensions: movement skills and verbal and nonverbal skills.
The steps of development research based on the ADDIE model are as follows: (1) Analyze: the analysis phase began with classroom observation, which revealed students' low achievement in learning the biology material on viruses; next, multimedia was formulated to improve learning achievement, and song lyrics were written based on the basic competencies (KD), objectives, and indicators of the virus material. The Adobe Premiere Pro 2020 editing application was then chosen, and finally the learning achievement indicators related to the virus material in class X MIA were analyzed.
(2) Design: the design stage consists of two aspects, media and instruments. For the media, the virus song was recorded and the animated video display was designed based on the KD, objectives, and indicators. For the instruments, blueprints were prepared for the expert and student validation instruments and for the learning achievement instruments.
(3) Development: the development stage began with completing the editing process using Adobe Premiere Pro 2020. After the media were finished, the song lyrics and animated video went through a validation process assessed by two experts (biology lecturers), who acted as media and material experts, and the product was tried out with 22 students of class XI (small class) as test subjects to determine its validity. The last step was to revise the media according to the suggestions given by the two experts. (4) Implement: the implementation stage is the process of deploying the product in a real-world context, namely by providing multimedia-assisted learning on the virus material in class X; beforehand, a pretest was carried out to determine students' initial cognitive, affective, and psychomotor abilities using the previously developed instruments. (5) Evaluate: after the multimedia was presented, in the next lesson students were given a posttest in the cognitive, affective, and psychomotor domains. The evaluation stage determines the adequacy of learning (Yusuf et al., 2017). At this stage, the effect of the multimedia on improving student achievement was analyzed.
Data analysis. The data analysis technique for the expert validation test and the small-class test on students uses the average value and percentage, interpreted with a classification table (Arikunto, 2008). The data analysis technique for learning achievement in the cognitive domain uses the highest and lowest scores on the pretest and posttest and the N-gain. The data analysis technique in the affective and psychomotor domains uses the average and percentage values interpreted with a classification table. The data obtained were analyzed using prerequisite tests consisting of a normality test using the Kolmogorov-Smirnov test and a homogeneity test using the F test.
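As an illustration of the analysis pipeline described above, a minimal sketch in Python is given below; the function names, the example scores, and the maximum score of 100 are hypothetical and are not taken from the study.

```python
import numpy as np
from scipy import stats

def n_gain(pretest, posttest, max_score=100.0):
    """Hake's normalized gain, (post - pre) / (max - pre), averaged over students."""
    pretest = np.asarray(pretest, dtype=float)
    posttest = np.asarray(posttest, dtype=float)
    gains = (posttest - pretest) / (max_score - pretest)
    return gains.mean() * 100.0  # expressed as a percentage

# hypothetical pretest/posttest scores for one domain
pre = [40, 55, 35, 60, 45]
post = [75, 85, 70, 88, 80]
print(f"average N-gain: {n_gain(pre, post):.1f}%")

# prerequisite test 1: Kolmogorov-Smirnov normality test on standardized scores
z_pre = (np.array(pre) - np.mean(pre)) / np.std(pre, ddof=1)
z_post = (np.array(post) - np.mean(post)) / np.std(post, ddof=1)
print("KS normality (pretest):", stats.kstest(z_pre, "norm"))
print("KS normality (posttest):", stats.kstest(z_post, "norm"))

# prerequisite test 2: homogeneity of variance via the F test (two-sided p-value)
f_stat = np.var(post, ddof=1) / np.var(pre, ddof=1)
dfn, dfd = len(post) - 1, len(pre) - 1
p_val = 2 * min(stats.f.sf(f_stat, dfn, dfd), stats.f.cdf(f_stat, dfn, dfd))
print(f"F test for homogeneity: F = {f_stat:.2f}, p = {p_val:.3f}")
```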
RESULTS AND DISCUSSION
At the analysis stage, several problems were identified. Students' low achievement was evident from their midterm (UTS) scores, and their enthusiasm for learning was low. Analysis of the biology teacher's lesson plan then yielded the indicators for the virus material: identifying the characteristics of viruses, explaining the structure of viruses, analyzing viral replication, and explaining the roles and harms of viruses in life.
At the design stage, song lyrics were written and the display designs, transitions, effects, audio, and the selection and cutting of animated videos were prepared; the videos were sourced from YouTube channels and Google (Neuron, Mas Iki, Servier Medical Art, Armando Hasudungan, and Amhaus) and assembled in Adobe Premiere Pro 2020, as shown in Figure 1.

At the development stage, validation results were obtained from two media and material experts. The validation sheet was given in stages, starting with the song lyrics and then the animated video. The revisions received took the form of several criticisms, suggestions, and inputs, including: 1) distinguishing the song improvisation used with students from the version uploaded to social media (YouTube), so that students can follow it easily; 2) correcting the video chosen for the lyric "lysogenic forms a prophage"; and 3) displaying the song lyrics consistently at the bottom of the screen. The multimedia then went through the editing stage again to address these inputs, after which the multimedia (song and animated video) was tried out. The trial was carried out with 22 class XI students; because the population was relatively small and all of class X would be used as the study sample, class XI was chosen as the trial (small) class to keep its assessment separate from that of the large class. The validation results from the two expert validators are shown in Table 1.

The assessment of the material aspect consists of four indicators. First, the suitability of the media to the virus material obtained 85% (very valid), meaning that the multimedia song and animated video cover the virus material taught to class X high school students. Second, the suitability of the multimedia to the learning objectives obtained 90% (very valid), showing that the song lyrics cover the learning objectives of identifying the characteristics of viruses, replication, and the roles of and losses caused by viruses. Third, the suitability of the media to the basic competencies (KD) obtained 80% (quite valid), meaning that the song lyrics and animated video accord with the established KD, namely analyzing the structure, replication, and role of viruses in life. Fourth, not causing misunderstanding obtained 80% (quite valid), meaning that after seeing and hearing the song and animated video, students understand the material without confusion or misconceptions about the virus material.

The assessment of the illustration aspect consists of two indicators. First, the illustrations describe the actual situation, obtaining 80% (quite valid), meaning that the animated video gives a correct visual description of the characteristics of viruses, the viral replication process, the forms of viruses, and the roles of and losses caused by viruses, consistent with established theory and facts. Second, the suitability of the song lyrics to the animated video obtained 85% (very valid), meaning that the video shown follows the lyrics being sung; for example, when the lyric is "lysogenic forms a prophage," the video shows an animation of the lysogenic process.
The assessment of the display aspect consists of three indicators. The quality of the animation, music, and song lyrics obtained 75% (quite valid). Video quality (transitions, effects, and resolution) obtained 80% (quite valid). The animated video is a compilation of several videos, such as 3D animation, images, and infographic videos, and belongs to audio-visual media (Hariati et al., 2020); the transitions in the video briefly illustrate the song's lyrics. Animated videos also have positive implications for students' motivation in learning activities (Widiyasanti et al., 2018), in this case for the virus material. Audio quality (sound and tone editing) obtained 85%. The melody of the song was adapted from the song "Dari Mata" by the young singer Jazz. Choosing popular songs that teenagers identify with can make it easier for students to follow the melody and the music video in learning media; in general, music is entertaining, can increase student motivation and learning quality, and is relevant to students' hobbies (Juwita & Nasution, 2018; Roffiq et al., 2017). The singer's voice in the animated video is good and melodious, and the clarity of the voice and music also received a valid rating.
The assessment of the attractiveness aspect consists of three indicators: first, learning becomes exciting and fun, obtaining 100% (very valid); second, the multimedia makes it easier for students to review the virus material, since it can be accessed and played anywhere and anytime, obtaining 100% (very valid); and third, it reduces student boredom, obtaining 100% (very valid).
The small-class trial involved 22 class XI students who had studied the virus material when they were in class X. Student responses covered their assessment of media-assisted learning, the influence of the multimedia on learning the virus material, and their assessment of the quality of the media. The trial results are shown in Table 2. Student responses to learning with the multimedia songs and animated videos reached 85% (Good); students said that learning with the multimedia was more fun, engaging, and not dull. Research by Hardiyan and Fajriyah (2017) likewise shows that interactive multimedia-based animation can improve the quality of learning, and other work reports that learning with audio (biology songs) is effective in biology learning (Prayitno & Hidayati, 2017).
Student responses to the use of multimedia in learning reached 82% (Good). The multimedia helps students remember and understand the content easily and can be accessed and replayed anytime and anywhere. The use of multimedia (audio-visual) media can improve students' memory (Hastuti, 2019), which is consistent with the findings of this study. The assessment of media quality obtained 81% (Good); students stated that the video and audio resolution of the virus song was good, the sound and music were clear, and the video visualization was easy to understand.
The implementation stage applied the song- and animated-video-based multimedia-assisted learning in class X MIA of SMA IT Baitul Qurro. There were four meetings on the virus material, each with a time allocation of 1 x 45 minutes. The first meeting began with a pretest to determine students' initial abilities, the second meeting introduced the multimedia, the third meeting involved learning by singing along to the songs in the animated video, and the fourth meeting carried out the learning evaluation. The multimedia prototype developed is shown in Figure 2. At the final stage, students were given several tests, the first being a cognitive test of 20 essay questions on the virus material, matched against the pretest given at the initial meeting. The cognitive pretest and posttest results can be seen in Table 3. Based on the effectiveness thresholds set by Hake in 1999, an N-gain percentage of 56%-75% is declared quite effective (Sinuraya & Mihardi, 2019). The students obtained an average cognitive N-gain of 56.8%, which means that the song-based multimedia and animated videos are quite effective in increasing learning achievement. This is consistent with Suyitno (2016), who reported that implementing multimedia could improve student learning outcomes. Therefore, song-based multimedia and animated videos can improve student achievement, particularly in class X of SMA IT Baitul Qurro, South Tangerang. Multimedia serves as a bridge for channeling information, knowledge, and resources to students, so that teachers do not merely explain in words but visualize the material in varied ways, which is expected to improve students' cognitive abilities in learning biology (Satria & Egok, 2020).
Animated videos packaged with songs make it easier for students to understand the material. Abstract material can be visualized using animated videos so that the characteristics of viruses, the process of viral development, and the roles and harms of viruses in life are easy to understand. This is in line with research reporting that animated videos make it easier for students to understand biological material that cannot be explained in words alone or seen with the naked eye.
This research draws on Piaget's cognitive learning theory, which holds that incoming information first enters short-term memory in the left hemisphere of the brain through the senses of hearing and sight; from short-term memory, the information is then processed into symbols stored in long-term memory. Therefore, to reach students' cognition, it is useful to deliver information through both audio and visuals, such as animated videos (Noviyanto et al., 2015).
Learning achievement is the sum of students' test scores reflecting their mastery of cognitive knowledge and skills. Cognitive knowledge is mastered more deeply when it is practiced and frequently repeated. Prayogo (2012) reported that using multimedia in learning makes it easier for students to review the subject matter, especially in the current era of the digital revolution, in which information can be accessed easily on the internet.
The second test at the evaluation stage is affective (attitude). The pretest was given at the initial meeting of the virus lessons, and the posttest was carried out after the entire series of multimedia-assisted lessons was completed. The results of the affective test are presented in Table 4. In the affective domain, the average N-gain was 57% (quite effective); the lowest and highest pretest scores were 60 and 82, while the lowest and highest posttest scores were 80 and 100. This means that students showed an attitude of accepting the material during learning, were active in learning, considered studying biology necessary and valuable, accepted the opinions of others, and displayed scientific attitudes such as curiosity, enthusiasm for learning, and relating the material to everyday phenomena such as efforts to prevent the spread of the virus during the pandemic. Multimedia can stimulate students' scientific attitudes such as curiosity and thoroughness, and students show enthusiasm in learning accompanied by singing and chanting (Rahmawanto, 2018).
In the psychomotor domain, the N-gain was 75% (quite effective), as presented in Table 5, meaning that the multimedia is quite effective in improving students' psychomotor skills. According to Chandra and Sugeng (2019), the development of multimedia products contributes positively to students' psychomotor aspects. In this study, the psychomotor aspect was viewed from verbal skills, because students actively sang along to the songs provided; the multimedia also attracted students' attention and kept them focused on learning. The data at this evaluation stage first passed the prerequisite tests, namely the Kolmogorov-Smirnov normality test and the F test for homogeneity. For the normality test with N = 45 and α = 0.05, the critical value is 0.19842; the data are normally distributed if D_max < 0.19842. The pretest data gave D_max = 0.116 < 0.19842, so they are normally distributed, and the posttest data gave D_max = 0.151 < 0.19842, so they are also normally distributed. In the homogeneity test, F count = 1.280, and the F table value at α = 0.05 with DF1 = 44 and DF2 = 44 gives the condition that the data are homogeneous if the calculated F value is less than 1.651. Since 1.280 < 1.651, the pretest and posttest data are homogeneous.
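As a quick check of the decision rules above, the sketch below simply replays the reported statistics; the critical values and test statistics are taken from the text, not recomputed from the raw data, which the paper does not publish.

```python
# Reported statistics from the text; this only replays the decision rules.
ks_critical = 0.19842            # Kolmogorov-Smirnov critical value, N = 45, alpha = 0.05
d_max = {"pretest": 0.116, "posttest": 0.151}
for name, d in d_max.items():
    verdict = "normal" if d < ks_critical else "not normal"
    print(f"{name}: D_max = {d} -> {verdict}")

f_count, f_table = 1.280, 1.651  # homogeneity of variance, DF1 = DF2 = 44
print("homogeneous" if f_count < f_table else "not homogeneous")
```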
CONCLUSION
Based on this development research using the ADDIE model, the song-based multimedia and animated videos on the virus material are quite effective in increasing student achievement, with the following details: the cognitive domain obtained an average N-gain of 56.8% (quite effective), the affective domain an average N-gain of 57% (quite effective), and the psychomotor domain an N-gain of 75% (quite effective).
ACKNOWLEDGEMENT
Thank you to the two validators who helped validate this song and video animation-based multimedia, namely 1) Mr. | 5,061 | 2021-12-31T00:00:00.000 | [ "Education", "Computer Science" ] |
Ravynic acid, an antibiotic polyenyne tetramic acid from Penicillium sp. elucidated through synthesis†
A new antibiotic natural product, ravynic acid, has been isolated from a Penicillium sp. of fungus collected from Ravensbourne National Park. The 3-acylpolyenyne tetramic acid structure was definitively elucidated via synthesis. Highlights of the synthetic method include the heat-induced formation of the 3-acylphosphorane tetramic acid and a selective Wittig cross-coupling to efficiently prepare the natural compound's carbon skeleton. The natural compound was shown to inhibit the growth of Staphylococcus aureus down to concentrations of 2.5 μg mL−1.
Introduction
The search for new antibiotic compounds that possess novel modes of action and inhibit new targets is an ongoing challenge in the struggle against infectious disease. One class of natural products that has attracted much attention is that bearing the tetramic acid moiety.1 Some years ago we reported the isolation and elucidation of a tetramic acid-containing natural product, ravenic acid, 1 (tetramic acid moiety highlighted in red, Fig. 1), from a Penicillium sp. collected from Ravensbourne National Park in south-east Queensland, Australia.2 Ravenic acid possessed moderate antibiotic activity against a clinical strain of methicillin-resistant Staphylococcus aureus (MRSA). Re-examination of the fungal extract revealed a very minor co-metabolite of the Penicillium sp. that was shown to possess significantly greater antibiotic activity than ravenic acid, but was obtained only in microgram quantities from large-scale fermentation, effectively precluding study by NMR. However, a mass spectrum was recorded, and the CID ESI-MS fragmentation pattern displayed many similarities to that of 1, but with an m/z ratio of the deprotonated molecule (m/z 256, −ve ESI), along with those of some daughter ions, two units lower. Given ravenic acid's structure, and following careful analysis of the mass spectrum, we reasoned that the most probable modification accounting for a loss of two mass units is oxidation of a double bond to a triple bond in the polyene side chain. As such, tetramic acids 2 and 3 were identified as synthetic targets to allow definitive elucidation of the co-metabolite of ravenic acid (Fig. 1).
Results and discussion
Pathways presented previously by Ley3 and Moloney4 offered promise for the preparation of 2 and 3 but had been shown to be unsuccessful with unsubstituted amides and unsaturated side chains. During our studies of ravenic acid and its potential co-metabolites, Schobert and co-workers published an elegant synthesis of ravenic acid, representing a new procedure for preparing 3-acyl tetramic acids in a highly efficient manner.5 Thus, we modelled a retrosynthetic pathway on the procedure outlined by Schobert (Fig. 2). Late-stage Wittig olefination would allow a divergent strategy for the preparation of 2 and 3 from the common intermediate 4 and aldehydes 5 and 6. Compound 4 could be prepared by the thermal coupling of tetramic acid 7 and phosphorane 8. The use of the Boc protecting group is notable, as it had been employed successfully by Schobert, in contrast to the di-methoxybenzyl protecting group, which had been the downfall of our previous attempts to synthesize ravenic acid: despite numerous attempts employing a variety of deprotection protocols, the starting material consistently decomposed.
In the forward direction, tetramic acid 7 was prepared in one step from N-Boc-glycine (9) by coupling with Meldrum's acid mediated by EDC and DMAP, followed by heat-induced rearrangement, in 83% yield (Scheme 1). Consistent with previously reported data,6 compound 7 exists as a mixture of keto and enol tautomers in CDCl3 in an approximate ratio of 1 : 1, while in deuterated methanol the enol form is exclusively observed. Phosphorane 8 was formed by treatment of phosphorane 10 with sodium hexamethyldisilazide.5 In spite of the structural similarity of 8 to ketenes, it is relatively stable and does not display a tendency to dimerise.7 Phosphorane 8 and tetramic acid 7 were coupled under thermal conditions to form the key intermediate 4: heat facilitates nucleophilic addition of the enol tautomer of 7 to 8, with subsequent proton transfer and tautomerism leading to the formation of 4. In CDCl3, tetramic acid 4 exists in two forms, which have been assigned as the phosphorane and the tautomeric/resonance betaine form.
Preparation of the requisite aldehydes 5 and 6 was achieved in a rapid and efficient manner (Scheme 2). Propargyl alcohol was brominated by treatment with potassium hydroxide and bromine to form 11, and subsequent copper-mediated coupling with propyne led directly to alcohol 12. Stereoselective reduction of the triple bond proximal to the alcohol with LiAlH4 afforded an alcohol in which the stereochemistry of the newly created alkene was deduced from the coupling constant (3J = 15.6 Hz) between the olefinic protons at δ 5.68 and 6.15, indicative of a trans relationship between the substituents. Oxidation employing Dess-Martin periodinane yielded aldehyde 13, which was used directly in the following step. Sonogashira coupling of E-1-bromoprop-1-ene with propargyl alcohol afforded alcohol 14, and subsequent oxidation using DMP provided aldehyde 15 in 77% yield over the two steps. Isomerization was not observed in either step, with the stereochemistry of alcohol 14 determined through analysis of the olefinic coupling constants (3J = 15.8 Hz). Examination of conditions for the Horner-Wadsworth-Emmons olefination between triethyl 2-phosphonopropionate and aldehydes 13 and 15 found that sodium hydride as the base increased the yield and selectivity, giving the desired E isomer. The proton NMR spectra and GCMS data indicated that one stereoisomer was formed preferentially (>95%), with literature comparisons8 and the HWE reaction's known stereochemical outcomes supporting the assignment of the E configuration.9 Attempts were made to proceed directly to aldehydes 5 and 6 in one step using DIBAL-H as the reducing agent; however, treatment of the resulting esters 16-17 with one equivalent of DIBAL-H gave a 1 : 1 mixture of the alcohol and the starting ester, indicating that the intermediate aldehyde is reduced more rapidly than the ester. Thus, three equivalents of DIBAL-H were used to ensure complete conversion of the esters to the conjugated alcohols, and the alcohols were subsequently oxidised to aldehydes 5 and 6 using DMP. With the synthesis of the target aldehydes complete, attention turned to the final steps in the synthesis of the ravenic acid analogues.
The common tetramic acid phosphorane intermediate 4 was coupled with aldehydes 5 and 6 via the Wittig olefination to give Boc-protected ravenic acid analogues 18 and 19 in yields of 52% and 62%, respectively (Scheme 3). With either silica or alumina column chromatography resulting in decomposition, size exclusion chromatography (Sephadex LH-20) was employed to purify the polyene tetramic acids. In each case, the product with the E stereochemistry about the newly created olefinic double bond was formed, as deduced by analysis of the olefinic coupling constants (3J = 15.6 Hz) in the 1H NMR spectrum. The concluding step involved treatment with TFA in dichloromethane, resulting in cleavage of the Boc protecting group and providing the ravenic acid analogues 2 and 3.
Finally, comparison of the mass spectral data, as well as co-injection on an HPLC column, allowed the assignment of the co-metabolite as structure 3, named here ravynic acid. The synthesis of this newly discovered natural product was performed in a convergent manner, with the longest linear sequence comprising nine steps. Ravenic acid had been shown to inhibit the growth of a methicillin-resistant Staphylococcus aureus strain down to 25 µg mL−1.2 Ravynic acid has been shown to inhibit the growth of the same strain down to approximately 2.5 µg mL−1. Further biological testing will allow assessment of the spectrum of activity and determination of the mode of action of this antibiotic.
Conclusions
In conclusion, we have isolated and unambiguously identified a new antibacterial tetramic acid natural product from a Penicillium sp. of fungus. Ravynic acid is a 3-acyltetramic acid, which has been synthesized from Boc-glycine and propargyl alcohol to allow definitive elucidation of the structure. Notable features of the synthesis include the thermally induced formation of the 3-acylphosphorane tetramic acid and a selective Wittig cross-coupling to efficiently prepare the natural compound's carbon skeleton.
Experimental
General information 1 H, 13 C NMR spectra were recorded at 298 K, at 300 MHz, 400 MHz or 500 MHz and 75 MHz, 100 MHz or 125 MHz respectively, on an Inova 300, Varian MR-300, Varian MR-400 or Inova 500 instruments. Chemical shifts are reported in ppm (δ). NMR experiments were run in deuterated chloroform (CDCl 3 ), or methanol (CD 3 OD) as indicated; 1 H NMR spectra are referenced to the resonance from residual CHCl 3 at 7.26 ppm, or CHD 2 OD at 3.31 ppm, 13 C NMR spectra are referenced to the central peak in the signal from CDCl 3 at 77.0 ppm, or CD 3 OD at 49.0 ppm. The appearance and multiplicities of 1 H resonances are expressed by the abbreviations: s (singlet), d (doublet), t (triplet), q (quartet), m (multiplet) and combinations thereof for more highly coupled systems. 13 C NMR were run as proton decoupled spectra. 1 H signals and 13 C signals, where appropriate, are described by chemical shift δ (multiplicity, |J| (Hz), integration, assignment). Where the spectral characteristics ( 1 H and 13 C NMR) agree with published data, a literature reference is cited. Where no literature is available, including known but insufficiently characterised compounds, data ( 1 H NMR, 13 C NMR, IR, MS, HRMS) are presented. Assignments were supported by 2D NMR analysis including HSQC, HMBC and NOESY data and by literature precedence. EI-MS and HREI-MS were recorded on a VG autospec mass spectrometer, operating at 70 eV. ESI-MS and HRESI-MS were recorded on a Bruker Apex 3. Positive ionisation was detected unless otherwise indicated. Mass/charge ratios (m/z) are reported and relative abundance of the ions is as percentage of base peak intensity. IR spectra were recorded on a Bruker Alpha-P as neat solid or film. Diagnostic peaks are presented in wavenumbers (cm −1 ). Peak strengths are expressed by the abbreviations: s (strong), m (medium), w (weak) and b (broad). Melting points were determined on a Reichert Thermogalen hot stage microscope and are uncorrected. Flash chromatography was performed under pressure using silica gel (230-400 mesh Scharlau 60, 40-60 micron). Analytical thin layer chromatography (TLC) was performed on Merck silica gel 60 F 254 aluminium backed plates. Plates were viewed under UV light (254 nm) and/or by developing with ceric phosphomolybdic acid (100 mL water, 5 g phosphomolybdic acid, 0.6 g Ce (SO 4 ) 2 , 6 mL conc. H 2 SO 4 ) dip. In general, reagents were purchased from commercial suppliers and used without further purification unless otherwise stated. According to standard procedures, all solvents were dried and distilled either immediately prior to use or stored as appropriate. Diethyl ether and THF were refluxed under a static N 2 atmosphere over sodium and benzophenone. Dichloromethane was distilled from calcium hydride. Toluene was distilled from sodium. Ethanol was distilled from magnesium. The petroleum ether fraction used was 60-80°C.
Culturing of MINAP-9902
The fungus was grown on 110 mm malt extract agar plates at 25°C for 4 days. Eight 250 mL conical flasks were prepared, four with 50 mL potato dextrose broth (PDB) and four with 50 mL of malt extract broth (MEB). Each flask was inoculated with two 1 cm 2 blocks of fungal culture of the malt extract plate. These flasks were maintained at 25°C, and agitated on an orbital shaker at 120 rpm for 4, 6, 8 and 10 days (two conical flasks, one each of PDB and MEB, were removed periodically every two days from the four day mark). For each of the conical flasks, the mycelia were mechanically separated from the broth by filtration and the mycelia were homogenised in a solution of CH 2 Cl 2 /EtOH (1 : 4). The crude extracts were concentrated, then suspended in 50 mL of deionised water, and extracted successively with hexane (2 × 50 mL), CH 2 Cl 2 (2 × 50 mL) and EtOAc (2 × 50 mL). Previous studies 1 indicated that the bioactive metabolites of interest were to be found in the CH 2 Cl 2 layer. Thus, the CH 2 Cl 2 layers were dried over MgSO 4 , filtered, and concentrated in vacuo to provide samples that were subjected to HPLC.
HPLC
Analytical HPLC analyses were conducted on the eight fungal extracts prepared. HPLC conditions: Analytical and semi-preparative HPLC was performed using an Agilent 1100 system, utilising a Phenomenex LUNA 5μ C-18 (2) 250 × 4.60 mm column (analytical) or a Phenomenex LUNA 5μ C-18(2) 250 × 10.00 mm column (semi-preparative) and an Agilent 1100 series diode array detector for eluate detection. Elution was carried out using 0.1% v/v TFA in Milli-Q water and pre-filtered (0.45 μm nylon membrane filter) MeCN, with detection at 254 nm and 300 nm or 400 nm. Elution was carried out with 60% MeCN/H 2 O (0.1% TFA), increasing to 80% MeCN/H 2 O (0.1% TFA) over 30 min, with flow rates of 1.5 mL min −1 (analytical) or 2.5 mL min −1 (semi-preparative). In the first stage of HPLC work, analytical HPLC was carried out on synthetic analogues 2 and 3 to determine the retention times of these compounds. Subsequent analytical runs were conducted on the eight fungal extracts prepared from MINAP-9902, to probe for retention times similar to those for the prepared synthetic analogues. Our analysis of these runs revealed that the fungal extracts that were cultured for 6 days (both PDB and MEB) contained the metabolite of interest in a ratio that was yet to be overwhelmed by a suite of compounds produced by this fungus upon prolonged culture. This was based on the presence of peaks with similar retention times to our synthetic analogues and the comparison of the UV spectra of these peaks to our synthetic analogues. In particular, we decided to focus on the PDB broth that was cultured for 6 days (sample name: PDB6), as the signals of interest appeared stronger in the PDB HPLC trace, in comparison to that of the MEB HPLC trace. In PDB6, a strong peak observed at 14.4 min (analytical HPLC) was assigned to be ravenic acid. Peaks with retention times in the vicinity of analogue 3 (16.0 min) were also present (15.9 min and 17.6 min), but peaks with retention times close to that of 2 (13.2 min) were notably absent. Coinjection of 3 with PDB6 resulted in peak enhancement, strongly indicative of matching structures. At this stage semipreparative HPLC was performed on PDB6 to isolate the metabolite of interest. Small quantities of the metabolite were isolated and mass spectral data recorded: EI-MS m/z (%) 257 (93, M •+ ), 242 (10), 129 (48), 126 (46), 84 (75), 69 (76), 57 (100). HREI-MS: calc 257.1052 (C 15 H 15 NO 3 ) found 257.1052. The EI-MS of the metabolite isolated from PDB6 closely resembled the EI-MS of 3, with matching molecular ions (m/z 257), as well as a number of daughter ions (m/z 126, 84, 69) observed in both spectra. Re-injection of the metabolite of interest to the HPLC returned a peak with a retention time matching that of synthetic analogue 3 (see ESI †), confirming that no decomposition or structural modification had taken place. The matching retention times and mass spectral data of the unknown metabolite with analogue 3, provide strong evidence that the structure of the metabolite corresponds to that of 3.
Determination of the sensitivity of compounds against Staphylococcus aureus
Kirby-Bauer bioassays were performed routinely to identify fractions containing active compound(s). This was achieved by inoculating an 83 mm nutrient agar plate with a 200 µL suspension of Staphylococcus aureus cells in sterilised deionised water and spreading this suspension over the agar surface with a flame-sterilised glass spreader. The plate was dried for 10 min in a laminar flow hood while 5 mm paper disks were soaked in a solution of the fractions or compounds of interest dissolved in a volatile solvent. The disks were allowed to dry and then lightly pressed onto the agar surface. The plates were then incubated at 37°C for 16 hours and examined for growth inhibition around the disks. The sensitivity of a clinical strain of Staphylococcus aureus was assessed by a turbidity assay at OD600 (optical density at λ = 600 nm) as described in a previous publication.17 | 3,824.8 | 2016-09-01T00:00:00.000 | [ "Chemistry", "Medicine" ] |
The Effects of Nonperforming Loans on Dynamic Network Bank Performance
This paper explores the relationship between banks' performance and their nonperforming loans (NPLs). Banks' performance is modeled through a network production process structure that includes NPLs. With NPLs increasing in recent years, the quality of lending assets is a significant factor influencing banks' operational risk. Methodologically, the radial and nonradial measures of efficiency are integrated into the network production process framework with NPLs: this study utilizes a network epsilon-based measure (NEBM) model to evaluate the performance of the banking industry. In addition, the key characteristics of the banking industry, including those of financial holding companies and privatized government banks, are examined to provide insight into what causes imperfectly competitive conditions for some banks. The results demonstrate that the banking sector grew consistently in three aspects of operation (operating performance, profitability performance, and risk management) in the last five years of the subject period, showing that the overall banking sector was capable of pursuing growth in both operations and profits while accounting for risk management. The potential applications and strengths of network data envelopment analysis in assessing financial organizations are also highlighted.
Introduction
This study is motivated by developments in the literature on the relationship between banks' performance and financial stability, and by the efforts currently being made to reduce nonperforming loans (NPLs) [1][2][3]. During the last decade, NPLs have been one of the most significant trends in banking [4]. The World Bank indicated that when banks adopt NPLs as one of their performance indicators, their performance improves considerably. Figure 1 shows that the banking market experienced a relatively high ratio of bank nonperforming loans to total gross loans, in the range of 3-4%, over the past 10-year period. Statistics and publications of the Central Bank of the Republic of China (Taiwan) state that the NPL ratio of Taiwanese banks fell from 2.81% in the first quarter of 2004 to 0.25% in December 2014, the lowest in recent banking history. In general, banks reduce NPLs not only to enhance their operating and profitability performance [5] but also to improve their risk management [6]. Banks have realized that managing NPLs not only enhances their corporate image but may also create profits for them. As more banks manage NPLs actively, doing so becomes not only a popular trend but also an important part of banks' core competitiveness. In summary, NPLs play a significant role in banks' performance, and their management is now seen as an integral part of strategy.
The early literature on bank efficiency focused mainly on total productivity [7] and bank branch efficiency [8,9]. Recent studies have trended towards the relationship between bank efficiency and risk management [10][11][12][13]. The results in these articles demonstrate that incorporating financial risk variables (e.g., NPLs or risky assets) into the estimation and ranking of efficiency matters. Therefore, we investigate the impact of risk variables on bank performance; NPLs are selected as the risk variables and, for the purposes of this paper, are treated as undesirable outputs. Bank performance evaluation problems are inherently complex, with multilayered internal linking activities and multiple entities. Data Envelopment Analysis (DEA) models [14,15] have been used to evaluate the relative performance of banks by using multiple inputs and outputs simultaneously. However, conventional DEA models cannot capture the complex nature of banks with internal linking activities (e.g., NPLs or risky assets). Network DEA models using radial or nonradial measures of efficiency have been used for bank performance evaluation [16], but these models cannot handle problems in which radial and nonradial inputs/outputs must be considered simultaneously. DEA models using epsilon-based measures (EBM) of efficiency were first proposed for the simultaneous consideration of radial and nonradial inputs/outputs [17].
The objective of this paper is to provide an alternative perspective on and characterization of performance by evaluating the operating efficiency of leading banking firms in Taiwan, which should yield additional managerial insights into competitive advantage. The contributions of this study are (a) measuring banks' performance through a network production process structure comprising an operating stage, a profitability stage, and a risk management stage by utilizing the NEBM model, stressing the importance of growing strength and competitiveness; (b) applying the NEBM model [18] to integrate radial and nonradial measures of efficiency into a unified framework for a bank performance evaluation problem; and (c) examining whether differences exist in the various efficiency characteristics of the banking industry, including those of financial holding companies (FHCs) and privatized government banks (PGBs).
Related prior studies that have influenced this work are discussed in Section 2. The design of the performance model and an introduction to the methodology are addressed in Section 3. The empirical results and their interpretation are provided in Section 4. Finally, Section 5 concludes with the findings of this study.
Literature Review
The rapid growth of the Taiwanese economy, particularly since 1991, has led a number of scholars, both within Taiwan and overseas, to study the performance of the Taiwanese banking industry. However, these earlier studies did not take NPLs into account, although NPLs are a critical factor affecting the development of the Taiwanese banking industry [19]. Later studies included NPLs as a fixed input when measuring banking efficiency, providing strong evidence that NPLs are an important undesirable output [1,6,20].
Many scholars have employed the directional distance function to measure banking efficiency with NPLs because of its ability to handle the orientation problem [21,22]. Fukuyama and Weber [23] showed that the directional distance function solves the orientation problem and employed a generalized nonradial and nonoriented data envelopment analysis to solve both problems; this model is described as the slacks-based measure directional distance function.
Chang et al. [24] and Hu et al. [25] found that the higher the ratio of government shareholding or the larger the scale of a commercial bank, the lower its ratio of NPLs. Subsequently, Li (2005), Park and Weber (2006), and Fukuyama and Weber (2008) also treated a bank's NPLs as undesirable outputs [26][27][28]. These studies demonstrated that considering undesirable outputs has a great influence on measured performance. Accordingly, this research incorporates NPLs into the efficiency estimation and regards NPLs as risk variables.
A Network Production Process Structure with NPLs for Banks.
A network production process structure with NPLs is designed to open the black box and explain the internal operating structure of a bank. The study indicates that the variables meet the criteria required to explore the operational efficiency of banks in the operating stage and their application of resources for maximum benefit. Research indicates that labor, capital, and operating expenses are the most important factors to examine. A bank's financial market operations require substantial manpower and play a key role in capital markets and financial capital flows; for good performance, banks must improve overall financial market performance.
Therefore, the "operating performance" measures the bank's management to generate competitive superiority, consisting of three types of its major costs (labor, fixed asset, and operational expenses) and three outputs (deposit, loans, and NPLs).The profitability stage examines profit performance, which represents output items (deposits and loans) from the operating stage as inputs to the profitability stage.The output items of the profitability stage are the interest earnings [29] and operating profit.The risk management consists of the input item that is the output of operating performance output item (NPLs) and two output items including nonaccrual loans and allowance for uncollectible accounts.The network production process structure with NPLs is shown in Figure 2 and the variables are as defined in Table 1.All nonperforming loans shall be transferred to non-accrual loans account item within six (6) months after the end of the payment period.However, those restructured loans to be performed in accordance with the agreement shall not be subject to this restriction
𝑍4: allowance for uncollectible accounts
With regard to the write-off of nonperforming loans and nonaccrual loans, the amount provided under the loan loss provision or the reserve against liability on guarantees shall be used to offset [the write off], and, if such amount(s) is insufficient, the deficiency shall be recognized as a loss in the current year published the information in the annual report since the year 2004.Cooper et al. [30] suggested that the number of DMUs (Decision-Making Units) should be at least triple the number of inputs and outputs considered.In this study the number of banks is 286 (26 × 11), which is larger than triple the number of inputs and outputs, or 286 > 3(10) = 30.
To handle the undesirable output, Seiford and Zhu (2002) [31] propose a method for dealing with undesirable outputs in the DEA framework: each undesirable output is multiplied by −1 and then shifted by a suitable translation value so that the resulting negative values become positive. The translated bad output ensures that the optimized undesirable output cannot be negative. This approach reflects the real production process and is invariant to the data transformation within the DEA model. We therefore apply this method to treat the undesirable output factors in this study.
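A minimal sketch of this transformation is given below. The translation constant (here derived from a `margin` parameter) and the example NPL figures are illustrative assumptions; any constant large enough to make every transformed value strictly positive would do.

```python
import numpy as np

def translate_undesirable(bad_output: np.ndarray, margin: float = 1.0) -> np.ndarray:
    """Seiford-Zhu style transformation: negate the undesirable output and
    shift it by a constant so every transformed value is strictly positive."""
    negated = -bad_output
    beta = -negated.min() + margin      # smallest constant that keeps values > 0
    return negated + beta

# Hypothetical NPL amounts (undesirable output) for five bank-year observations.
npls = np.array([120.0, 85.0, 240.0, 60.0, 310.0])
print(translate_undesirable(npls))      # larger original NPLs map to smaller values
```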
Finally, the significance of the input and output indicators reflects the relationship between them and its direction. We verify the correlation between inputs and outputs with a regression model to establish the adequacy of the variables in this study; the regression results are shown in Tables 2 and 3, with the input indicators of the operating stage listed in the left column. According to the results for the operating stage, labor and fixed assets show significant correlations, indicating a high correlation between the inputs and outputs of operational efficiency in that stage. The outputs of the operating stage serve as the inputs of the profitability stage, and the results show that deposits, loans, and NPLs are significantly correlated with them, indicating that the variables chosen to capture profitability and risk management are adequate. These results validate the applicability and stability of the inputs and outputs of the research model, which adequately represents profit performance and risk management.
Methodology: An Epsilon-Based Measure of Efficiency.
In the present study, the NEBM model [18] is employed to construct assessment mechanisms for banks. The proposed method accounts for the diversity of inputs and outputs in determining relative efficiency. It combines the advantages of the Charnes-Cooper-Rhodes model (1978) [14] and the slacks-based measure, overcoming the drawback of a conventional radial model that ignores nonradial efficiency measures, and it also addresses the problem that conventional models force concurrent, unidirectional increases or decreases in inputs and outputs. The proposed method can adjust all input and output variables as the situation requires, without unidirectional increases or decreases, so as to conform to practical applications and provide an accurate analysis.
Let us consider the bank structure. Each bank is represented as if it were a different bank in each of the successive years, and an analysis of the $n \times T$ bank observations is performed using NEBM models to obtain sharper and more realistic efficiency estimates. In this study, each bank is treated as a DMU. Let $x_{ih}^{jt}$, $y_{rh}^{jt}$, and $b_{vh}^{jt}$ denote the $i$th input ($i = 1, \ldots, m_h$), the $r$th output ($r = 1, \ldots, s_h$), and the $v$th undesirable output ($v = 1, \ldots, q_h$) of the $h$th division ($h = 1, \ldots, H$) of the $j$th bank ($j = 1, \ldots, n$) at time $t$ ($t = 1, \ldots, T$). Each undesirable output is multiplied by $-1$ and translated so that it becomes positive; the translated bad output is $\bar{b}_{vh}^{jt} = -b_{vh}^{jt} + \beta$. The intermediate measure between the $h$th and $h'$th divisions of the $j$th bank at time $t$ is denoted $z_{(h,h')}^{jt}$. The overall efficiency score of a bank under the NEBM model, Model (1), is obtained by minimizing a weighted combination of the divisional radial efficiency terms and input slacks, where $w_{ih}^{-}$ is the weight of the $i$th input of the $h$th division (the input weights sum to one within a division), $\varepsilon_h$ is determined from the degree of dispersion of the inputs of the $h$th division, $s_{ih}^{-t}$ is the slack of the $i$th input of the $h$th division at time $t$, and the divisional weights are set by the decision makers. Constraints 1, 2, and 3 of Model (1) refer to the $h$th division's inputs, outputs, and undesirable outputs, respectively. The fourth constraint links the intermediate products: the right-hand side represents the products sent from the $h$th division, and the left-hand side represents the same products received by the $h'$th division. Model (1) is called the NEBM model. The steps of the NEBM DEA model proposed in this study are described below.
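The displayed objective of Model (1) did not survive extraction. For orientation, a sketch of the single-division, input-oriented EBM objective of Tone and Tsutsui [17], on which this network formulation builds, is given below; the network version applies such a term to each division and adds the intermediate-product linking constraints described above.

\[
\begin{aligned}
\gamma^{*} = \min_{\theta,\lambda,s^{-}} \;\; & \theta - \varepsilon_x \sum_{i=1}^{m} \frac{w_i^{-} s_i^{-}}{x_{io}} \\
\text{s.t.}\;\; & \theta x_{io} = \sum_{j=1}^{n} \lambda_j x_{ij} + s_i^{-}, \quad i = 1,\ldots,m, \\
& \sum_{j=1}^{n} \lambda_j y_{rj} \geq y_{ro}, \quad r = 1,\ldots,s, \\
& \lambda_j \geq 0, \qquad s_i^{-} \geq 0 .
\end{aligned}
\]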
After the diversity matrix is obtained, the affinity matrix $S_h = [s_{k\ell}^{h}]$ for the $h$th division is constructed, where $s_{k\ell}^{h}$ represents the degree of affinity of input vector $k$ to input vector $\ell$ and is calculated from the corresponding diversity index. Step 3: calculate $\varepsilon_h$ and $w_{ih}^{-}$ from the affinity matrix. Once $S_h$ is obtained, its largest eigenvalue $\rho_h$ and the corresponding eigenvector $w_h = (w_{1h}, w_{2h}, \ldots, w_{m_h h})$ are calculated, and the values of $\varepsilon_h$ and $w_{ih}^{-}$ are estimated from them. When an input vector is compared with the other vectors, the higher its degree of affinity, the larger its component of the eigenvector; Equation (7) confirms this premise.
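The estimating equations referenced above were lost in extraction. Assuming the paper follows the standard EBM weighting scheme of Tone and Tsutsui, the input weights come from the principal eigenvector of the affinity matrix and epsilon from its largest eigenvalue; the sketch below illustrates that computation. The 3x3 matrix stands in for the operating-stage affinity matrix, and its off-diagonal entries are placeholders, not the values in Table 5.

```python
import numpy as np

def ebm_parameters(affinity: np.ndarray):
    """Derive (epsilon, weights) from an affinity matrix S, following the
    standard EBM scheme: w is the principal eigenvector of S (normalized to
    sum to 1) and epsilon = (m - rho_max) / (m - 1), where rho_max is the
    largest eigenvalue of S and m is the number of inputs."""
    m = affinity.shape[0]
    eigvals, eigvecs = np.linalg.eigh(affinity)   # S is symmetric
    rho_max = eigvals[-1]
    w = np.abs(eigvecs[:, -1])
    w = w / w.sum()
    eps = (m - rho_max) / (m - 1)
    return eps, w

# Placeholder affinity matrix for the three operating-stage inputs
# (labor, fixed assets, operating expenses); off-diagonal values are illustrative.
S = np.array([[1.00, 0.82, 0.80],
              [0.82, 1.00, 0.81],
              [0.80, 0.81, 1.00]])
eps, w = ebm_parameters(S)
print(f"epsilon = {eps:.3f}, weights = {np.round(w, 3)}")
```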
After determining $\varepsilon_h$ and $w_{ih}^{-}$, the NEBM model is applied. The efficiency score of the $h$th division of the $j$th bank at time $t$ is defined analogously to the overall score. A bank is NEBM efficient provided that its overall score equals 1, and a division is NEBM efficient at time $t$ provided that its divisional score equals 1 at that time. Note that a bank does not become NEBM efficient unless all of its divisions are efficient.
Estimation of Efficiency Scores.
To obtain the efficiency scores of the sample over the period 2004-2014, the parameters $\varepsilon_h$ and $w_{ih}^{-}$ must first be determined. Since DEA is a data-driven method, we determine them from the data set $(X, Y)$: the two parameters are obtained from the newly defined affinity index between inputs (or outputs), so that EBM takes into account the diversity of the input/output data and their relative importance when measuring technical efficiency. The affinity index between two vectors replaces Pearson's correlation coefficient [17]. First, the raw data yield a diversity matrix via (3), and (4) yields an affinity matrix; these matrices are computed separately for operational efficiency, profitability, and risk management. Tables 4 and 5 show the diversity matrix and the affinity matrix for operating performance. The maximum eigenvalue and the corresponding eigenvector of the affinity matrix are $\rho = 2.624$ and $w = (0.344, 0.342, 0.313)$.
Hence the operating-stage parameters $\varepsilon$ and $w^{-}$ follow. Tables 6 and 7 show the diversity matrix and the affinity matrix for profitability performance; the maximum eigenvalue and the corresponding eigenvector are $\rho = 1.767$ and $w = (0.5, 0.5)$.
Similarly, Tables 8 and 9 show the diversity matrix and the affinity matrix for risk management; the maximum eigenvalue and the corresponding eigenvector are $\rho = 1.794$ and $w = (0.5, 0.5)$, from which the risk-management parameters follow. Table 10 presents the results for operating performance, profit performance, and risk management over the period 2010-2014. Figure 3 shows a mild recession in banks' performances in the 2004-2007 period, followed by a transition in the overall banking sector in the 2008-2011 period, which was marked by a significant improvement in operating performances. Banks' performances plateaued in the 2012-2014 period while maintaining good drivers of growth. The Asian Financial Crisis occurred in 1997, but it was not until 2000 that banks started to accumulate more bad debts and an increased ratio of NPLs, resulting from the difficult business environment and a drop in property prices. The Taiwanese government implemented financial reforms, which successfully prevented the further deterioration of bad loans: a bank exit mechanism was established, and the merger and acquisition of banks was legalized. The quality of personal consumer finance credit in the banking sector worsened from 2005, as the outbreak of the dual-card crisis (cash card and credit card) forced banks focusing on this area of business to tackle a large quantity of bad debts.
It is noted that in 2004-2007, with business and personal finances worsening, banks became more conservative in credit approvals. Furthermore, financial regulations restricted the development of certain types of business transactions, resulting in a fall in the operating efficiency of banks, which dropped to its lowest point of 0.795 in 2007. In 2008, the overall operating performance of the banking sector slowly recovered from the recession, and its performance value improved to 0.826, the highest of the preceding five years, marking a key turning point. This improvement indicates that bank deposits were growing while NPLs were simultaneously under effective control. Banks maintained strong growth in their operating performance for the next three years, which can be observed from the large annual growth in the performance value. The vibrant property market in Taiwan stimulated the growth of mortgage businesses, one reason for the increase in the performance value; the increase in cross-strait trade due to various investment protection acts and memoranda was another explanatory factor.
The stock market in 2008-2009 was impacted by the bankruptcy of Lehman Brothers: the Taiwan Capitalization Weighted Stock Index (TAIEX) repeatedly recorded new lows, and it became much more difficult for businesses to raise capital directly from the capital markets (i.e., through Initial Public Offerings (IPOs) and Secondary Public Offerings (SPOs)). If businesses had to increase their investments or maintain capital for operations, they would turn to their banks and apply for increases in their credit limits. This situation contributed to continuous growth in the overall banking sector's operating performance over the next few years. Beginning in 2008, the Taiwanese banking sector set up leasing companies and Offshore Banking Units (OBUs), establishing branches or subsidiaries in Mainland China. Banks were eager to expand their business with Taiwanese companies and state-owned enterprises in Mainland China, and also in the interbank market, with the goal of extending their lending business from Taiwan to Mainland China. The operating performance of the overall banking sector remained at 0.92 or above during the 2012-2014 period, higher than each of the annual figures in the 2004-2011 period, clearly illustrating the improvement from an input-output perspective.
Although both the productivity of bank deposits and loans and the quality of credit reached a high performance level, the drivers of growth slowed down, and there was an urgent need to open up financial policy to find new ones. To attain fairness in gains and distributive justice, the Taiwanese government reintroduced the capital gains tax on securities transactions in 2012. Investors scurried to transfer their capital abroad, looking for better investment channels. These actions affected capital flows and mobility in the stock market, suppressed businesses' desire to invest, and adversely affected the overall banking sector, making the expansion of deposit and loan businesses even more challenging. The overall banking sector was facing a plateau in its operations; to break through the limits on growth, banks could adopt merger and acquisition strategies to expand the scale of their operations, engage in financial operations and transactions, or introduce derivative products to investors to generate fee income.
Figure 4 analyzes banks' profitability performance in the profitability stage, revealing significant improvement in the 2004-2007 period. While there was a decline in the performance of the overall banking sector in 2008-2010, performance improved slowly in the 2011-2014 period. After the Asian Financial Crisis, the Taiwanese government carried out its first financial reform; the banking sector allocated costs for the allowance of bad debts and wrote off bad debts over 3-4 years, and banks adjusted their financial health and put an emphasis on risk management. The profitability performance of bank operations gradually recovered and reached a relatively satisfactory level; until the outbreak of the global subprime mortgage crisis in 2007, the performance metric stood at 0.607. Beginning in 2008, Taiwanese banks were impacted by the global subprime mortgage crisis for three consecutive years, and there was a significant recession in the profitability performance of the banking sector.
The worst performance metric recorded was 0.347 in 2010. Even though the profitability performance of the banking sector fell significantly during this period, the operating stage tells a different story: by 2008 the banking sector was already leaving the trajectory of recession, and the drivers of operating performance growth remained strong for the subsequent three years. This shows that, during the global subprime mortgage crisis, banks' loan and credit quality remained relatively healthy and bad debts were controlled effectively. After the crisis, the profitability performance of the overall banking sector began to bounce back in 2011, reaching a plateau in 2014 with a performance metric of 0.510. The banking sector's operating and profitability performance both improved in this period, showing that banks' operations were becoming more stable and that banks were able to manage risk while pursuing revenue and profits. The decline in the risk management metric in 2007 was due to the dual-card crisis in personal consumer finance (cash cards and credit cards), while the metric was affected by the global subprime mortgage crisis in 2009. Nonetheless, the risk management of the overall banking sector quickly returned to a growing trend after each of these two events, indicating that the banking industry had greatly improved its ability to adapt to changes in the external business environment. At the same time, the risk management of the banking sector improved significantly with the Basel Accords and the capital adequacy ratio (Bank for International Settlements (BIS) ratio) requirement stipulated by the monitoring authorities.
Generally, the study found that, in the last five years of the subject period, the banking sector grew consistently in three aspects of operation: operating performance, profitability performance, and risk management. These results showed that the overall banking sector was capable of pursuing growth in both operations and profits while accounting for risk management.
Characteristics and Performance of Banks within a Commerce Group.
To explore whether differences exist in the various efficiency characteristics of the banking industry, including those of FHCs and PGBs, a nonparametric statistical analysis, the Kruskal-Wallis test, is used because the distribution of the scores is unknown [32]. The nonparametric results are presented in Table 11. However, an efficiency and value analysis alone cannot fully identify which key resource allocations the banks should review. Therefore, this section applies the NEBM model in a further analysis of the banks' use of resources: the differences in resource use indicate which resources should be increased or decreased, depending on the shortfalls identified in the item-by-item analysis, and provide guidance for improvement.
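The paper reports only the conclusions of the Kruskal-Wallis comparison, not the test statistics; the sketch below shows how such a comparison of efficiency scores across bank groups could be run. The group labels and score arrays are hypothetical placeholders, not values from Table 11.

```python
from scipy import stats

# Hypothetical NEBM efficiency scores grouped by bank type (placeholders).
fhc_scores = [0.91, 0.88, 0.95, 0.87, 0.93]    # financial holding company banks
pgb_scores = [0.82, 0.85, 0.80, 0.88, 0.84]    # privatized government banks
other_scores = [0.78, 0.81, 0.76, 0.83, 0.79]  # remaining banks

# Kruskal-Wallis H test: do the groups come from the same distribution?
h_stat, p_value = stats.kruskal(fhc_scores, pgb_scores, other_scores)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group's efficiency distribution differs significantly.")
```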
Table 11 analyzes the efficiency of the banking institutions in terms of overall average efficiency scores, annual growth in operational efficiency, progress in operating expenses, and manpower constraints. Banking institutions can exploit existing resources to obtain good performance and annual growth: FHCs show no significant growth, whereas PGBs show significant growth. In terms of profitability performance, efficiency falls short of expectations; earnings failed to meet expectations, and the interest rate environment resulted in poor profit performance in the profitability stage, so the average overall efficiency failed to grow year by year.
In terms of the effectiveness of risk management and profit, banks under FHCs performed better than non-FHC banks (NFHCs), and banks with government shareholding likewise performed well compared with nongovernment banks. This result shows that larger banks performed better in adjusting the allocation of their resources. Overall, these results show that the banking sector was capable of pursuing growth in both operations and profits while accounting for risk management, and they highlight the potential applications and strengths of network data envelopment analysis in assessing financial organizations.
Concluding Remarks
The study found that NPLs constituted one of the most influential factors affecting banks' performance; thus, the epsilon-based network data envelopment analysis was introduced into the study to explore banks' operations, profit making, and risk management, resulting in the following findings.
First, banks that fell under the framework of financial control performed better than other banks, with significantly higher performance metrics in profit making and risk management. This finding indicated that banks with a larger scale had better resource allocation than independent banks; their risks were dispersed because of synergy and diversification.
When the proportion of bad debts and NPLs among bank loans rose, if the government implemented financial reform, establishing a bank exit mechanism and legalizing mergers and acquisitions among banks, the loan problem could be controlled and prevented from worsening, and the emergence of systematic financial risk could be avoided.The requirements of the Bank of International Settlements-Basel Accords and the domestic monitoring authorities regarding the capital adequacy ratio (Bank of International Settlement Ratio (BIS Ratio)) have significantly improved and enhanced the risk management of the banking sector.
Second, the study built a risk assessment model of banks' bad debts, and then the network epsilon-based dynamic DEA method was used to conduct empirical analyses. The results demonstrated that our model could effectively explain the efficiency assessment of banks under the framework of risk management of bad debts. In the study of banks' operating efficacy in the operating stage, the model demonstrated that when banks were impacted by the external environment or events (such as the outbreak of the Asian Financial Crisis and the US subprime mortgage crisis in the studied period), a period of operational reorganization and financial health adjustment occurred, after which banks could again exhibit drivers of growth and efficacy. Furthermore, after comparing analyses of banks' profitability performances in the profitability stage to the results from the operating stage, it was found that banks' profitability performances and their operating efficacy are not necessarily consistent, indicating that the traditional profitability model of earning the difference between deposits and loans was becoming less important to banks, while the role of banks' investments, wealth management, and other financial products and services had increased. The model's subsequent analyses of the risk management stage revealed that risk management among banks continued to rise significantly, and there was a five-year period during which banks' improved operating efficacy and risk management were reflected in their profitability performance.
Third, the study found that in time banks' operating performances would enter a highly mature period of stability. If banks wish to increase their profitability and competitiveness, they must proactively develop diverse financial products and innovative financial services, expanding their operating scale and scope. In cases where the design of financial products and services, such as introducing and selling Target Redemption Forward products, crosses the professional border of banking, the introduction of commodities' securitization and futures option leverage would prompt banks to face enormous challenges. The government is unable to preemptively monitor, detect, and regulate the overall risk to the banking sector. When an incident deteriorates, it can easily trigger systematic risks, and a financial crisis can emerge more easily than in the past. The risk assessment model of bad debts in this study demonstrated that the inclusion of innovative financial businesses could more effectively prevent banks' risks from becoming uncontrollable, while reflecting banks' performance assessment accurately.
Additional Points
Highlights. (i) The quality of lending assets is a key and significant influencing factor for banks' operational risk. (ii) This paper explores the relationship between banks' performance and their nonperforming loans (NPLs). (iii) To integrate the radial and nonradial measures of efficiency into the network production process framework with NPLs, this study utilizes a network epsilon-based measure model to evaluate banking industry performance. (iv) The results demonstrate that the banking sector grew consistently in three aspects of operation (operating performance, profitability performance, and risk management) in the last five years of the subject period.
Figure 1: Bank nonperforming loans to total gross loans for global banks (data from World Bank).
Figure 3: Trends in banks' performances in the operating stage.
Figure 4: Trends of banks' performance in the profitability stage.
Figure 5: Trends of banks' performance in the risk management stage.
Figure 5 reveals that the risk management of the banking sector improved gradually from 2004 to 2006, was kept at a satisfactory level between 2007 and 2009, and rose further between 2010 and 2014. Over the 11-year monitoring period, with the exception of slight declines of the metric in 2007 and 2009, the risk management of the banking sector improved continuously, and the metric increased considerably from 0.536 in 2004 to 0.859 in 2014.
Table 1: Descriptive statistics and variable definitions. The indicators of inputs and outputs are obtained from the Taiwan Economic Journal database.
Table 2: Operating performance regression of inputs and outputs. *** significant at p < 0.01; ** significant at p < 0.05; * significant at p < 0.1.
Table 3: Profitability and risk management regression of inputs and outputs.
Table 4: Operating performance diversity matrix.
Table 5: Operating performance affinity matrix.
Table 8: Risk management diversity matrix.
Table 9: Risk management affinity matrix.
Table 11: Characteristics and performance of banks.
| 7,127.4 | 2017-01-01T00:00:00.000 | [ "Business", "Economics" ] |
Altered Nitrogen Balance and Decreased Urea Excretion in Male Rats Fed Cafeteria Diet Are Related to Arginine Availability
Hyperlipidic diets limit glucose oxidation and favor amino acid preservation, hampering the elimination of excess dietary nitrogen and the catabolic utilization of amino acids. We analyzed whether reduced urea excretion was a consequence of higher NOx (nitrite, nitrate, and other derivatives) availability caused by increased nitric oxide production in metabolic syndrome. Rats fed a cafeteria diet for 30 days had a higher intake and accumulation of amino acid nitrogen and lower urea excretion. There were no differences in plasma nitrate or nitrite. NOx and creatinine excretion accounted for only a small part of total nitrogen excretion. Rats fed a cafeteria diet had higher plasma levels of glutamine, serine, threonine, glycine, and ornithine when compared with controls, whereas arginine was lower. Liver carbamoyl-phosphate synthetase I activity was higher in cafeteria diet-fed rats, but arginase I was lower. The high carbamoyl-phosphate synthetase activity and ornithine levels suggest activation of the urea cycle in cafeteria diet-fed rats, but low arginine levels point to a block in the urea cycle between ornithine and arginine, thereby preventing the elimination of excess nitrogen as urea. The ultimate consequence of this paradoxical block in the urea cycle seems to be the limitation of arginine production and/or availability.
Introduction
Metabolic syndrome is a pathological condition, which develops from localized inflammation and is characterized by the combination of a number of closely related diseases (insulin resistance, obesity, hyperlipidemia, hypertension, etc.) [1]. Administration of "cafeteria" diets [2] to rats has been used as an animal model for the study of late-onset hyperphagic obesity and metabolic syndrome. This model has the advantage of being comparable to some human obesity states induced by the excessive intake of energy-dense food [3,4]. The effects of the cafeteria diet are more marked in males [5,6]; this may be because they have less anti-inflammatory [7] estrogen protection than females. Low estrogen levels render males more prone to be affected by glucocorticoids [8,9], which in turn decrease the anabolic effects of androgens [10,11].
In rodents, prolonged exposure to a cafeteria diet results in higher energy intake (mainly lipids) [5] and increased body fat (obesity) but also affects lean body mass, favoring growth and protein deposition [12,13]. Although the hedonic component of the cafeteria diet initially elicits an increase in food consumption [14], once obesity is well established, hyperphagia decreases to almost normal levels of food intake. However, the large mass of accumulated fat remains, and the metabolic consequences of excess energy intake-such as insulin resistance and hyperlipidemia-persist [6,15].
The study of nitrogen handling under hypercaloric diet conditions has predominantly been limited to measuring plasma amino acid levels [16][17][18], while less attention has centered on pathways [19,20] and nitrogen balances [21]. To date, most research on dietary amino acid metabolism has been directed toward the analysis of metabolic adaptation to diets deficient in both energy and amino nitrogen [22][23][24][25] or has focused on specific regulatory pathways. Cafeteria diets are suitable for the study of the effects of high-energy diets, because total protein intake is practically unaffected by the excess dietary energy (mainly lipids) ingested [5].
Dietary or body-protein amino acid nitrogen is spared when other energy sources (such as fat or glucose) abound. Thus, high-energy diet, such as the cafeteria diet, apparently decreases overall amino acid catabolism, inducing a marked decrease in the production of urea [26]. The relative surplus of 2-amino nitrogen can maintain protein turnover and growth [12], but the excess nitrogen must be excreted in some way. The limited operation of the urea cycle suggests that there may be more amino nitrogen available for the operation of the nitric oxide (NO • ) shunt. Obese humans have been found to excrete more nitrate than their lean counterparts, and loss of NO • in expired air is proportional to body mass index (BMI) [27]. In terms of nitrogen balance, it has been found that cafeteria diet-fed rats show a higher "nitrogen gap, " that is, the difference between nitrogen intake and the sum of its accumulation and excretion in the urine and feces [21] than control diet-fed animals.
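For clarity, the "nitrogen gap" referred to here is a simple balance term; the minimal Python sketch below, using invented placeholder values rather than data from this study, only shows how it is computed from the measured quantities.

```python
# Hypothetical illustration of the "nitrogen gap": the difference between
# nitrogen intake and the sum of nitrogen accrued in the body plus nitrogen
# excreted in urine and feces. All numbers are invented placeholders.

def nitrogen_gap(intake_mg, accrued_mg, urine_mg, feces_mg):
    """Return the unaccounted-for nitrogen (mg N per day)."""
    return intake_mg - (accrued_mg + urine_mg + feces_mg)

# Example: 450 mg N/day ingested, 120 mg retained, 250 mg (urine) and
# 40 mg (feces) excreted leaves a 40 mg/day gap.
print(nitrogen_gap(450, 120, 250, 40))  # -> 40
```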
The objective of the present study was to determine whether cafeteria diet-fed rats show changes in the excretion of nitrate/nitrite in comparison with age-matched animals fed standard rat chow, investigating whether an increase in the excretion of NO compensates for the decrease in urea excretion.
Animals and Experimental Setup
All animal handling procedures and the experimental setup were carried out in accordance with the animal handling guidelines of the European, Spanish, and Catalan Authorities. The Committee on Animal Experimentation of the University of Barcelona authorized the specific procedures used. This limited keeping the animals in metabolic cages to a maximum of 24 h to prevent unacceptable levels of stress.
Nine-week-old male Wistar rats (n = 12) (Harlan Laboratories Models, Sant Feliu de Codines, Spain) were used. The animals were randomly divided into two groups and were fed ad libitum for 30 days on either normal rat chow (Harlan 2014) (n = 6) or a simplified cafeteria diet (n = 6) [21]. Both groups were housed in solid-bottomed cages with three animals per cage, had free access to water, and were kept in a controlled environment (lights on from 08:00 to 20:00, with a temperature of 21.5-22.5 °C; 50-60% humidity). Body weight and food consumption were recorded daily. Calculation of food ingested was performed as previously described by counting the difference between food offered and left, including the recovery of small pieces of food, and compensating for drying [5]. The nitrogen content of the rat chow and the different components used in the cafeteria diet were measured with a semiautomatic Kjeldahl procedure using a Pro-Nitro S semiautomatic system (JP Selecta, Abrera, Spain).
On day 0 (i.e., the day before the experiment began) and day 26, the rats were kept for 24 h in metabolic cages (Tecniplast Gazzada, Buguggiate, Italy), recovering the urine and feces. In the metabolic cages, all rats were fed only standard rat chow and tap water, and their food consumption was measured. Samples of excreta were frozen for later analyses. Urine NOx was estimated immediately to minimize further oxidation and NO• losses, using a nitric oxide analysis system (ISM-146NOXM system) (Lazar, Los Angeles, CA, USA).
On day 30, rats were anesthetized with isoflurane and then killed by exsanguination through the exposed aorta. Blood plasma and tissue samples were obtained and frozen. Liver samples were rapidly frozen in liquid nitrogen and maintained at −80 ∘ C until processed for enzyme analyses. For tissues distributed widely throughout the body (i.e., subcutaneous adipose tissue), all the tissue was carefully dissected and weighed. Hind leg muscle samples were cut from the hind leg, obtaining part of the quadratus femoris, biceps femoris, and semitendinosus muscle and a smaller proportion of others.
The cafeteria diet included biscuits spread with liver pâté, bacon, standard chow pellets, water, and milk supplemented with 300 g/L sucrose plus 10 g/L of a mineral and vitamin supplement (Meritene, Nestlé, Esplugues, Spain). All of these compounds were provided fresh daily. From the analysis of the diet components and the ingested items, we calculated that a mean of 33% of energy was derived from lipids, 16% of energy was derived from protein, and 51% of energy was derived from carbohydrates (20% from sugars).
Body and Metabolite Analyses
Total body muscle mass was estimated, as previously described [28], using the remaining carcass. The method was based on the solubilization of muscle actin and myosin with 1 M LiCl and subsequent precipitation of mainly myosin with distilled water, followed by its estimation with a standard procedure.
Stool nitrogen was measured using the semiautomatic Kjeldahl procedure described above. Nitrogen content and nitrogen accrual were calculated by applying the body composition factors obtained from previous studies [3,21] to our experimental animals. These data are included as reference values only for comparison. Urine urea was measured with a urease-based test, and creatinine was measured with the Jaffé reaction using commercial kits (BioSystems, Barcelona, Spain).
Plasma was used for the analysis of glucose, total cholesterol, triacylglycerol and urea (using kits from BioSystems, Barcelona, Spain), nonesterified fatty acids (NEFA kit; Wako, Richmond VA, USA), and L-lactate (Spinreact kit, Barcelona, Spain). Nitrite and nitrate were measured with the Arrow-Straight system. Plasma samples were used for amino acid quantification after deproteinization with trifluoroacetic acid; they were measured with ninhydrin, in a Biochrom 30 autoanalyzer (Biochrom, Cambridge, UK), using L-norleucine as internal standard, at the Scientific-Technical Services of the University of Barcelona.
Enzyme Assays.
Frozen liver samples were homogenized in chilled 50 mM Krebs-Ringer phosphate buffer, pH 7.4, containing 0.1% Triton X-100, 2.5 mM mercaptoethanol, 0.1% dextran, 5 mM Na 2 -EDTA, and 0.5% bovine serum albumin using a mechanical tissue disruptor (IKA, Staufen, Germany). Arginase I (EC 3.5.3.1) activity was estimated as described elsewhere [29]. The method was based on the colorimetric estimation of urea (Berthelot reaction) released by the action of arginase on arginine. Homogenate protein content was measured with a standard colorimetric method [30] against blanks of the homogenization medium.
Other liver samples were homogenized in 50 mM triethanolamine-HCl buffer, pH 8.0 containing 1 mM dithiothreitol, and 10 mM magnesium acetate. Carbamoyl-P synthetase I (EC 6.3.4.16) activity was measured immediately after homogenization, as previously described [31], by measuring the incorporation of labeled bicarbonate (50 kBq/mmol) into carbamoyl-P in a medium containing 5 mM ATP, 5 mM N-acetyl-glutamate, and 0.05% bovine serum albumin. Enzyme activities were expressed in katals both in reference to the weight of fresh tissue and its protein content.
Statistical Analysis.
Statistical analysis was carried out using one-way ANOVA, with the post hoc Bonferroni test, and/or the unpaired Student's t-test, using the Statgraphics Centurion XVI software package (StatPoint Technologies, Warrenton, VA, USA).
Results
Table 1 shows the body weights and nitrogen balance values for control and cafeteria diet-fed rats at the beginning and end of the study (day 27 for nitrogen data). As expected, the increase in body weight was greater in cafeteria diet-fed rats than in controls. The overall energy and nitrogen intake were also higher in cafeteria diet-fed than in control rats.
In the period that the rats were kept in metabolic cages, significant differences in nitrogen intake were observed between the groups, but not in urine or stool nitrogen excretion. The proportion of urea excreted with respect to the total daily nitrogen budget was lower in cafeteria diet-fed rats than in controls on day 27 (65% versus 75% of nitrogen intake, 82% versus 94% of urea excreted, resp.). This difference was not compensated for by creatinine and NOx excretion, which was low in comparison to urea, and showed slight changes over time or with dietary treatment. Thus, although the data on nitrogen balance was measured on different days, the estimated "nitrogen gap" showed a wider margin for cafeteria diet-fed rats than for controls.
The effects of diet on organ weight on day 30 are presented in Table 2. The only significant differences between the two groups in organ weights were for muscle, stomach, heart, and adipose tissues. The other organs showed remarkably similar weights.
The plasma nitrate and nitrite concentrations are presented in Table 3. There were no significant differences between control and cafeteria-fed rats for nitrate, nitrite, or their sum. In both groups, however, nitrate was the predominant component (>90%). Table 4 shows the plasma amino acid concentrations of the control and cafeteria diet-fed groups on day 30. The similarity between the groups was remarkable, with only a few amino acids showing significant differences. Cafeteria diet-fed rats had higher levels of glutamine, threonine, serine, glycine, and ornithine than controls, while the latter showed higher levels of arginine with respect to the cafeteria diet-fed animals. When analyzing the sums of concentrations of groups of related amino acids, no changes were observed for the combined concentrations of glutamate + glutamine, aspartate + asparagine, branched-chain amino acids (leucine + isoleucine + valine), or urea cycle intermediaries (ornithine + citrulline + arginine).
The plasma concentrations of glucose, triacylglycerols, total cholesterol, and urea for both diet groups (Table 5) were similar, and all were within the normal range. These concentrations were similar to data previously published by our group, with no differences between the groups, except for higher glucose and lower urea values in cafeteria diet-fed rats. Figure 1 presents the measured activities of two key urea synthesis enzymes in the livers of the control and cafeteria diet-fed groups. The activity of carbamoyl-P synthetase I was higher than that of arginase I in the cafeteria diet-fed group; these rats had threefold higher activity rates for carbamoyl-P synthetase I than controls. The results for arginase were the reverse, since the control group had almost twice the activity per unit of tissue weight than the cafeteria diet-fed group, and this result was similar for protein and total tissue.
Discussion
The cafeteria diet is essentially hyperlipidic, with identical mean protein and carbohydrate intakes to those of control rats fed a standard diet [5]. As expected, a one-month exposure to a cafeteria diet resulted in overfeeding and increased body weight, leading to a greater increase in the size of adipose tissue deposits and higher muscle mass than in controls. These results are in agreement with previous studies showing that, as with the hyperlipidic diets, a cafeteria diet increases not only fat deposition and growth, but also protein accrual [12,21] and energy expenditure [32]. A lower excretion of urea, irrespective of the maintained (or increased) amino acid intake, was also observed, again in agreement with previous studies [21,26].
It has been postulated that a high-energy diet coupled with normal or increased protein intake may hamper 2-amino nitrogen elimination in rats, humans, and other mammals [33]. This problem is largely a consequence of the abundance of energy, mainly in the form of lipids, which is used preferentially by muscle and other peripheral tissues over glucose because of insulin resistance [34]. However, amino acid oxidation is spared due to the availability of energy, that is, in the form of glucose [35]; thus, it is to be expected that catabolism of dietary amino acids and, therefore, the production of ammonium from 2-amino nitrogen should also decrease. Consequently, during this buildup, the mechanisms of amino nitrogen waste prevention surprisingly create a surplus of available amino acids. The excess of 2-amino nitrogen may be limited, in part, by increased growth (e.g., increased muscle mass) and, to a lesser extent, by increased protein turnover. However, the problem remains that not enough ammonium can be produced from the amino acid pool to maintain the glutamine (or free ammonium) necessary for the splanchnic organs (i.e., the intestines, liver, and kidney) to eliminate the excess of nitrogen as urea [36][37][38].
(Table footnote: the values are the mean ± SEM for 6 different animals. There were no significant differences between the two groups (p > 0.05, Student's t-test) for any parameter. WAT: white adipose tissue; BAT: brown adipose tissue.)
The observed plasma levels of amino acids seem to confirm these trends. In cafeteria diet-fed rats, glutamine but not glutamate + glutamine levels were higher than in control rats. High levels of glutamine suggest its decreased splanchnic utilization to provide ammonium for the synthesis of carbamoyl-P. The high circulating levels of ornithine again suggest a diminished production of carbamoyl-P, perhaps because of scarcity of ammonium donors in the liver. The lower arginine levels in the cafeteria diet-fed group versus controls suggest that synthesis of arginine may be insufficient to compensate for the release of urea through arginase activity or other uses. Arginase I in liver, which is the main site for this enzyme to complete the urea cycle [39], showed lower activity in the cafeteria diet-fed group. This may help maintain circulating arginine, although at levels lower than in control-fed animals. The higher carbamoyl-P synthetase I activity found in cafeteria diet-fed rats agrees with the clear surplus of 2-amino nitrogen available for excretion, since higher liver ammonium availability increases the activity of this enzyme [40]. The decrease in urea excretion agrees with lower arginase activity, but not with the increased activity of carbamoyl-P synthetase I, which depends on ammonium as its substrate [40]. Thus, the block in urea cycle function (and consequently in "normal" 2-amino nitrogen disposal) should lie between these enzymes in the urea cycle, that is, in the conversion of ornithine to citrulline or the latter to arginine (i.e., argininosuccinate synthetase and argininosuccinate lyase). In addition, the higher ornithine levels in the plasma of cafeteria diet-fed rats suggest that the N-acetyl-glutamate pathway for the exogenous synthesis of ornithine was not sufficiently activated to compensate for the arginine deficit.
(Table footnote: the values are the mean ± SEM for 6 different animals. Statistical significance of the differences between the two groups was determined with Student's t-test; NS: p > 0.05. Asterisks indicate the amino acids incorporated in the sums marked in bold below them.)
Because the circulating levels of citrulline and aspartate were unchanged (or increased) in cafeteria diet-fed versus control rats, it can be assumed that there is probably a key regulatory path, for overall nitrogen disposal, either at the synthesis or breakup of argininosuccinate, which would help explain the lower production of urea. Based on kinetic studies, argininosuccinate synthesis was initially postulated as a key urea cycle control node [41], although normal urea cycle operation is assumed to rely more on other parameters such as pH, ammonium availability, and N-acetyl-glutamate levels [42]. However, the indirect data presented here suggest that argininosuccinate synthesis/breakup may be a significant control point in vivo under relatively high nitrogen (and energy) availability.
The involvement of ammonium availability in this context is enhanced by the relatively higher concentrations of threonine, serine, glycine, and glutamine in cafeteria diet-fed rats. These amino acids yield ammonium in their catabolism [43] via threonine/serine dehydratase, glutaminase, or the glycine cleavage system. Serine may also be converted to glycine, which leads to the same fate. These results show an unexpected picture, since, in cafeteria diet-fed rats, there is an excess of 2-amino nitrogen and the higher levels observed correspond to ammoniagenic amino acids. According to the normal catabolic pathways for nitrogen excretion, this excess should activate the production of ammonium, its transport as glutamine, release again as ammonium, and formation of carbamoyl-P, followed by its integration (with more aspartate-derived amino nitrogen) into the guanido group of arginine for its eventual release as urea. However, the amino acids that can yield ammonium directly, in an initial nontransaminative catabolic step, were somehow preserved in cafeteria diet-fed rats. These amino acids were not used in large quantities as was to be expected in a situation in which, theoretically, the lack of 2-amino nitrogen conversion to ammonium could hinder normal nitrogen excretion through the urea cycle. The contrast between preservation of the ammonium donors and high carbamoyl-P synthetase I activity in the cafeteria diet-fed group suggests that the problem does not lie in the availability of ammonium. Higher levels of the main amino acid ammonium donors suggest instead a constraint on their utilization because elimination via the urea cycle is blocked as indicated above.
The faulty operation of the urea cycle, then, suggests that the main control mechanism sought is not centered on the availability of ammonium-yielding substrates as is usually postulated for normal and starvation conditions [42]. The increased activity of carbamoyl-P synthetase and the low activity of arginase in cafeteria diet-fed rats indicate that the control mechanism lies in the actual synthesis of arginine, which is also essential for the operation of the NO • shunt. Notwithstanding, the NO • shunt does not seem to be significantly altered by the cafeteria diet, as shown by unchanged plasma and urinary NOx in spite of lower circulating arginine. One possible explanation is that the blockage of arginine production results from the need to prevent an increase in the production of NO • under cafeteria diet conditions, in which blood flow (in part dependent on NO • synthesis) to a number of tissues is markedly altered [44].
The question remains of how the excess nitrogen provided by cafeteria diets is eliminated. The widening of the nitrogen gap under high-energy feeding suggests that nitrogen gas [45] may be involved, since the amount of creatinine, uric acid, and so forth, excreted is only a small fraction of urea nitrogen [46,47]. The synthesis of NO • results in the excretion, mainly via saliva [48], of nitrite and nitrate. In addition, there is a small direct loss of NO • in the breath [49]. However, the low levels of NOx measured in the urine and their marked metabolic effects [50] suggest that NOx, as NO • derivatives, could account for at most only a very small part of the "missing" nitrogen. The lack of changes elicited by diet in circulating levels of nitrate and nitrite reinforced this assumption; that is, nitrate excretion is not a significant alternative as a nitrogen-disposal pathway to lower urea synthesis.
The one-month period of exposure to the cafeteria diet proved that this type of diet caused difficulties in the normal mechanisms of amino nitrogen disposal, exemplified by a lower urea production. These problems were not directly related to the potential availability of ammonium as the prime substrate for initiating the urea cycle but instead were probably related to the availability of arginine. No changes were observed in the levels or excretion of NOx, which were small, but the "nitrogen gap" [21] became significant under cafeteria diet feeding. It is now clear that the decrease in urea excretion is not compensated for by higher NOx production and elimination. The main pathway for disposal of the excess amino nitrogen generated by energy-rich diets remains unsolved, with the additional conundrum of why the urea cycle appears to be disrupted for the only apparent reason of limiting the availability of arginine.
Conclusions
The decrease in urea excretion is not compensated for by higher NOx production and elimination. The defective operation of the urea cycle in rats fed a cafeteria diet seems to be caused by a block in the urea cycle between ornithine and arginine.
| 5,143.6 | 2014-02-24T00:00:00.000 | [ "Biology" ] |
Provincial Allocation of Energy Consumption, Air Pollutant and CO2 Emission Quotas in China: Based on a Weighted Environment ZSG-DEA Model
Air pollutants and CO2 emissions have a common important source, namely energy consumption. Considering fairness and efficiency, the provincial coordinated allocation of energy consumption, air pollutant emission, and carbon emission (EAC) quotas is of great significance to promote provincial development and achieve national energy conservation and emission reduction targets. A weighted environment zero-sum-gains data envelopment analysis (ZSG-DEA) model is constructed to optimize the efficiency of the initial provincial quotas under the fairness principle, so as to realize the fairness and efficiency of allocation. The empirical analysis in 2020 shows that the optimal allocation scheme proposed in this study is better than the national planning scheme in terms of fairness and efficiency, and the optimal scheme based on the initial allocation of priority order of “capacity to pay egalitarianism > historical egalitarianism > population egalitarianism” is the fairest. The optimal allocation scheme in 2025 can achieve absolute fairness. In this scheme, the pressures of energy conservation and emission reduction undertaken by different provinces vary greatly. The implementation of regional coordinated development strategies can narrow this gap and improve the enforceability of this scheme. Combined with the analysis of energy conservation and emission reduction in seven categories and three major national strategic regions, we put forward corresponding measures to provide decision support for China’s energy conservation and emission reduction.
Introduction
Economic development needs to consume a large amount of energy, and the consumption of energy is accompanied by the emissions of CO2 and air pollutants such as NOX, SO2, and inhalable particles. CO2 is an important component of air, which does not directly cause air pollution, but its large increase affects climate change, and then damages the ecological environment. As the largest developing country, China's economy has developed rapidly in recent years, but it has also brought great energy consumption, CO2 emissions, and atmospheric environmental pollution. According to BP Statistical Review of World Energy, in 2020, China's energy consumption and CO2 emissions accounted for 26.1% and 30.7% of the world, respectively. According to China Ecological Environment Bulletin, 40.1% of China's 337 prefecture level and above cities still exceeded air quality standards in 2020. The Chinese government attaches great importance to the control of energy consumption, air pollutants, and CO2 emissions and has put forward a series of measures such as total quantity control, provincial quota allocation, and emissions trading. In 2016, Chinese State Council issued the Work Plan for Controlling Greenhouse Gas Emissions (WPGE) and Comprehensive Work Plan for Energy Conservation and Emission Reduction (WPEE) during the 13th Five-Year Plan period. The former determines the provincial CO2 emission intensity reduction targets through classification, while the latter proposes the "dual control" targets of energy consumption and intensity and the total emission control targets of major pollutants for each province. In the 14th Five-Year Plan (2021-2025), the Chinese government proposed to significantly improve energy efficiency, strengthen the collaborative control of multiple pollutants, and continuously reduce CO2 emissions. It clearly stated that by 2025, China's energy consumption intensity and CO2 emission intensity will be reduced by 13.5% and 18% compared with 2020, respectively, and PM2.5 concentration in prefecture level and above cities will be reduced by 10%. In 2021, the China Development and Reform Commission issued the Plan for improving the dual control of energy consumption intensity and total amount, which proposed to reasonably set the total energy consumption target and carry out provincial allocation.
There is a close correlation between environment and energy. Energy consumption is an important source of air pollutants and CO2 emissions [1][2][3]. By improving energy efficiency to reduce total energy consumption or reducing the proportion of high carbon energy to optimize the energy consumption structure, air pollutants and CO2 emissions can be reduced simultaneously [4][5][6][7]. CO2 emission reduction and air pollutant reduction also have synergistic effects and reducing either of them usually brings co-benefits [8,9]. Therefore, it is necessary to comprehensively consider the relationship between energy consumption, air pollutant emissions, and CO2 emissions, and scientifically set energy conservation and emission reduction targets. In addition, under the provincial responsibility system for energy conservation and emission reduction targets, the provincial coordinated allocation of EAC quotas is conducive to improving the feasibility of achieving the national targets based on the current situation of economic development, energy consumption, and atmospheric environmental emissions in each province. The research objective is to discuss how to set provincial EAC quotas scientifically, so as to guarantee the fairness of provincial development and promote the efficient realization of the national targets.
When the total control amount of national energy consumption, air pollutants emissions, or CO2 emissions is determined, the provincial quota allocation can be viewed as zero-sum game, that is, the more quota one province gets, the less quotas other provinces get. Based on this characteristic, scholars have proposed a variety of models for quota allocation of energy consumption, air pollutants emissions, or CO2 emissions. Different from these studies, we make the following contributions. Firstly, we construct a weighted environmental ZSG-DEA model to realize the coordinated allocation of multiple elements and optimize the comprehensive allocation efficiency. Secondly, by comparing the Gini coefficients of optimal allocation results under different initial schemes, we select the fairest initial allocation principle for atmospheric environmental emission quotas. Thirdly, in combination with the 14th Five-Year Plan of China and all provinces, we explore the provincial EAC quota allocation scheme in 2025 and put forward corresponding policy implications according to the provincial pressures on energy conservation and emission reduction.
The rest of this paper is organized as follows. Section 2 reviews the literature on EAC quota allocation and analyzes the deficiencies of existing studies. Section 3 proposes research methodology and explains data sources. Section 4 presents the results of China's EAC allocation in 2020 and 2025 and discusses the results in detail. Section 5 summarizes the research of this paper and puts forward the policy implications.
Literature Review
Dales [10] first proposed the concept of emission trade. The National Environmental Protection Agency of the US then issued the rules of total amount and trade. Since entering the 21st century, Cap and Trade mode has been a widely used control mode in the field of environment, and a growing number of scholars have paid attention to the allocation of air pollutants and CO2 emissions quotas [11][12][13]. Meng et al. [14], Dong et al. [15], and Zhou et al. [16] analyzed that there were differences in energy efficiency or carbon emission efficiency among provinces in China, which should be considered in the provincial quota allocation. According to the allocation ideas and methods, the existing allocation models of energy consumption and atmospheric emissions under the Cap mode can be divided into the following three categories: (1) Indicators-based allocation models. Zhang and Xu [17] weighted historical energy consumption, GDP, and population, and allocated provincial energy consumption quotas based on cluster analysis and weighted voting model. Rose et al. [18] put forward nine standards of equity between countries and proposed corresponding rules applicable to the allocation of tradable emission rights at the regional level. Zhang et al. [4] established a provincial allocation model of VOCs by weighting four indicators such as per capita GDP. Aasmi and Leo [19] selected egalitarian equity, horizontal equity, and proportional equity as three criteria for global CO2 allocation, and evaluated the allocation scheme based on per capita emission and emission intensity standards. Pan et al. [20,21] proposed an allocation scheme of global carbon emission rights based on cumulative per capita emissions quotas, and then compared 20 key allocation schemes under different rules. Presno [22] analyzed the stochastic convergence of per capita CO2 emissions in 28 OECD countries and proposed that per capita GDP was not the sole determinant of emission allocation. Yi et al. [23], Zhou et al. [24], and Fang et al. [25] constructed composite indicators based on different single indicators for allocating provincial CO2 emission quotas of China. Han et al. [26] constructed a comprehensive index through the index weighting method to allocate carbon quota in the Beijing-Tianjin-Hebei region. Wu et al. [27] conducted preliminary allocation of provincial carbon emissions based on the principles of Grandfather, population egalitarianism, and pays ability egalitarian. Zhou et al. [28] considered fairness, efficiency, and sustainability principles simultaneously when conducting the CO2 emission quotas allocation. Li et al. [29] constructed a multi-attribute decisionmaking model to allocate provincial carbon emission rights. Kong et al. [30] and He and Zhang [31] used the same method, but they took the efficiency measured by DEA as one of their attributes.
(2) Nonlinear optimization-based allocation models. Aiming at minimizing economic and external costs, Yang et al. [32] established a nonlinear programming model for optimizing the allocation of natural gas and other energy sources. Xue et al. [33] allocated SO2 emission quotas in Beijing-Tianjin-Hebei region with the goal of minimizing the pollution control cost. Xie et al. [34,35] constructed a Gini coefficient minimization model to allocate the control targets for PM2.5 concentration in Beijing-Tianjin-Hebei district, in which the constraints include overall reduction rate, Gini coefficient, reduction rate, and ranking of each city. Zheng [36] defined the equitable interval and two equity indices and set up a fairness-efficiency trade-off model for the CO2 emission reduction responsibility allocation with the goal of minimizing the fair distance index. An et al. [37] proposed a costminimization carbon emission permit allocation model in combination with the DEA efficiency measurement model. Fang et al. [38] constructed an optimization model of carbon emission rights allocation based on energy justice, which takes the minimum sum of different Gini coefficients as the objective function under the constraints of population, ecological production land, fossil energy resources, and GDP.
(3) DEA-based allocation models. DEA is an ideal element allocation model, which can consider all input-output elements and achieve the optimal efficiency of the allocated object [39][40][41][42][43]. The proposed DEA-based allocation models mainly include ZSG-DEA, fixed cost allocation model (FCAM), and centralized DEA (CDEA). Sun et al. [44] and Miao et al. [45] constructed environmental ZSG-DEA models to allocate China's energy conservation quotas and air pollutants (SO2 and NOX) emission rights. Wu et al. [46] allocated PM2.5 emission rights based on the ZSG-DEA model. Gomes and Lins [47] used the ZSG-DEA model to redistribute the initial carbon emissions rights for different countries within the framework of Kyoto protocol. Pang et al. [48] analyzed that based on the ZSG-DEA model, different countries could obtain reasonable CO2 emission quotas and realized global Pareto optimization. Chiu et al. [49] used the ZSG-DEA model to discuss the redistribution of emission quotas in 24 EU Member States. Cucchiella et al. [50] reallocated the energy consumption and CO2 emission among 28 European countries by using the ZSG-DEA model. Li et al. [51] applied ZSG-DEA models to allocate the CO2 allowances of the Jiangsu-Zhejiang-Shanghai region. Cai and Ye [52] and Yu et al. [53] used the ZSG-DEA model to allocate the carbon emission allowance in China. Li et al. [54] put forward a twostep allocation method of CO2 emission quotas, in which the ZSG-DEA model is used to optimize the initial quotas obtained through the multi-index weighting allocation model. Yang et al. [55] constructed a ZSG-DEA model to optimize China's carbon emission reduction scheme in 2020 and 2030. Wang et al. [56] constructed a weighted ZSG-DEA model to allocate the energy consumption, CO2 emissions, and non-fossil fuel consumption, in which all three weights were set to be 1/3. Based on the ZSG-DEA model, Wang et al. [57] constructed a DEA-based resource allocation model that joints the input and output orientation to allocate provincial GDP and quotas of energy consumption, coal consumption, and carbon emission in 2020. Wang and Li [58] constructed FCAM to allocate carbon emissions in terms of the principle of population proportion convergence. Pan and Pan [59] constructed FCAM in terms of historical emission proportion convergence. Kong and Hou [60] constructed FCAM based on per capita convergence. Dong et al. [61] constructed a modified FCAM based on population proportion convergence with the results under other convergence principles as constraints. Zhou et al. [62] and Sun et al. [63] proposed CDEA models for CO2 emission quota allocation, which can achieve the maximum GDP of China under the constraint of meeting the total amount of carbon emission control. Song et al. [64] constructed a CDEA model to allocate provincial EAC quotas in 2025.
The literature review shows that there are many studies on the allocation of provincial energy consumption, air pollutant, and CO2 emission quotas, but few studies have attempted to focus on the coordinated allocation of the three elements. From the perspective of allocation principles and methods, most studies consider fairness and efficiency. DEA-based allocation models can associate the inputs of economic activities with expected and unexpected outputs, and do not need to estimate the value of indicators such as emission reduction cost in advance. Therefore, they are more objective than the allocation models based on indicators and nonlinear optimization, and the relevant research is more abundant. The existing FCAM has two main shortcomings: first, it is difficult to reflect the weak disposability of undesirable output (the reduction of undesired output is at the cost of the reduction of expected output) and null-jointness between desirable output and undesirable output (undesirable output must be produced when desired output is produced); and second, its goal only reflects the convergence of a certain fairness principle, and cannot cover many fairness principles. Although the CDEA model can reflect the energy endowment of each province and maximize the total expected output, the economic growth target of some provinces in the allocation scheme is too high, which is infeasible under the realistic background of immature regional coordinated development. The environmental ZSG-DEA model can overcome the shortcomings of the FCAM and CDEA models. Based on the initial fair allocation scheme, it can consider the weak disposability and null-jointness of undesirable outputs to further optimize the allocation efficiency, so as to realize the integration of fairness and efficiency. Therefore, we select the environmental ZSG-DEA model to allocate EAC quotas. Different from the ZSG-DEA model for a single element proposed by scholars, we construct a weighted comprehensive environmental ZSG-DEA model to reflect the differences in the importance of different elements. In addition, considering the diversity of fairness principles, we propose different initial fair allocation schemes for atmospheric emission quotas and explore the impact of different principles on the final allocation results based on the Gini coefficient.
Allocation Methods Based on Fairness
The fairness principle of allocation mainly includes historical egalitarian, population egalitarian, and pays ability egalitarian [18]. Taking atmospheric emissions as an example, historical egalitarian means that the total emission quotas are allocated based on the proportion of provincial emissions to national emissions in the base period, which is conducive to ensuring the consistency of economic developments in all provinces. The allocation equation based on historical egalitarian is described as follows:
Rk = R · (rk / ∑j rj), k = 1, 2, …, n, (1)
where Rk (k = 1, 2, …, n) is the quota obtained by the kth province, R is the total quotas to be allocated, and rk is the emissions of the kth province in the base period. The base of population egalitarian is the proportion of the population of each province to the national population in the base period, and the allocation equation is described as follows:
Rk = R · (pk / ∑j pj), k = 1, 2, …, n, (2)
where pk is the population of the kth province in the base period. The pays ability egalitarian reflects the fairness of emission reduction responsibility: provinces with a large population and low per capita GDP can get more emission quotas. The corresponding allocation equation is a function of gk, the GDP of the kth province in the base period, and α, a modifying variable. α is smaller than one, indicating that the increasing (or decreasing) amplitude of quotas is smaller than the decreasing (or increasing) amplitude of per capita GDP. This assumption guarantees that the quotas of one province will not fall drastically as its per capita GDP grows. Referring to the study of Dong et al. [61], we define the value of α as 0.5.
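The three fairness rules reduce to simple proportional formulas. The Python sketch below (not the authors' code) implements them: the historical and population rules follow directly from the text, while the exact functional form of the pays ability rule is not reproduced above, so the version shown, which weights population by per capita GDP raised to the power −α with α = 0.5, is an assumption consistent with the description.

```python
import numpy as np

def historical_egalitarian(total, base_emissions):
    base_emissions = np.asarray(base_emissions, dtype=float)
    return total * base_emissions / base_emissions.sum()

def population_egalitarian(total, population):
    population = np.asarray(population, dtype=float)
    return total * population / population.sum()

def pays_ability_egalitarian(total, population, gdp, alpha=0.5):
    # Assumed form: more quota per person where per capita GDP is lower,
    # dampened by alpha < 1 so quotas do not swing as fast as per capita GDP.
    population = np.asarray(population, dtype=float)
    gdp = np.asarray(gdp, dtype=float)
    weight = population * (gdp / population) ** (-alpha)
    return total * weight / weight.sum()

# Toy example with three provinces (figures are illustrative only).
pop = [50, 30, 20]       # million people
gdp = [400, 150, 60]     # billion yuan
hist = [6.0, 3.0, 1.0]   # million tons emitted in the base period
print(historical_egalitarian(10.0, hist))
print(population_egalitarian(10.0, pop))
print(pays_ability_egalitarian(10.0, pop, gdp))
```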
Allocation Methods Based on Efficiency
DEA is an effective method to evaluate the relative efficiency of a decision-making unit (DMU) with multiple inputs and multiple outputs. Traditional DEA models usually assume that the input (or output) variables of each DMU do not affect each other. However, such independence does not exist under the total amount control mode, and there is a zero-sum-gains game relationship among all DMUs. When the total amount of one input (or output) remains constant, an inefficient DMU needs to reduce its input (or increase its output) to achieve effectiveness, while the other DMUs need to increase their input (or reduce their output) accordingly. Lins et al. [65] proposed the ZSG-DEA model for measuring the efficiency of each DMU, and iteratively adjusted the allocation results of the inefficient DMUs so that all DMUs reach the efficiency frontier. Considering the weak disposability and null-jointness of undesired output, Färe et al. [66] proposed the concept of environmental production technology (EPT). Zhou et al. [67] constructed the environmental DEA model based on EPT. Miao et al. [68] further proposed the environmental ZSG-DEA model. Referring to the environmental ZSG-DEA model and considering the importance of different elements, we construct a weighted environmental ZSG-DEA model (model (4)), in which φl and θjl (j = 1, 2, …, s) are the energy efficiency and the jth undesired output efficiency of the lth province, respectively; α and βj are the corresponding weights, and they satisfy the condition α + ∑j βj = 1; Yl and Iil represent the desirable output and the ith unallocated input of the lth province, respectively; El and Ujl are the energy consumption quota and the jth undesired output quota of the lth province, respectively; and δk (k = 1, 2, …, n) is the decision variable. The inequality constraints in the model imply the strong disposability of inputs and desirable output, while the equality constraints imply the weak disposability and null-jointness of undesirable outputs. According to the ZSG-DEA model, inefficient DMUs need to reduce their quotas to improve efficiency. In order to keep the total quotas constant, when the inefficient lth province reduces its quotas by El(1 − φl) and Ujl(1 − θjl), the other n − 1 provinces need to increase their quotas according to the proportion of quotas last obtained; that is, each province increases or reduces its quotas according to this proportional adjustment. After all provincial quotas are adjusted, the new provincial energy consumption and undesired output quotas are obtained (Equations (5) and (6), respectively).
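The algebra of model (4) is not reproduced in the text, so the Python sketch below is only one plausible reading of the weighted environmental DEA efficiency measure it describes, not the authors' own formulation; the function name, data layout, and constraint set are assumptions. For a given province, the linear program contracts its energy quota (factor φ) and each undesirable-output quota (factors θj) as far as possible while a convex combination of all provinces still produces at least that province's GDP and uses no more of the unallocated inputs; the equality constraints on undesirable outputs stand in for weak disposability and null-jointness.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_env_efficiency(Y, X, E, U, prov, alpha, beta):
    """Return (phi, theta) for province `prov` under an assumed reading of model (4).

    Y : (n,)   desirable output (GDP)
    X : (m, n) unallocated inputs (e.g., population, capital stock)
    E : (n,)   energy consumption quotas
    U : (s, n) undesirable output quotas (e.g., SO2, NOx, CO2)
    alpha, beta : weights with alpha + sum(beta) == 1
    """
    Y = np.asarray(Y, float)
    X = np.atleast_2d(np.asarray(X, float))
    E = np.asarray(E, float)
    U = np.atleast_2d(np.asarray(U, float))
    n, m, s = Y.size, X.shape[0], U.shape[0]

    # decision vector: [delta_1 .. delta_n, phi, theta_1 .. theta_s]
    c = np.concatenate([np.zeros(n), [alpha], np.asarray(beta, float)])

    A_ub = [np.concatenate([-Y, np.zeros(1 + s)])]            # GDP of mix >= GDP of prov
    b_ub = [-Y[prov]]
    for i in range(m):                                        # inputs of mix <= inputs of prov
        A_ub.append(np.concatenate([X[i], np.zeros(1 + s)]))
        b_ub.append(X[i, prov])
    A_ub.append(np.concatenate([E, [-E[prov]], np.zeros(s)]))  # energy of mix <= phi * E_prov
    b_ub.append(0.0)

    A_eq, b_eq = [], []
    for j in range(s):                                        # undesirable of mix = theta_j * U_j,prov
        row = np.concatenate([U[j], [0.0], np.zeros(s)])
        row[n + 1 + j] = -U[j, prov]
        A_eq.append(row)
        b_eq.append(0.0)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (n + 1 + s), method="highs")
    return res.x[n], res.x[n + 1:]
```

Under this reading, a province on the frontier obtains φ = θj = 1, which is consistent with how frontier provinces are treated in the adjustment procedure described next.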
The adjusted quotas are substituted into model (4) to calculate the weighted efficiency again. Similarly, the quotas of each province are adjusted in a new round according to the proportional reduction method. When the weighted efficiency of each province is equal to one, the adjustment ends. The adjustment result is the optimal EAC allocation scheme.
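Because Equations (5) and (6) are not reproduced above, the following Python sketch is only an interpretation of the proportional adjustment they describe, not the authors' exact formula: every province keeps the efficient share of its quota, and the amount released by each inefficient province is handed back to the other provinces in proportion to the quotas they held in the previous round, so the national total is unchanged. In the full procedure the same step would be applied to the energy quota (with φ) and to each undesired-output quota (with the corresponding θj), after which the efficiencies are recomputed with model (4) and the round is repeated until every weighted efficiency equals one.

```python
import numpy as np

def zsg_adjust(quota, efficiency):
    """One zero-sum-gains adjustment round (interpretation, see lead-in)."""
    quota = np.asarray(quota, dtype=float)
    efficiency = np.asarray(efficiency, dtype=float)
    kept = efficiency * quota                # quota retained after contraction
    released = quota - kept                  # quota given up by each province
    new_quota = kept.copy()
    for k, r in enumerate(released):
        if r <= 0:
            continue
        others = np.arange(len(quota)) != k
        share = quota[others] / quota[others].sum()
        new_quota[others] += r * share       # redistribute province k's release
    return new_quota

# Toy example: three provinces, total stays at 100.
q = np.array([50.0, 30.0, 20.0])
eff = np.array([1.0, 0.8, 0.9])
q_new = zsg_adjust(q, eff)
print(q_new, q_new.sum())  # the total of 100.0 is preserved
```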
Materials
Referring to the input-output indicators of previous studies, we select population, capital stock, and energy consumption as input variables, GDP as the desirable output variable, and SO2, NOX, and CO2 emissions as undesirable output variables. It should be noted that although the Chinese government has proposed the target of reducing PM2.5 concentration by 10% in prefecture-level and above cities and the expectation of curbing the cities' growth trend of O3 concentration in the 14th Five-Year Plan, considering the availability of historical data at the provincial level and the important impacts of SO2 and NOX on PM2.5 formation and of NOX on O3 formation, we still choose SO2 and NOX emissions as the representatives of air pollutants in this paper. Due to the lack of data for Tibet, Taiwan, Hong Kong, and Macao, we allocate EAC quotas to the other 30 provinces in China. The data sources are as follows: Statistics Yearbook (2012-2020). According to the 15% reduction target of energy intensity in WPEE, we calculate that the total energy consumption in 2020 should be limited to 5.11 billion tce, which is greater than the total energy consumption control target of 5 billion tce. Therefore, we set the total energy consumption limit in 2020 as 5 billion tce. According to the 13.5% reduction target of energy intensity in the 14th Five-Year Plan and the total GDP of the 30 provinces in 2025, the total energy consumption limit in 2025 is 5.76 billion tce. According to the 18% reduction target of CO2 emission intensity in WPGE, the CO2 emission limit in 2020 is 13.78 billion tons. According to the 18% reduction target of CO2 emission intensity in the 14th Five-Year Plan, CO2 emission intensity will be reduced by 3.9% annually from 2020 to 2025. Assuming that CO2 emission intensity decreases by 3.9% from 2019 to 2020, based on the CO2 emission intensity of 1.48 tons per 10,000 yuan in 2019, the CO2 emission intensity in 2025 is 1.17 tons per 10,000 yuan, and the CO2 emission limit in 2025 is 15.40 billion tons. According to the synergistic effect equations of energy conservation and emission reduction [64] and the energy consumption intensity of 0.5156 tce per 10,000 yuan in 2020, we calculate that the emission intensities of SO2, NOX, and CO2 in 2020 are 9.24 × 10^−4, 1.518 × 10^−3, and 1.4825 tons per 10,000 yuan, respectively, and that the SO2, NOX, and CO2 emissions are 9.01 million tons, 14.70 million tons, and 13.78 billion tons, respectively. The SO2 and NOX emissions from the synergistic effect are less than those from WPEE, and the CO2 emissions from the synergistic effect are greater than those from WPGE. Therefore, we take the SO2 and NOX emissions from the synergistic effect and the CO2 emissions from WPGE as the 2020 limits.
Based on the energy consumption intensity and GDP in 2025, the limits of SO2, NOX, and CO2 emissions from the synergistic effect in 2025 will be 1.21 million tons, 9.99 million tons, and 16.79 billion tons, respectively. Because the CO2 emission limit from the 14th Five-Year Plan is less than that from the synergy effect, we set the CO2 emission limit as 15.40 billion tons.
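As a quick arithmetic check of the intensity figures quoted above (back-of-the-envelope Python only, using the numbers already stated in the text), an 18% cut in CO2 emission intensity over the five years 2021-2025 corresponds to roughly a 3.9% annual decline, and applying six such annual declines to the 2019 intensity of 1.48 tons per 10,000 yuan gives approximately 1.17 tons per 10,000 yuan in 2025.

```python
# 18% intensity reduction over 5 years -> equivalent annual decline rate,
# then compound that rate from the 2019 intensity to 2025 (6 annual steps).
annual_decline = 1 - (1 - 0.18) ** (1 / 5)          # ~0.039 per year
intensity_2025 = 1.48 * (1 - annual_decline) ** 6    # tons of CO2 per 10,000 yuan
print(round(annual_decline, 3), round(intensity_2025, 2))  # 0.039, ~1.17
```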
Initial Allocation Result in 2020 Based on Equity
Since energy is an important input resource for regional economic development, maintaining the continuity of energy consumption is very important for stable economic development. Limited by energy endowment, transportation, and other objective conditions, regional energy supply is difficult to change rapidly in the short term. In view of the above reasons, we choose 2011-2019 as the base period and apply the principle of historical egalitarian for the initial allocation of total energy consumption quota. Table 1 shows the initial allocation results of energy consumption quota in 2020. Atmospheric emissions can be controlled in a coordinated way between provinces, and advanced technological means can be used to strengthen emission reduction within provinces. Therefore, we apply the principles of historical egalitarian, population egalitarian, and pays ability egalitarian to initially allocate the provincial quotas of SO2, NOX, and CO2 emissions. By consulting the experts in the field of emission right allocation, we conclude that there is no significant difference between historical egalitarian and pays ability egalitarian in the near future, and they are more feasible than population egalitarian. Therefore, we establish three order relations to reflect the differences of different principles, namely, a. historical egalitarian = pays ability egalitarian > population egalitarian; b. historical egalitarian > pays ability egalitarian > population egalitarian; and c. pays ability egalitarian > historical egalitarian > population egalitarian. Accordingly, we establish three weight vector scenarios about historical egalitarian, population egalitarian, and pays ability egalitarian, namely: A1 (3/7, 1/7, 3/7), A2 (4/7, 1/7, 2/7), and A3 (2/7, 1/7, 4/7). Table 2 shows the initial allocation results of SO2, NOX, and CO2 emission quotas in 2020 under three scenarios. It is assumed that decision-makers consider energy conservation, air pollutant emission reduction, and CO2 emission reduction to be equally important, that is, α, β1, β2, and β3 in model (4) are all 0.25. By substituting the initial provincial energy consumption and SO2, NOX, and CO2 emission quotas under scenario A1 into model (4), we calculate the energy consumption efficiency (φ), SO2 emission efficiency (θ1), NOX emission efficiency (θ2), CO2 emission efficiency (θ3), and the weighted comprehensive efficiency (η), which can be seen in Figure 1. Beijing and Shanghai are at the efficiency frontier, indicating that they have achieved effective allocation and need to accept the excess quotas from other provinces in the subsequent adjustment process. The efficiency of the other 28 provinces has not yet reached the frontier, and the quotas of them will be reduced in the subsequent adjustment process. According to Equations (5) and (6), we make iterative adjustments. As can be seen from Figure 2, the comprehensive efficiency of the initial allocation result is the lowest. After 16 iterations, the comprehensive efficiency of each province is 1. The final adjustment result is the optimal EAC allocation result, as shown in Table 3. Similarly, we calculate the optimal EAC allocation results based on the weighted environmental ZSG-DEA model under scenarios A2 and A3 (see Tables A1 and A2 in Appendix A). In order to analyze the influence of different weights on the allocation results, we conduct sensitivity analysis on the values of α, β1, β2, and β3. 
Under the conditions that α > 0, β1 > 0, β2 > 0, β3 > 0 and α + β1 + β2 + β3 = 1, we traverse all possible values of these weights under the three scenarios and find that the optimal result does not change in any scenario. This means that the optimal allocation result is independent of the weight distribution of the elements and depends only on their initial allocation quotas.
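As an illustration only (not part of the original study), the sensitivity traversal described above can be sketched as follows in Python. The grid step and the placeholder function solve_allocation are assumptions introduced here; the actual adjustment procedure is the weighted environmental ZSG-DEA model of the paper.

import itertools

def weight_grid(divisions=20):
    # Enumerate (alpha, b1, b2, b3) with all entries > 0 and summing to 1,
    # on a coarse grid. The grid resolution is an illustrative assumption.
    ticks = [k / divisions for k in range(1, divisions)]
    for a, b1, b2 in itertools.product(ticks, repeat=3):
        b3 = 1.0 - a - b1 - b2
        if b3 > 1e-9:
            yield (a, b1, b2, round(b3, 4))

def solve_allocation(weights):
    # Placeholder standing in for the weighted environmental ZSG-DEA adjustment.
    return "same-optimal-result"

results = {solve_allocation(w) for w in weight_grid()}
print(sum(1 for _ in weight_grid()), "weight vectors tested;",
      "optimal result unchanged" if len(results) == 1 else "result varies")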
Comparison with National Planning Targets
According to the two national plans of WPEE and WPGE, we calculate the provincial SO2, NOX, and CO2 emission quotas. In order to compare the fairness of the national planning scheme with that of the optimal allocation scheme, we calculate their Gini coefficients. The Gini coefficient method has the advantages of being immune to outliers, making full use of all samples, and having relatively clear evaluation criteria. Its calculation formula is as follows: where i is the serial number of each province, reordered from small to large according to the per capita or intensity index of energy or environment; Xi and Yi are the cumulative proportions of population (or GDP) and energy consumption (or atmospheric emissions) up to the ith province, respectively. The greater the Gini coefficient, the more unfair the provincial quota allocation. Generally, a Gini coefficient less than 0.2 indicates absolute fairness, between 0.2 and 0.3 indicates fairness, between 0.3 and 0.4 is relatively reasonable, between 0.4 and 0.5 indicates a relatively large gap, and greater than 0.5 indicates a large gap. The calculation results are shown in Table 4. Table 4. Gini coefficients of the national planning scheme and three optimal allocation schemes in 2020. The intensity Gini coefficients of the three optimal allocation schemes are all less than 0.2, indicating that they achieve absolute fairness. Apart from the per capita Gini coefficient of SO2 emissions under scenario A3, which achieves absolute fairness, the other per capita Gini coefficients are all less than 0.3, indicating that they achieve interpersonal fairness. Aside from the Gini coefficient of per capita energy consumption in the national planning scheme being less than that in the optimal allocation scheme, all the other Gini coefficients in the national planning scheme are greater than those in the optimal allocation scheme. This shows that the optimal allocation scheme based on the weighted environmental ZSG-DEA model is generally fairer than the national planning scheme. By comparing the optimal allocation schemes under the three initial allocation scenarios, it can be seen that the Gini coefficient of each index under scenario A3 is the smallest. Therefore, the initial allocation under scenario A3 can improve the fairness of the final EAC quota allocation.
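The Gini formula itself is not reproduced in the extracted text above. The sketch below (an editorial illustration, not the authors' code) uses the commonly used trapezoidal form G = 1 - sum((X_i - X_{i-1})(Y_i + Y_{i-1})) on cumulative shares, which matches the variables described but may differ in detail from the authors' exact expression; the cumulative shares below are hypothetical.

def gini(cum_x, cum_y):
    # Trapezoidal Gini coefficient from cumulative proportions that end at 1.0.
    # Provinces must first be sorted by the per-capita (or intensity) index.
    g = 1.0
    prev_x, prev_y = 0.0, 0.0
    for x, y in zip(cum_x, cum_y):
        g -= (x - prev_x) * (y + prev_y)
        prev_x, prev_y = x, y
    return g

# Hypothetical cumulative population shares vs. cumulative emission shares.
cum_population = [0.2, 0.45, 0.7, 1.0]
cum_emissions = [0.1, 0.3, 0.6, 1.0]
print(round(gini(cum_population, cum_emissions), 3))

With perfectly equal shares the sketch returns 0, and the value grows toward 1 as the allocation becomes more unequal, consistent with the interpretation given above.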
By substituting the quotas in the national planning scheme into model (4), we calculate the allocation efficiency of each province, as shown in Figure 3. Except for Beijing and Tianjin, the other 28 provinces are not at the efficiency frontier. Therefore, from the perspective of efficiency, the optimal allocation scheme is better than the national planning scheme.
Allocation Results in 2025
According to the allocation results in 2020, the optimal allocation scheme under scenario A3 can not only achieve efficiency optimization, but also achieve the best fairness. Therefore, we select scenario A3 for the initial allocation of atmospheric emission quotas in 2025, and the result is shown in Table 5. Assuming that α, β1, β2, and β3 are all 0.25, we substitute the above initial allocation result into model (4) to calculate the weighted comprehensive efficiency. After 14 iterative adjustments, the comprehensive efficiency of each province is 1. In order to reflect the influence of different weight distributions on the allocation results, we again traverse all possible values of α, β1, β2, and β3 for sensitivity analysis. The results show that the optimal allocation result in 2025 is still insensitive to the weights of the different elements, which is consistent with the conclusion for 2020. The final allocation result is shown in Table 6. Table 7 presents the per capita and intensity Gini coefficients of the optimal allocation scheme. All the Gini coefficients are less than 0.2, indicating that the optimal allocation scheme achieves provincial absolute fairness in terms of per capita and intensity indexes. Table 7. Gini coefficients of the optimal EAC quota allocation scheme in 2025.
Measurement of Energy Conservation and Emission Reduction Pressures
It is assumed that the provincial population, GDP, and other planning targets and the forecasted capital stocks in 2025 are achievable, and that each province has the same pressure of energy conservation and emission reduction under the initial fair allocation scheme. In order to measure the pressure of the optimal allocation scheme, we define the pressure index as follows: where Pij is the pressure index of the jth allocation element of the ith province, and Xij0 and Xij are the initial and optimal allocation results, respectively. If Pij > 0, it means that the pressure of energy conservation or emission reduction increases compared with the initial pressure. If Pij ≤ 0, it means that there is no increased pressure and even an excess of quota space. Due to the homology and synchronization between air pollutants and CO2 emissions, SO2, NOX, and CO2 emissions can be synergistically reduced through technical or management measures. We therefore synthesize their pressure indexes with an equal weight of 1/3 to calculate the pressure index of emission reduction. Table 8 shows the pressure index values of energy conservation and emission reduction under the optimal allocation scheme. By averaging the pressure index values greater than 0, the average energy conservation pressure index and average emission reduction pressure index are 33.38% and 31.78%, respectively. We define the case where the pressure index value is greater than or equal to the average value as high pressure, the case where the pressure index value is greater than 0 and less than the average value as low pressure, and the case where the pressure index value is less than or equal to 0 as no pressure. It can be seen from Table 8 that different provinces will undertake different energy conservation and emission reduction pressures. In terms of the energy conservation pressure index, 16 provinces, namely Beijing, Tianjin, Jilin, Shanghai, Jiangsu, Zhejiang, Anhui, Fujian, Jiangxi, Henan, Hubei, Hunan, Guangdong, Guangxi, Hainan, and Chongqing, have no pressure; 7 provinces, namely Liaoning, Heilongjiang, Shandong, Sichuan, Guizhou, Yunnan, and Shaanxi, undertake low pressure; and 7 provinces, namely Hebei, Shanxi, Inner Mongolia, Gansu, Qinghai, Ningxia, and Xinjiang, undertake high pressure. In terms of the emission reduction pressure index, 11 provinces, namely Beijing, Tianjin, Shanghai, Jiangsu, Zhejiang, Fujian, Shandong, Hubei, Guangdong, Hainan, and Chongqing, have no pressure; 9 provinces, namely Inner Mongolia, Liaoning, Jilin, Anhui, Jiangxi, Henan, Hunan, Sichuan, and Shaanxi, undertake low pressure; and 10 provinces, namely Hebei, Shanxi, Heilongjiang, Guangxi, Guizhou, Yunnan, Gansu, Qinghai, Ningxia, and Xinjiang, undertake high pressure.
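The exact pressure-index formula is not reproduced in the extracted text. The following minimal Python sketch (an editorial illustration with hypothetical quotas) assumes the form P = (X0 - X) / X0, which is positive when the optimal quota X falls below the initial quota X0 and is therefore consistent with the sign convention stated above; the equal-weight synthesis and the classification against the mean of positive values follow the text.

def pressure_index(x0, x):
    # Assumed form: positive when the optimal quota is below the initial quota.
    return (x0 - x) / x0

def emission_pressure(p_so2, p_nox, p_co2):
    # Equal-weight (1/3) synthesis of the three emission pressure indexes.
    return (p_so2 + p_nox + p_co2) / 3.0

def classify(p, mean_positive):
    if p <= 0:
        return "no pressure"
    return "high pressure" if p >= mean_positive else "low pressure"

# Hypothetical (initial, optimal) quotas for two illustrative provinces.
provinces = {"ProvinceX": (100.0, 60.0), "ProvinceY": (100.0, 120.0)}
p = {name: pressure_index(x0, x) for name, (x0, x) in provinces.items()}
positives = [v for v in p.values() if v > 0]
mean_pos = sum(positives) / len(positives) if positives else 0.0
print({name: (round(v, 3), classify(v, mean_pos)) for name, v in p.items()})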
From the two dimensions of energy conservation pressure and emission reduction pressure, the 30 provinces can be divided into 7 categories, as shown in Figure 4. Category I includes 10 provinces, i.e., Beijing, Tianjin, Shanghai, Jiangsu, Zhejiang, Fujian, Hubei, Guangdong, Hainan, and Chongqing, which have no energy conservation or emission reduction pressure. Category II includes 6 provinces, i.e., Anhui, Jiangxi, Hunan, Guangxi, Jilin, and Henan, which have no energy conservation pressure but have certain emission reduction pressure. Category III only includes Shandong province, which has certain energy conservation pressure but no emission reduction pressure. Category IV includes Liaoning, Sichuan, and Shaanxi provinces, which have certain energy conservation and emission reduction pressures. Category V includes Heilongjiang, Guizhou, and Yunnan provinces, which have certain energy conservation pressure and high emission reduction pressure. Category VI includes 6 provinces, i.e., Hebei, Shanxi, Gansu, Ningxia, Qinghai, and Xinjiang, which have high energy conservation and emission reduction pressures. Category VII only includes Inner Mongolia, which has high energy conservation pressure and certain emission reduction pressure. It can be seen from Table 8 that the pressures of energy conservation and emission reduction vary greatly among provinces. Provinces with a relatively backward economy such as Xinjiang, Ningxia, and Qinghai undertake huge pressures, while economically developed provinces such as Beijing, Tianjin, and Shanghai have sufficient quotas. Undoubtedly, the huge pressures of energy conservation and emission reduction will make the relatively backward provinces bear greater economic costs, which may lead to a further expansion of the provincial economic gap. Therefore, in order to improve the enforceability of the optimal quota allocation scheme, China should accelerate the implementation of the regional coordinated development strategy and promote regional integration and cooperation among provinces. China has clearly proposed three major regional development strategies, namely "the Belt and Road", Beijing-Tianjin-Hebei Collaborative Development, and the Yangtze River Economic Belt. We calculate the energy conservation and emission reduction pressure indexes for the major regions involved in the national strategies, and the results are shown in Table 9. It can be seen that, relying on the Yangtze River golden waterway, the Yangtze River Economic Belt connecting 11 provinces (Shanghai, Jiangsu, Zhejiang, Anhui, Jiangxi, Hubei, Hunan, Chongqing, Sichuan, Yunnan, Guizhou) in the eastern, central, and western regions has no energy conservation or emission reduction pressure, with a 23.13% energy conservation surplus and a 12.90% emission reduction surplus. Relying on coastal cities and ports, the 21st Century Maritime Silk Road connecting 11 provinces (Liaoning, Hebei, Tianjin, Shandong, Shanghai, Jiangsu, Zhejiang, Fujian, Guangdong, Guangxi, Hainan) in the northern and southern regions also has no pressure, with a 13.87% energy conservation surplus and a 31.82% emission reduction surplus. The energy conservation pressure index of Beijing-Tianjin-Hebei is 10.19%, and there is a 4.22% emission reduction surplus. The New Eurasian Continental Bridge Economic Corridor covering Jiangsu, Anhui, Henan, Shaanxi, Gansu, Qinghai, and Xinjiang will bear an energy conservation pressure index of 0.82% and an emission reduction pressure index of 7.69%.
The China-Mongolia-Russia Economic Corridor covering Beijing, Tianjin, Hebei, Inner Mongolia, Liaoning, Jilin, and Heilongjiang will bear the energy conservation pressure index of 24.14% and the emission reduction pressure index of 13.79%. Table 9. Energy conservation and emission reduction pressure indexes in three national strategic regions.
Conclusions and Policy Implications
From the perspective of improving the fairness and efficiency of allocation, we construct a weighted ZSG-DEA model to adjust and optimize the initial allocation scheme of EAC quotas, and conduct a detailed analysis of provincial EAC quota allocation in 2020 and 2025. The main conclusions are as follows: (1) The proposed weighted environmental ZSG-DEA model has the following advantages: first, it considers the strong disposability of energy, population, fixed assets, and other input factors and of the expected output GDP, as well as the weak disposability and null-jointness of unexpected outputs such as air pollutants and CO2 emissions, which is more in line with the reality of economic production. Second, the weights of energy efficiency and unexpected output efficiency can reflect the decision-makers' attention to different allocation elements, which makes the allocation model interactive. Third, the model is a general model applicable to the allocation of various elements, and it can be extended to meet the allocation requirements of environmental factors other than those mentioned in this paper. (2) The efficiency of the initial allocation scheme in 2020 based on the fairness principle is low, and there are significant differences in efficiency values between provinces. After applying the weighted environmental ZSG-DEA model to optimize the initial scheme, the efficiency of EAC quota allocation is significantly improved, and the efficiency value of each province is 1, realizing the effective allocation of inputs and outputs. The optimal allocation result in 2020 shows that the fairness and efficiency of the optimal allocation scheme are better than those of the national planning scheme. In addition, the sensitivity analysis of the EAC element weights shows that the optimal allocation results are independent of the weight distribution of the elements and related only to their initial allocation quotas. The optimal allocation scheme based on the initial priority order of "pays ability egalitarian > historical egalitarian > population egalitarian" has the best fairness. (3) The optimal allocation scheme in 2025 is not only effective, but also realizes absolute fairness. However, different provinces will undertake different energy conservation and emission reduction pressures. By implementing the regional development strategy and promoting coordinated energy conservation and emission reduction among provinces, the enforceability of the allocation scheme can be improved.
Combined with energy conservation and emission reduction pressures of 7 categories, we put forward the following policy implications for each category. (a) The 10 provinces in Category I have no energy conservation and emissions reduction pressures, and they can continue to implement the existing policies and measures, or sell excess quotas in the market through emission trading. (b) The 6 provinces in Category II only have certain emission reduction pressure, and they can reduce air pollutants and CO2 emissions by optimizing the energy structure, adopting desulfurization and denitrification technology, and installing waste gas treatment equipment. (c) Shandong Province in Category III only has certain energy conservation pressure, and it can adopt energy-saving technology, eliminate backward production capacity and accelerate the transformation of old and new kinetic energy. Due to the synergistic effect of energy conservation and emission reduction, Shandong province may reduce its air pollutants and CO2 emissions while saving energy. Therefore, it can sell excess emission quotas through the market. (d) The three provinces in Category IV have certain energy conservation and emission reduction pressure, and they can promote energy-saving technologies and reduce underdeveloped production capacity to achieve energy conservation and the coordinated air pollutants and CO2 emission reduction. (e) The three provinces in Category V have certain energy conservation pressure and high emission reduction pressure. On the one hand, they should adopt desulfurization and denitrification technologies and install waste gas treatment facilities to reduce emissions. On the other hand, they can take energy-saving measures such as promoting energy-saving technologies and accelerating the elimination of backward production capacity to jointly reduce air pollutants and CO2 emissions. (f) The 6 provinces in Category VI have high dual pressures on energy conservation and emission reduction. The energy consumption of these provinces is mainly dominated by traditional energy such as coal. They should make great efforts to improve the energy structure, use clean and efficient energy, and avoid excessive use of coal and other fossil fuels. In addition, they should take environmental protection measures such as desulfurization and denitrification technology and installation of industrial waste gas treatment equipment for industrial emission sources. (g) Inner Mongolia in Category VII has high energy conservation pressure and certain emission reduction pressure. It should strengthen the popularization and application of energy-saving technologies, eliminate backward production capacity, and improve the energy structure.
In accelerating the implementation of regional development strategy, China should build efficient transportation networks, improve logistics and transportation systems, weaken the barriers of factor flow between regions, and establish the benefit compensation mechanism between the source and destination of energy and atmospheric emissions. According to the energy conservation and emission reduction pressures in the major regions involved in the three major regional development strategies, we put forward the following policy suggestions: (a) Yangtze River Economic Belt and 21st Century Maritime Silk Road have surplus of energy conservation and emission reduction, and they can continue to implement the current regional energy conservation and emission reduction policies. (b) Beijing-Tianjin-Hebei has achieved remarkable result in the coordinated control of air pollutants. In the future, it should strengthen the construction of integrated energy system, coordinate energy cooperation, and reduce energy consumption. (c) The New Eurasian Continental Bridge Economic Corridor can make full use of the superimposed advantages of Jiangsu and Anhui in the two regional development strategies to drive the integration of provinces in this region with the Yangtze River economic belt. In addition, the region should give full play to its role as a bridge linking central and eastern European countries, strengthen regional cooperation in energy, technology, and other fields, and realize high-quality development of regional energy and environment. (d) The China-Mongolia-Russia Economic Corridor can make full use of the regional cooperation advantages, increase regional cooperation in investment, trade, and energy, and optimize the industrial structure and energy consumption structure, so as to promote the efficient completion of regional energy conservation and emission reduction targets.
To sum up, we believe that China should continue to implement the responsibility system for energy conservation and emission reduction targets and vigorously promote regional development strategy. First, differentiated EAC quota targets should be allocated to all provinces to ensure their sustainable economic development and high efficiency in energy conservation and emission reduction. Second, a more extensive and in-depth regional integration strategy should be promoted, so as to promote the flow and exchange of elements in different regions, and promote the realization of provincial energy conservation and emission reduction targets. Third, each province should develop new energy, increase the use of clean energy, optimize industrial structure, and speed up industrial upgrading.
This study has some limitations. First, due to data availability, only SO2 and NOx are selected as atmospheric pollutants. As data on PM2.5, O3, and other pollutants in China become more abundant, the allocation of multiple pollutants could be included in further studies. Second, FCAM can also achieve efficiency optimization based on a fair initial allocation. Constructing an environmental FCAM based on EPT and comparing it with the environmental ZSG-DEA model in this paper would enrich the research on EAC quota allocation.
Informed Consent Statement: This paper does not contain any studies with human or animal participants performed by the authors.
Data Availability Statement:
The data used to support the findings of this study are available from the corresponding author upon request. | 9,944.2 | 2022-02-16T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Compressive Strength and Slump Prediction of Two Blended Agro Waste Materials Concretes
Oluwaseye Onikeku, Stanley M. Shitote, John Mwero, Adeola A. Adedeji and Christopher Kanali. Civil Engineering Department, Pan African University Institute for Basic Sciences, Technology and Innovation (PAUISTI), 62000-00200 Nairobi, Kenya; Civil Engineering Department, Rongo University, 103-40403 Rongo, Kenya; Civil Engineering Department, University of Nairobi, Nairobi, Kenya; Civil Engineering Department, University of Ilorin, Ilorin, Nigeria; Agricultural Engineering Department, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
INTRODUCTION
Concrete is a combination of aggregates and cement that can be formed into any size and shape required by the expected design [1]. The world's yearly production of concrete is estimated to be greater than ten billion tons [2,3]. Cement manufacturing consumes a great deal of resources and energy: producing one ton of cement depletes roughly 1.7 tons of raw materials and 4 gigajoules (GJ) of energy, and discharges roughly 0.73-0.99 tons of carbon dioxide into the atmosphere [2, 4-6]. As a result, the concrete sector is now seeking alternative materials for concrete production in order to attain sustainable growth and to reduce this harsh environmental impact [7].
Agro-industrial wastes have been employed to replace cement so as to produce affordable concrete. This, in turn, promotes solid waste management and the conservation of natural resources [8-15]. Other waste materials such as sewage sludge ash [16], nano-silica particles [17], blast furnace slag and nano-silica hydrosols [18], calcined sewage sludge ash [19], and nano-silica with sewage sludge ash [20] have been used to replace cement at different percentage replacements for producing concrete. Bamboo leaf ash (BLA) and baggage ash (BA) are among the agro-industrial wastes utilised as promising cement replacement materials.
Traditional concrete is a mixture of water, cement, fine aggregates, and coarse aggregates [7]. The features of the concrete are affected by various factors like the type of cement, water-cement ratio, water content, quantity and quality of aggregates [21]. The conventional method employed in modelling the impacts of these factors on fresh and mechanical features of concrete begins with an assumption of an analytical equation [22]. Meanwhile, the method does not capture the real scenario when some of the concrete components are not the traditional elements and when the assumptions are wrong.
Slump and compressive strength are important fresh and mechanical features of concrete and play a vital role in the design of concrete entities [23]. Workability is the characteristic of concrete which ascertains the efforts needed for compaction, finishing, and insertion with the lowest loss of uniformity. The final compressive strength of concrete is mostly achieved after curing age of 28 days which serves as a point of reference to define the strength at the subsequent age [24,25]. Methods like mechanical modelling, analytical modelling, artificial intelligence, multiple linear regression, and other statistical methods [7,25] have been used lately for forecasting the slump and strength of concrete. An artificial intelligence technique which has flourished well and grown swiftly in engineering practices is Artificial Neural Network (ANN) [26,27]. The artificial neural network has been used in lots of civil engineering practices such as groundwater monitoring, concrete strength forecasting, material behaviour modelling, discovery of structural damage, concrete mix apportioning, and structural system recognition [28, 26, 29 -31].
Numerous research works [7, 21, 29, 30-34] have employed artificial neural networks for forecasting the strengths of various kinds of concrete. Duan et al. [7] developed an artificial neural network model using a single hidden layer; the model forecasted the recycled aggregate concrete strength accurately. Chopra et al. [21] used a sigmoid activation function and the Levenberg-Marquardt training function for predicting compressive strength based on ANN modelling, and found that the sigmoid activation function and the Levenberg-Marquardt algorithm were the best forecasting tools. An ANN predictive model was also constructed by Hossein et al. [28] for forecasting the compressive strength of recycled aggregate concrete; the outcome of their studies revealed that the best validation performance, 0.0044 MPa, occurred at epoch 5. A multi-layer feed-forward artificial neural network architecture known as 5-10-1 was developed by Kalra et al. [35] for the purpose of forecasting the compressive strength of concrete; based on their research, the best validation was noted at epoch 40 with a mean squared error value of 10.99 MPa. Jamaladin et al. [36] forecasted the compressive strength of high-performance concrete through ANN. They studied thirty ANN architectures and found that 8-10-6-1 was the best architecture for their model, which was capable of simulating experimental results correctly. Getahun et al. [37] also employed ANN predictive modelling for prediction of compressive strength and tensile strength using rice husk ash and reclaimed asphalt aggregate concrete. They used a 15-15-2 multi-layer feed-forward architecture, and their actual and predicted results were very close. Heidari et al. [38] used an ANN model to predict the strength of concrete and found that the real values and the values predicted by the neural network were very close. An ANN model was also constructed by Bharathi et al. [39] for forecasting the hardened and fresh properties of self-compacting concrete with partial replacement of cement by fly ash.
Multiple Linear Regression (MLR), on the other hand, is a statistical method which utilises several independent variables in order to forecast the outcome of a dependent variable. The basic role of MLR is to model the relationships that exist between the independent variables and the dependent variable. It has been used in civil engineering for slump prediction and strength prediction as well [40]. Charhate et al. [40] utilised MLR for predicting the slump of concrete grades M20, M25, M30, M35, M40, M45, M50, M60, and M70; the predicted slump value obtained for each grade of concrete was close to the actual slump value. Yeh [41] used MLP for predicting the slump flow of high-performance concrete; the outcome of the research indicated that the differences between the actual slump and the forecasted slump were minimal.
The key factor in ANN's wide applicability and acceptance is its efficacy in resolving complicated and complex engineering problems [26]. The feed-forward network (multi-layer perceptron) is the most frequently utilised ANN architecture [42]. A feed-forward network consists of layers of extensively interconnected computational neurons. The nodes are linked to each other by connection weights and receive input signals from the neurons connected to them. Succeeding layers of nodes obtain input from the preceding layers; that is, the outputs of the nodes of each layer provide the inputs to the nodes of the next layer. The processing elements of an artificial neural network are analogous to the neurons of the human brain, being computational units grouped in layers [22]. Furthermore, ANN has shown remarkable potential in modelling aspects of the human brain [43].
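To make the layer-to-layer flow of signals concrete, the following minimal Python sketch performs a forward pass through a one-hidden-layer feed-forward network. It is a generic editorial illustration only: the weights, biases, and sigmoid hidden activation are made up and are not the architecture used in this study.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: each neuron weights the inputs, adds a bias, and applies sigmoid.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    # Linear (identity) output node, as commonly used for regression targets.
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

x = [0.5, 0.2, 0.8]                                  # three illustrative input features
w_hidden = [[0.1, -0.4, 0.3], [0.7, 0.2, -0.1]]      # two hidden neurons
b_hidden = [0.0, 0.1]
w_out, b_out = [0.6, -0.3], 0.05
print(round(forward(x, w_hidden, b_hidden, w_out, b_out), 4))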
The ANN must be designed and trained accurately with data relevant to the problem in order to achieve the required purpose. The log-sigmoid, purelin, and tan-sigmoid functions are the most popular activation functions [44]. Besides the input and output neurons, the activation function is a key element that alters the performance of an ANN [45]. The backpropagation algorithm has been widely employed for feed-forward network training [37].
The key data was obtained via laboratory experimental work in order to develop the compressive strength and slump forecasting models through ANN and MLR. The combination of BLA and BA to produce concrete is a new initiative, as no study has been performed so far in this respect. The features and constituents of concrete blending BLA and BA are clearly not the same as those of normal concrete, which in turn makes it difficult to forecast compressive strength through conventional statistical and analytical modelling techniques. The statistical modelling approach adopts a predetermined parametrization and pattern. The analytical method is also often constrained by complications: it is either impractical or unduly rigorous to formulate because of its unrealistic assumptions. Conversely, an ANN model can represent non-linear, compound relations between variables by extracting the essential features inherent in the data [37]. Furthermore, ANN can still produce acceptable outcomes in the presence of impediments such as errors in the input variables that would otherwise render a model deficient; hence, it has the ability to generalize and learn from occurrences and records [39]. MLR has the ability to correctly represent linear, compound relations which exist among the variables [36].
The contributions made in this paper are: (1) to affirm the efficiency of ANN and MLR models in forecasting the compressive strength and slump of BLA and BA blended concrete; (2) previous researchers worked on 28-day compressive strength, whereas in this study the 56- and 90-day compressive strengths were also modelled using ANN, and a remarkable improvement was observed as curing age increased; in addition, the slump resulting from the combined effect of blending BLA and BA at different percentage replacements was modelled using MLR; (3) previous researchers used different pozzolans and admixtures to produce concrete, whereas in this study BLA and BA were used.
Elements Used
Elements employed in this study consist of a superplasticizer, Baggage Ash (BA), Coarse Aggregate (CA), Bamboo Leaf Ash (BLA), water, Fine Aggregate (FA), and Ordinary Portland Cement (OPC). All the materials utilised were obtained from different counties in Kenya. Coarse aggregate and fine aggregate were acquired from Mlolongo and Masinga, situated in Machakos county, Kenya. Sugar cane ash was fetched from a sugar manufacturing industry situated in Kakamega county, Kenya. The leaves of the bamboo were collected in the MAU forest in Kenya. The cement, as well as the superplasticizer, was obtained from the central part of Kenya. Potable tap water was utilised. Table 1 shows the procedures utilised for this research work; all the methods are discussed in the next section.
Test Methods for Material Classification
The grading of aggregates was conducted based on the ASTM C33 [46] concept, utilising sieves in agreement with BS ISO 3310-2 (2013) [47]. Aggregate selection was accomplished based on the BS EN 932-1 (1997) [48] batching conditions. The leaves of the bamboo were dried in the sun to eliminate the moisture present in them. Afterward, they were subjected to burning in order to remove the organic materials that might be present in them. Calcination was then carried out in a muffle furnace at 650°C with a retention time of 2 hours. After cooling, the ash was sieved through a 0.15 mm sieve. The baggage ash was fetched from the sugar industry and was also sieved with a 0.15 mm sieve.
Mix Operation and Design Mix
A total of 25 mixes were prepared. The details of the mix distributions are stated in Table 2. The cement was partly substituted using 5%, 10%, 15%, and 20% of BLA and BA by weight of the total cement. BLA was held at each of the levels 5%, 10%, 15%, and 20% in turn, while BA varied from 5% to 20% and was blended with the BLA at each replacement level. The mixtures were prepared using 195 kg/m3 of water and a constant water-binder ratio (w/b) of 0.5. Furthermore, superplasticizer at 0.8% by weight of cementitious material was employed for the mixes. Manual mixing was carried out as stipulated in BS EN 1881-125 (2013) [49], and care was taken to prevent the loss of water and cementitious materials at the mixing stage. Chemical analysis of BA, cement, and BLA was conducted using x-ray diffraction (XRD) equipment in line with BS EN 196-2 (2013) [50]. The densities of the cement, baggage ash, and bamboo leaf ash were determined based on ASTM C188 (2016) [51]. The concrete design mix for grade 25 was carried out with reference to BS EN 206 (2014) [52] as well as BS 8500-2 (2012) [53]. Fig. (2) shows the gradation of the coarse aggregates; the envelope of the curve was within the limits stated in ASTM C33 (2003) [46], and almost 90% of the coarse aggregates fall within 9.5 mm to 25 mm. Fig. (3) shows the particle size distribution of the BLA.
From Fig. (3), it can be observed that roughly 20% of the particles lie between 1 µm and 2 µm, and 80% of the particles lie between 2 µm and 150 µm, conforming to the ASTM D7928 (2017) [54] standard. This increases the water content and surface area of the bamboo leaf ash. The particle size of the BLA utilised for this study was 150 µm. The particle size distribution of BA is illustrated in Fig. (4); approximately 29% of the particles lie within 2 µm-20 µm and 71% lie between 20 µm and 70 µm, which conforms to ASTM D7928 (2017) [54]. As a result, the water content as well as the surface area of the baggage ash was raised. The particle size of the baggage ash used in this study was 150 µm. The fineness modulus of the fine aggregate was estimated to be 2.55, conforming to ASTM C33 (2003) [46], which states that the fineness modulus should lie between 2.3 and 3.1. The fine aggregate silt content was found to be 4.67%, conforming to the ASTM C33 (2003) [46] requirement that it should not exceed 5%. The specific gravities of 2.48 and 2.43 for the coarse and fine aggregates, respectively, met the range of 2.4-2.9 stated in ASTM C33 (2003) [46]. Additionally, the rodded bulk densities of 1577 kg/m3 and 1495 kg/m3 for the fine and coarse aggregates lie within the limit of 1200-1750 kg/m3 according to the ASTM C33 (2003) [46] specifications. The water absorption of the aggregates was 3.95 and 3.27, which conformed to ASTM C33 (2003) [46] by not exceeding 4. The specific gravities of 2.10 and 2.79 for BA and BLA were 33% and 11% lower, respectively, than that of cement. Furthermore, the bulk densities of baggage ash and bamboo leaf ash were about 32% and 33% relative to that of OPC. The lower bulk density and specific gravity values of BA and BLA could lead to a reduction in the density of the concrete. The maximum particle sizes of OPC, BA, and BLA were found to be 150 µm, 90 µm, and 150 µm, respectively. Table 4 shows the chemical composition of BA, BLA, and cement. Fig. (4). Distribution in terms of particle size of BLA in conformity with the ASTM D7928 benchmark. The chemical analysis of BA, BLA, and OPC is shown in Table 4; XRD was used to perform the test. According to the results illustrated in Table 4, the percentage of CaO found in BLA [55] was higher than that in BA; however, the CaO content of cement was larger than that of BA and BLA. CaO is the main driver behind the formation of tricalcium silicate and dicalcium silicate, which react with water to produce calcium-silicate-hydrate (C-S-H), and it is believed to be the dominant driver of strength development. The combined percentage of SiO2 + Al2O3 + Fe2O3 for BA and BLA was found to be 77.87% and 73.38%, respectively, which exceeds the 70% minimum requirement of ASTM C618 (2008) [56] for a pozzolan. Likewise, the LOI of BLA and BA was greater than that of cement, but fell within the 12% limit indicated by ASTM C618 (2008) [56]. The physical and chemical features of the superplasticizer adopted in this study are shown in Table 5.
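For readers unfamiliar with the fineness modulus quoted above, the following small Python sketch shows the standard calculation: the sum of cumulative percentages retained on the standard sieve series divided by 100. The sieve data below are illustrative placeholders, not the study's measurements.

# Cumulative % retained on the standard sieves (illustrative values only).
cumulative_retained = {4.75: 2, 2.36: 15, 1.18: 35, 0.600: 55, 0.300: 78, 0.150: 92}
fineness_modulus = sum(cumulative_retained.values()) / 100
print(round(fineness_modulus, 2))  # about 2.77 for these illustrative values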
Artificial Neural Network's Framework Scheme
Three-layer perceptron models were constructed using R (nnet package). A total of eleven artificial neural networks were formulated using 214 data sets obtained from the 27 laboratory concrete mixtures made. Each neural network consisted of an input layer, a hidden layer, and an output layer. A total of 14 input variables (cement, water, BLA, BA, water-cement ratio (w/c), fine aggregates, coarse aggregates, aggregate size, curing age, specific gravity of coarse aggregate, specific gravity of fine aggregate, fineness modulus, water absorption of coarse aggregate, water absorption of fine aggregate) were considered for the analysis. However, curing age, BLA, BA, and cement were the most viable variables employed for the analysis. The compressive strength was the dependent variable. The independent and dependent characteristics of the artificial neural network model are displayed in Table 6 and Eq. (1).
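The original analysis was carried out in R with the nnet package; as an editorial illustration only, a rough Python counterpart of such a single-hidden-layer network with the four retained inputs (curing age, BLA, BA, cement) and five hidden neurons might look as follows. The synthetic data, scikit-learn estimator, solver, and 67%/33% split settings are assumptions for the sketch, not the authors' code.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic placeholders: columns are curing age, BLA %, BA %, cement (kg/m3).
X = rng.uniform([28, 0, 0, 300], [90, 20, 20, 450], size=(214, 4))
y = 20 + 0.1 * X[:, 0] - 0.2 * X[:, 1] - 0.1 * X[:, 2] + 0.03 * X[:, 3]

# 67% training, 33% testing/validation, as described in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(5,), activation="identity",
                     solver="lbfgs", max_iter=5000, random_state=1)
model.fit(X_tr, y_tr)
print("R^2 on the test/validation split:", round(model.score(X_te, y_te), 3))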
Training of Neural Network
The 11 formulated neural networks were trained using the hidden neurons and an activation function. The activation function used for training was the linear (identity) activation function, f(x) = x. (2)
Backpropagation was utilised for network training. The training process was performed in such a way that each layer was trained sequentially through forward and backward estimations. The backpropagation algorithm consists of two major stages [31].
Model Efficiency Assessment
The artificial neural network models were trained using a training data set. Testing and validation were performed on the testing and validation data set, and the degree of precision was computed through the forecast errors accrued from the testing and validation sets. The forecasting accuracy of the ANN was assessed using error measures (RMSE, MSE, MAE, and MAPE), where a is the observed value, p is the forecasted value, and n represents the number of concrete samples.
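The error measures named above have standard definitions; the short Python sketch below computes them from observed (a) and forecasted (p) values. The sample numbers are hypothetical.

import math

def metrics(observed, forecast):
    n = len(observed)
    errs = [a - p for a, p in zip(observed, forecast)]
    mse = sum(e * e for e in errs) / n
    return {
        "MSE": mse,
        "RMSE": math.sqrt(mse),
        "MAE": sum(abs(e) for e in errs) / n,
        "MAPE": 100.0 * sum(abs(e / a) for e, a in zip(errs, observed)) / n,
    }

# Hypothetical observed vs. predicted compressive strengths (MPa).
print(metrics([25.1, 30.4, 33.0], [24.6, 31.0, 32.1]))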
Feature Relative Importance
Feature importance analysis is performed in order to evaluate the relative impact/significance of the input variables on the output of the artificial neural network's compressive strength forecasting model. Determining the effect of input parameters on the output is regarded as complex in ANN [28,57]. In this research, the relative feature importance was computed on the testing and validation data set using R statistical software.
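The authors computed relative importance in R; one common model-agnostic way to approximate such an analysis is permutation importance. The Python sketch below is only an analogous illustration on synthetic data, not the procedure used in the study.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic placeholders: curing age, BLA %, BA %, cement (kg/m3).
X = rng.uniform([28, 0, 0, 300], [90, 20, 20, 450], size=(214, 4))
y = 20 + 0.1 * X[:, 0] - 0.2 * X[:, 1] - 0.1 * X[:, 2] + 0.03 * X[:, 3]

model = MLPRegressor(hidden_layer_sizes=(5,), activation="identity",
                     solver="lbfgs", max_iter=5000, random_state=1).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)

# Rank the features by the mean drop in score when each is permuted.
for name, score in sorted(zip(["curing age", "BLA", "BA", "cement"],
                              imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:10s} {score:.3f}")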
ARTIFICIAL NEURAL NETWORK ARCHITECTURE
The initial essential phase for establishing an artificial neural network model is ascertaining the ANN architecture. Meanwhile, there is no ground rule for choosing the best artificial neural network architecture, which still requires further study [26,58]. Thus, following numerous attempts performed, ANN architecture depicted in Fig. (5) was chosen.
The analysis was performed using R (nnet package), with four input variables, five neurons, one hidden layer, and one output adopted. I1-I4 denote the input variables of the model, B1-B2 stand for the bias or threshold terms introduced during the learning process, H1-H5 represent the neurons of the hidden layer, and C.S. is the output layer, which represents compressive strength. A total of 214 data sets were used. The data sets were divided into two sets: 67% of the data (144 records) was used as the training set, while 33% (70 records) was used for testing and validation. The training, testing, and validation data were randomly selected from the 28-, 56-, and 90-day compressive strength values.
Forecasted Compressive Strength
The compressive strength results produced by the model for training, testing, and validation were in close range of the actual (experimental) values. The visual plots given in Figs. (6 and 7) are the in-sample and out-of-sample comparisons of the actual and predicted values. The effectiveness of the model was assessed through the model precision measures shown in Tables 7 and 8. Fig. (6). ANN architecture. Fig. (7). Plot of actual and predicted results (training). The ANN model's coefficient of determination between predicted and actual values was found to be 0.961 for training, while for testing and validation it was 0.905. This shows that there is a strong correlation between the actual and predicted values in both cases. The ANN model forecasted the compressive concrete strength with an RMSE of 0.802 MPa for training, while the RMSE for testing and validation was 1.380 MPa, indicating that the differences between the forecasted and actual compressive strength values were negligible in both cases. The MSE was 0.644 MPa for training and 1.905 MPa for testing and validation, respectively; this signifies that the model over-forecasted the compressive strength on average by 0.644 MPa and 1.905 MPa. The MAE for the training was estimated to be 0.588 MPa, which stands for the average difference between the predicted and actual values. The MAE for testing and validation was found to be 1.050 MPa, indicating that the average difference between the predicted and actual values was minimal. The MAPE for the training indicated that the forecasted compressive strength deviated on average by 2.328% from the actual values, while the MAPE for testing and validation was 3.946%, i.e. the forecasted compressive strength deviated on average by 3.946% from the actual values. The visual plots shown in Figs. (6 and 7) distinctly display that the forecasted compressive strength values were in close conformity with the actual (experimental) values. This supports that the model has the ability to replicate the actual compressive strength results with great precision.
Relative Feature Importance Analysis
The input features contain the information relevant to the expected outputs. Nevertheless, some features might be insignificant, which in turn leads to stagnation in the model. Insignificant features make the training algorithm noisy: they do not contribute extra information to the model and can result in deterioration of the learning algorithm's performance. The relative importance of each input feature was evaluated on the testing and validation data set using R statistical software, as shown in Fig. (8). The results of the feature relative importance analysis indicated that curing age is the most significant feature, followed by bamboo leaf ash (BLA), baggage ash (BA), and cement.
MULTIPLE LINEAR REGRESSION
Multiple linear regression (MLR) is a statistical tool which predicts the outcome of a dependent variable using several independent variables; it models the relationship between the independent and dependent variables. The analysis was performed using R statistical software. The MLR model was fitted to several independent variables, namely bamboo leaf ash (BLA), baggage ash (BA), water, cement, coarse aggregate (C.A), fine aggregate (F.A), water-binder ratio (W/B), water-to-solid ratio (W/S), total aggregate-to-binder ratio (TAB), nominal aggregate size (NS), and superplasticizer (SP). Water, C.A, F.A, W/B, TAB, NS, and W/S were, however, found to be insignificant. Therefore, the slump data was fitted to only 5 independent variables, namely cement, BA, BLA, SP1, and SP2. The data was subdivided into two sets: the first 17 values were used as training data and the last 8 values were used as the testing and validation data. The MLR model input and output characteristics are shown in Table 7.
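The MLR fitting step can be sketched in Python with statsmodels, as below. This is an editorial illustration only: the data are random placeholders, the treatment of SP1 as a single numeric column is an assumption, and the original analysis was performed in R.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cement": rng.uniform(300, 450, 25),
    "BA": rng.uniform(0, 20, 25),
    "BLA": rng.uniform(0, 20, 25),
    "SP1": rng.uniform(0, 1, 25),
})
# Hypothetical slump values built from the predictors plus noise.
df["slump"] = (0.2 * df["cement"] + 1.5 * df["BA"] + 1.2 * df["BLA"]
               + 30 * df["SP1"] + rng.normal(0, 3, 25))

X = sm.add_constant(df[["cement", "BA", "BLA", "SP1"]])
fit = sm.OLS(df["slump"], X).fit()
print(fit.params.round(3))
print("Adj. R^2:", round(fit.rsquared_adj, 3), "AIC:", round(fit.aic, 2))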
Best Fitted Slump Model
Out of the 5 independent variables chosen, the combination of variables that gave the smallest AIC (Akaike information criterion) was selected as the best model.
Stepwise regression was utilised to find the combination of independent variables that gave the smallest AIC. The results of the stepwise regressions are given in Eqs. (7 and 8). From the analysis, the best model was found to be the one with cement, BA, BLA, and SP1 as independent variables. The model was fitted and the results are summarized in Table 8. Thus, the fitted model is given in Eq. (9). From the P-values in Table 10, it can be seen that all the parameters are significant at the 5% level of significance. The results also show that all the independent variables are positively correlated with slump; hence, an increase in any of the independent variables would increase the slump value.
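As an illustration of AIC-based variable selection (not the authors' R procedure), the following Python sketch exhaustively compares predictor subsets and keeps the one with the lowest AIC. The data and column names are illustrative placeholders.

import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.uniform(0, 1, (25, 5)),
                  columns=["cement", "BA", "BLA", "SP1", "SP2"])
df["slump"] = 40 + 20 * df["cement"] + 10 * df["BA"] + rng.normal(0, 1, 25)

best = None
for k in range(1, 6):
    for combo in itertools.combinations(df.columns[:5], k):
        aic = sm.OLS(df["slump"], sm.add_constant(df[list(combo)])).fit().aic
        if best is None or aic < best[0]:
            best = (aic, combo)
print("Lowest AIC:", round(best[0], 2), "with predictors:", best[1])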
Predicted Slump
The slump results produced by the model for training, testing, and validation were very close to the actual (experimental) values, based on the visual plots given in Figs. (9 and 10). The accuracy of the model was determined by the model precision measures illustrated in Table 9. The MLR model's multiple R2 and adjusted R2 for the predicted versus actual values were 0.9336 and 0.9115, which indicates a strong correlation between the actual and predicted values (Table 11). The residual error for the model was estimated to be 3.075 at 12 degrees of freedom, which connotes that the difference between the observed and forecasted values was not very significant. The MLR model predicted the slump with an RMSE value of 6.634 mm for training, while the RMSE for testing and validation was 8.373 mm. This means that the differences between the predicted and actual slump values were very small. The MAPE for the training implies that the forecasted slump values deviated on average by 3.633% from the actual values; the MAPE for testing and validation was 8.034%, i.e. the predicted slump deviated on average by 8.034% from the actual data. Furthermore, the visual plots illustrated in Figs. (10 and 11) clearly show that the predicted slump values were very close to the experimental (actual) values. Therefore, the model has the ability to reproduce the actual slump results with moderate accuracy. The coefficient of determination of the ANN model was 0.961 for training and 0.905 for testing and validation. The curing age, BLA, BA, and cement contributed immensely towards the ANN model output. The MLR model forecasted the slump with predictive error (RMSE) values of 6.634 mm for training and 8.374 mm for testing and validation. The predicted slump deviated (MAPE) on average by 3.633% for training and 8.034% for testing and validation. The residual error was 3.075 at 12 degrees of freedom. The multiple R2 and adjusted R2 were 0.9336 and 0.9115, respectively. The P-value was found to be 5.639e-07, which is less than 0.05; hence, the MLR model is a good fit at the 5% level of significance. The superplasticizer (SP1), BLA, BA, and cement contributed greatly towards the MLR model output.
PLOT OF ACTUAL AGAINST PREDICTED (IN SAMPLE)
ANN was able to accurately forecast the 28-, 56-, and 90-day compressive strengths. MLR was able to accurately forecast the slump of the concrete. Our results compared favourably with the work of Getahun et al. (2018).
CONSENT FOR PUBLICATION
Not applicable.
AVAILABILITY OF DATA AND MATERIALS
The data that support the findings of this study are available from the corresponding author, [O.O., S.M.S., J.M., A.A.A., C.K.], upon request.
FUNDING
This study was supported by The African Union Commission (AUC) and the Japan International Cooperation Agency (JICA) (Grant no. 10.13039 /501100004532). | 6,028.4 | 2019-09-30T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Materials Science"
] |
Study of a Forwarding Chain in the Category of Topological Spaces between T0 and T2 with respect to One Point Compactification Operator
In the following text, we want to study the behavior of the one point compactification operator in the chain Ξ := {Metrizable, Normal, T2, KC, SC, US, T1, TD, TUD, T0, Top} of subcategories of the category of topological spaces, Top (where we denote the subcategory of Top containing all topological spaces with property P simply by P). Actually we want to know, for P ∈ Ξ and X ∈ P, to which elements of Ξ the one point compactification of the topological space X belongs. Finally we find out that the chain {Metrizable, T2, KC, SC, US, T1, TD, TUD, T0, Top} is a forwarding chain with respect to the one point compactification operator.
Introduction
The concept of forwarding and backwarding chains in a category with respect to a given operator has been introduced for the first time in [1] by the first author. The matter has been motivated by the following sentences in [1]: "In many problems, mathematicians search for theorems with weaker conditions or for examples with stronger conditions. In other words they work in a subcategory D of a mathematical category, namely, C, and they want to change the domain of their activity (theorem, counterexample, etc.) to another subcategory of C like K such that K ⊆ D or D ⊆ K according to their need." Most of us have the memory of a theorem and the following question of our professors: "Is the theorem valid with weaker conditions for hypothesis or stronger conditions for result?" The concept of forwarding, backwarding, or stationary chains of subcategories of a category C tries to describe this phenomenon.
In this text, Top denotes the category of topological spaces. Whenever P is a topological property, we denote the subcategory of Top containing all the topological spaces with property P simply by P. For example, we denote the category of all metrizable spaces by Metrizable.
We want to study the chain {Metrizable, Normal, T2, KC, SC, US, T1, TD, TUD, T0, Top} of subcategories of Top from the point of view of the concept of forwarding, backwarding, and stationary chains with respect to the one point compactification (Alexandroff compactification) operator.
Remark 1.
Suppose ≤ is a partial order on P. We call C ⊆ P (i) a chain, if for all a, b ∈ C, we have a ≤ b ∨ b ≤ a; (ii) cofinal, if for all a ∈ P, there exists b ∈ C such that a ≤ b.
In the following text, by a chain of subcategories of category C, we mean a chain under "⊆" relation (of subclasses of C). We recall that if M is a chain of subcategories of category C such that ⋃ M is closed under (multivalued) operator , then we call M where, for multivalued function , by ( ) ∈ 3 \ 2 , we mean that at least one of the values of ( ) belongs to 3 \ 2 ; (iii) a backwarding chain with respect to ; if for all ∈ M, we have ( ) ⊆ ; (iv) a full-backwarding chain with respect to ; if it is a backwarding chain with respect to and for any distinct 1 , 2 , 3 ∈ M, we have 1 ⊆ 2 ⊆ 3 ⇒ (∃ ∈ 3 \ 2 ( ) ∈ 2 \ 1 ) , (2) where, for multivalued function , by ( ) ∈ 2 \ 1 , we mean that at least one of the values of ( ) belongs to 2 \ 1 ; (v) a stationary chain with respect to if it is both forwarding and backwarding chains with respect to .
Basic properties of forwarding, backwarding, fullforwarding, full-backwarding, and stationary chains with respect to given operators have been studied in [1]. We refer the interested reader to [2] for standard concepts of the Category Theory.
We recall that by N we mean the set of all natural numbers {1, 2, . . .}; also ω = {0, 1, 2, . . .} is the least infinite ordinal (cardinal) number and Ω is the least infinite uncountable ordinal number. Here ZFC and GCH (generalized continuum hypothesis) are assumed (note: by GCH, for an infinite cardinal number κ there is no cardinal number λ with κ < λ < 2^κ, i.e., κ+ = 2^κ).
We call a collection F of subsets of X a filter over X if ∅ ∉ F; for all A, B ∈ F we have A ∩ B ∈ F; and for all A ∈ F and B ⊆ X with A ⊆ B we have B ∈ F. If F is a maximal filter over X (under the ⊆ relation), then we call it an ultrafilter over X. If for all A ∈ F we have card(A) = card(X), then we call F a uniform ultrafilter over X.
We end this section by the following two examples.
Basic Definitions in Separation Axioms
In this section we bring our basic definitions in Top.
Convention 1.
Henceforth in the topological space suppose ∞ ∉ . So (see [3,4] Regarding [5], we have T 2 ⊆ KC ⊆ SC ⊆ US ⊆ T 1 . Also by [6] we have In this section, we want to study the operator on the above chain. However, it has been proved in [1, Lemma 3.1 and Corollary 3.2] that the chain T 1 ⊆ T D ⊆ T UD ⊆ T 0 is stationary with respect to the operator ; therefore, the main interest is on Metrizable ⊆ Normal ⊆ T 2 ⊆ KC ⊆ SC ⊆ US ⊆ T 1 .
Note 1. A topological space is KC if and only if { ⊆ :
is an open subset of } ∪ { ∪ {∞} : ⊆ and \ is a compact subset of } is a topological basis on . (3) ( ) is T 2 if and only if is T 2 and locally compact [4]; thus ( ) is T 2 if and only if it is normal.
(5) If ( ) is KC, then is KC too (hint: if is a compact subset of , then is a compact subset of ( ) by (2). If ( ) is KC, then is a closed subset of ( ), and again by (2), is a closed subset of , so is KC).
(6) A T 2 space is a -space if it is either first countable or locally compact so every metrizable space is -space [3,7]. For topological spaces , , by ⊔ , we mean topological disjoint union of and . (iv) Consider ∞ ∈ . In this case, 1 := ∩ 1 is an open subset of 1 by Remark 3(4). Using the compactness of 1 , 1 \ 1 is a closed compact subset of 1 . Also is an open subset of ( 2 ) containing ∞; thus 2 \ 2 is a closed compact subset of 2 . Since 1 \ 1 and 2 \ 2 are two closed compact subsets of Hence is an open subset of ( 1 ⊔ 2 ).
Lemma 5. If is a closed subset of , then ( ) is an embedding of ( ).
Proof. If is compact, then ( ) = and by Remark 3(4) we are done. If is not compact, \ is an open subset of and ( ); thus ∪ {∞} is a closed compact subset of ( ). Suppose ⊆ ∪ {∞}; we prove that is a closed subset of * := ∪{∞} as a subspace of ( ) if and only if is a closed subset of ( ) = ∪ {∞} as one point compactification of . However, we mention that ∪ {∞} in both topologies is an embedding of by Remark 3(4).
First, suppose is a closed subset of * . Using the following two cases, is a closed subset of ( ) too.
(i) Consider ∞ ∈ . In this case, := * \ = \ is an open subset of ; therefore it is an open subset of ( ), so = ( ) \ is a closed subset of ( ).
(ii) Consider ∞ ∉ . In this case, is a closed subset of ( ) since it is a closed subset of * and * is closed in ( ). Therefore, Conversely, suppose is a closed subset of ( ). Using the following two cases, is a closed subset of * too. (iv) Consider ∞ ∉ . In this case, is a closed compact subset of ( ) with ∞ ∉ ; thus is a closed compact subset of . Hence, is a closed compact subset of , and = ( ) \ is an open subset of ( ). Therefore, ∩ * = * \ is an open subset of * , so is a closed subset of * .
We have the following.
(1) has a formal proof, so we deal with (2). If ∈ C and is a closed subspace of , then ∈ C. Suppose , ∈ C; , are closed subspaces of with ∩ = { } and ∪ = . We prove ∈ C. (i) Consider C = Metrizable. If , are metrizable subspaces of , then there exist metrics 1 , 2 , respectively, on , such that 1 , 2 ≤ 1, the metric topology induced from 1 on is subspace topology on induced from , and the metric topology induced from 2 on is subspace topology on induced from . Define : × → [0, +∞) with Then the metric topology induced from on coincides with 's original topology.
(ii) Consider C = T 2 . Suppose , are Hausdorff subspaces of and , ∈ are two distinct points of . Consider the following cases: 1 , are disjoint open subsets of with ∈ 1 and ∈ .
Using the above cases, is Hausdorff.
(iii) Consider C = Normal. If , are normal subspaces of , then , are Hausdorff and, using the case "C = T 2 ", is Hausdorff. Now suppose , are disjoint closed subsets of ; also we may suppose ∉ . (iv) Consider C = KC. Suppose , are KC and is a compact subset of . Since , are closed, ∩ , ∩ are compact too. Since ∩ is a compact subset of and is KC, ∩ is a closed subset of . Since ∩ is a closed subset of and is a closed subset of , ∩ is closed subset of . Similarly, ∩ is a closed subset of . Thus = ( ∩ ) ∪ ( ∩ ) is a closed subset of and is KC.
(v) Consider C = SC. Suppose , are SC and ( : ∈ ) is a sequence in converging to . Using the following cases, { : ∈ } ∪ { } is a closed subset of .
, { } is a closed subset of (resp. ) since (resp. ) is SC and in particular We may suppose ∈ . Since is T 1 , { } is a closed subset of . Since is a closed subset of and { } is a closed subset of , { } is a closed subset of .
(viii) Use similar methods for the rest of the cases of C. Proof. Let be a noncompact SC space. Suppose ( : ∈ ) is a sequence in ( ) = ∪ {∞} converging to , ∈ ( ). We have the following cases.
(i) Consider , ∈ . In this case, is an open neighborhood of , in ( ); hence there exists ∈ such that ∈ for all ≥ . Therefore, ( : ≥ ) is a converging sequence in to , . Since is SC, is US and = .
(ii) Consider ∈ , = ∞. In this case, there exists ∈ such that ∈ for all ≥ . Therefore, ( : ≥ ) is a converging sequence in to . Thus ∉ for all ≥ and by converging ( : ∈ ) to . So this case does not occur.
Using the above cases, we have = , and ( ) is US.
The Main Table
See Figure 1; then we have Table 1 which we prove in this Section and where: The mark "√" indicates that in the corresponding case, there exists ∈ such that ( ) ∈ , and the mark "-" indicates that in the corresponding case for all ∈ we have ( ) ∉ . Let By Remark 3 (7) in Table 1, the mark "-" for cases in which " ∈ E, ∈ F" or " ∈ F, ∈ E" is evident. However, it has been proved in [1, Lemma 3.1 and Corollary 3.2] that the chain T 1 ⊆ T D ⊆ T UD ⊆ T 0 is stationary with respect to the operator , so corresponding marks of the cases in which , ∈ F are obtained. Thus it remains to discuss cases in which , ∈ E.
Since a subspace of a metrizable (resp. T 2 , SC, and US) space is metrizable (resp. T 2 , SC, and US), using Remark 3 (4) and (5), if ( ) is, respectively, metrizable, T 2 , KC, SC, or US, then so is the underlying space. Hence we obtain "-" for the following cases too (choose the two classes from the same rows of Table 2; see also Figure 1).
Consider the set of all rational numbers as a subspace of the Euclidean space R. Since it is not locally compact, by Remark 3(3), ( ) is not Hausdorff. Suppose is a compact subset of ( ); in order to show that ( ) is KC, we show is a closed subset of ( ). We have the following two cases. Case 1. If ∞ ∉ , then is a compact subset of ; since is a metric space, is a closed subset of too. Therefore, is an open subset of ( ). Hence, is a closed subset of ( ). is an open subset of ( ). Since ∉ , ⊆ ⋃{ : ≥ 0}. Using the compactness of , there exists ≥ 1 such that ..., which is a contradiction. Thus \ is an open subset of , and = ( ) \ ( \ ) is a closed subset of ( ).
Table 2 (fragments): columns Metrizable, C 1 , C 2 , ..., C 11 ; C 9 := T UD spaces; reason for omitting a case: if ( ) is metrizable, then is metrizable too; if ( ) is SC, then is SC too; if ( ) is US, then is US too.
Using the above three cases, { : ∈ } ∪ { } is a closed subset of ( ) and we are done.
Second Row. Here we have = 2 \ 1 and the following cases for .
(iv) Consider On the other hand, using the definition of one point compactification, any subset of ( ) containing ∞ is a compact subset of ( ). Therefore, ( ) \ { } is a compact subset of ( ), but it is not a closed subset of ( ); thus ( ) is not KC. We claim that ( ) is SC. Suppose ( : ∈ ) is a sequence in ( ) converging to . We have the following cases. it is an open subset of ( ). Thus { : ∈ }∪{ } is a closed subset of ( ).
Since 0 ∈ \ {0} and by the above two cases, there is not any sequence in \ {0} converging to 0; is not metrizable. Thus ∈ 2 \ 1 . Now pay attention to the following claims. Third Row. Here we have = 3 \ 2 and the following cases for .
(v) Consider = 6 \ 5 . Consider as disjoint union of 1 and 2 , where we have the following.
Fourth Row. Here we have = 4 \ 3 and the following cases for .
(ii) Consider = 5 \ 4 . Consider uncountable set with countable complement topology { ⊆ : = ⌀∨( \ is countable)} [4, counterexamples 20 and 21]. Since every two nonempty open subsets of have nonempty intersection, is not Hausdorff. It is clear that is T 1 . Moreover, is a compact subset of if and only if is finite. Therefore, every compact subset of is closed and is KC. So ∈ 4 \ 3 . Now suppose is an uncountable subset of with uncountable complement. So is not closed. For all compact subset of , the set ∩ is finite and closed. Therefore, is not a -space. Using Remark 3(2), ( ) is not KC. Using Remark 3(1), ( ) is US; we claim that ( ) is SC. Suppose ( : ∈ ) is a sequence in ( ), converging to ∈ ( ). We have the following cases. Table 1 regarding case " = 2 \ 1 , = 6 \ 5 ").
Some Observations in
(i) The collection {T 2 , KC, SC, T 1 } is a full-forwarding chain with respect to . In other words, Table 3 is valid.
In Table 3, the mark "√" indicates that in the corresponding case there exists ∈ such that ( ) ∈ , and the mark "-" indicates that in the corresponding case for all ∈ we have ( ) ∉ .
(ii) The collection {Metrizable, T 2 , KC, SC, US, T 1 , T D , T UD , T 0 , Top} is a forwarding chain with respect to . The collection {T 1 , T D , T UD , T 0 , Top} is a stationary chain with respect to .
"Mathematics"
] |
Peripheral and central levels of kynurenic acid in bipolar disorder subjects and healthy controls
Metabolites of the kynurenine pathway of tryptophan degradation, in particular, the N-Methyl-d-aspartic acid receptor antagonist kynurenic acid (KYNA), are increasingly recognized as primary pathophysiological promoters in several psychiatric diseases. Studies analyzing central KYNA levels from subjects with psychotic disorders have reported increased levels. However, sample sizes are limited and in contrast many larger studies examining this compound in blood from psychotic patients commonly report a decrease. A major question is to what extent peripheral KYNA levels reflect brain KYNA levels under physiological as well as pathophysiological conditions. Here we measured KYNA in plasma from a total of 277 subjects with detailed phenotypic data, including 163 BD subjects and 114 matched healthy controls (HCs), using an HPLC system. Among them, 94 BD subjects and 113 HCs also had CSF KYNA concentrations analyzed. We observe a selective increase of CSF KYNA in BD subjects with previous psychotic episodes although this group did not display altered plasma KYNA levels. In contrast, BD subjects with ongoing depressive symptoms displayed a tendency to decreased plasma KYNA concentrations but unchanged CSF KYNA levels. Sex and age displayed specific effects on KYNA concentrations depending on if measured centrally or in the periphery. These findings implicate brain-specific regulation of KYNA under physiological as well as under pathophysiological conditions and strengthen our previous observation of CSF KYNA as a biomarker in BD. In summary, biomarker and drug discovery studies should include central KYNA measurements for a more reliable estimation of brain KYNA levels.
Introduction
Increased concentration of kynurenic acid (KYNA), a neuroactive end-product of the kynurenine pathway of tryptophan degradation 1 , has repeatedly been observed in cerebrospinal fluid (CSF) and postmortem brain tissue of subjects with schizophrenia or bipolar disorder [2][3][4][5][6][7][8][9] . The kynurenine pathway is critically controlled by the immune system, and in vitro experiments have revealed that interleukin (IL)-1β as well as IL-6 induces the biosynthesis of KYNA in human astrocytes 6,10 , i.e., the main producer of KYNA in the brain 1 . Notably, increased CSF levels of IL-1β and IL-6 have also been observed in schizophrenia 10 , as well as IL-1β has in the CSF from bipolar disorder (BD) subjects 11,12 . Although based on limited sample sizes, CSF levels of KYNA in BD subjects, as well as IL-1β levels, have been reported to be selectively increased in subjects with a history of psychotic episodes 5,6,9 , and linked to persistent set-shifting impairment 6 . In line with these clinical associations, rodent studies have confirmed that KYNA causes disruption of pre-pulse inhibition 13 , as well as behavioral responses analogous to impaired set-shifting ability 14 . Although KYNA has established antagonistic actions on both the glycine co-agonist site of the N-methyl-D-aspartic acid receptor (NMDAR) and the cholinergic α7 nicotinic receptor 1 , the exact molecular mechanisms in vivo remain elusive.
While KYNA crosses the blood-brain barrier (BBB) poorly 15 , rodent studies suggest that approximately 60% of brain kynurenine, the precursor to KYNA, comes from peripheral sources 1 . However, to what extent peripheral sources of kynurenine influence brain KYNA levels, under physiological as well as pathophysiological conditions in humans, remains controversial 16 , as well-powered studies including intra-individual analyses of peripheral and central kynurenine metabolites are still lacking. BBB permeability for a certain compound may differ between species 17 , and depends on potential pathophysiological leakage of the BBB. Assuming that the brain KYNA pool is connected to peripheral kynurenine levels, KYNA in the brain may still be subject to specific regulatory mechanisms that limit the use of peripheral KYNA measurements as a predictor of central KYNA levels. Studies of peripheral KYNA concentrations in BD and schizophrenia have reported conflicting results [18][19][20] . In BD, this could be a result of mood state at the time of sampling since increased central KYNA levels have so far exclusively been observed in euthymic patients [4][5][6] and decreased peripheral levels in inpatients with an ongoing mood episode 19 , or euthymic patients pooled with mildly depressed patients 21 . Importantly, if blood levels of KYNA do reflect central levels, cumbersome CSF sampling could be avoided in clinical as well as in drug discovery studies.
Using a large cohort of systematically phenotyped BD type I/II subjects and matched healthy controls (HCs), randomly sampled from the normal population, we here examine intra-relationships and inter-relationships between peripheral and central KYNA levels as well as subgroup analyses of defined BD groups.
Materials and methods
The study was approved by the institutional review board of the Karolinska Institutet. Informed consent was obtained from all included subjects.
Study population
All patient data were collected from Swedish BD participants in a long-term follow-up program at a tertiary outpatient unit in Stockholm. A subset of the bipolar subjects (n = 76) and HCs (n = 46) has previously been analyzed with detection of increased CSF KYNA levels in psychotic bipolar subjects 6 , and other labs have performed plasma studies in bipolar disorder subjects with reports of altered KYNA levels using smaller sample sizes [18][19][20] . Thus, the current sample sizes were deemed to ensure adequate power. The diagnostic procedure has been outlined in detail previously 22,23 . Briefly, assessments are based on all available sources of information, including patient records, and interviews with next of kin when feasible. A consensus panel of experienced board-certified psychiatrists specialized in bipolar disorder made a "best estimate" diagnosis. Enrolled study subjects are at least 18 years of age and meet the Diagnostic and Statistical Manual of Mental Disorders 4th Edition (DSM-IV) criteria for bipolar disorder I or II. Further, the Montgomery-Åsberg Depression Rating Scale (MADRS) 24 and the Young Mania Rating Scale (YMRS) 25 were used to assess the extent of ongoing depressive and manic symptoms in patients. The baseline clinical diagnostic instrument for BD was the Affective Disorder Evaluation (ADE) 26 , translated and modified to suit Swedish conditions after permission from the originator Gary S. Sachs. Co-morbid psychiatric disorders were collected using the Mini International Neuropsychiatric Interview (M.I.N.I.) 27 . Experienced psychologists performed the neuropsychological assessments, using the Delis-Kaplan Executive Function System (D-KEFS). To obtain a sensitive measure of set-shifting, we employed the Trail Making Test (TMT) and extracted the total time taken for Combined Letter/Number Switching minus the Combined Number Sequencing + Letter Sequencing, that is, the "switching cost". Raw contrast scores were transformed into age-corrected scaled contrast scores based on normative data in which an achievement score of 10 represents the mean in each age group 28 .
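As a concrete illustration of the contrast score described above, here is a minimal Python sketch; the function name and the example timings are hypothetical and only show the arithmetic, while the age-corrected scaling itself comes from the D-KEFS normative tables.

def tmt_switching_cost(switching_s, number_seq_s, letter_seq_s):
    # Raw "switching cost": Combined Letter/Number Switching time minus the
    # sum of Combined Number Sequencing and Letter Sequencing times (seconds).
    return switching_s - (number_seq_s + letter_seq_s)

raw_contrast = tmt_switching_cost(95.0, 28.0, 31.0)  # hypothetical timings -> 36.0 s
# raw_contrast is then mapped to an age-corrected scaled score with mean 10
# in each age group, using the published normative data.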
The general population HCs were randomly selected from the same catchment area by Statistics Sweden and underwent a similar clinical evaluation as the bipolar subjects.
Collection of cerebrospinal fluid and blood
Subjects fasted overnight before the standardized CSF and blood collection that occurred between 9.00 and 10.00 a.m. For CSF collection, a non-cutting spinal needle was inserted into the L3/L4 or L4/L5 interspace and a total volume of 12 mL of the CSF was collected, gently inverted to avoid gradient effects, and divided into 1.0-1.6 mL aliquots that were stored at −80°C pending analysis.
Analysis of kynurenic acid
CSF and plasma were analyzed for KYNA content using a High Performance Liquid Chromatography (HPLC) system with fluorescence detection as previously described 6 . To precipitate residual protein, samples were centrifuged at 20,000×g (5 min for CSF and 3 min for plasma). Supernatants from plasma samples were then also diluted with an equal volume of perchloric acid (0.4 M) and the centrifugation procedure was repeated and followed by addition of 70% perchloric acid at a volume equal to 1/7 of the new supernatant. Prior to analysis, the plasma samples were then centrifuged a third time. This resulted in a dilution factor of 2.29, which was later multiplied onto the measured concentrations to obtain the corresponding plasma concentrations. After thawing and centrifugation, the samples were then manually injected into the HPLC system (50 μL CSF or 20 μL plasma). The HPLC system included a dual-piston high-liquid delivery pump (Bischoff, Leonberg, Germany), a ReproSil-Pur C18 column (4 × 100 mm, Dr. Maisch GmbH, Ammerbuch, Germany) and a fluorescence detector (Jasco Ltd, Hachioji city, Japan) with an excitation wavelength of 344 nm and an emission wavelength of 398 nm (18 nm bandwidth). A mobile phase of 50 mM sodium acetate (pH 6.2, adjusted with acetic acid) and 7.0% acetonitrile was pumped through the HPLC column at a flow rate of 0.5 mL/min. To enable fluorescent detection, zinc acetate (0.5 M) was delivered after the column by a piston pump P-500 (Pharmacia) at a flow rate of 10 mL/h. Signals from the fluorescence detector were transferred to a computer for analysis with Datalys Azur (http://datalys.net). The retention time of KYNA was about 7-8 min. The sensitivity of the system was verified by analysis of standard mixtures of KYNA with concentrations from 3.75 to 30 nM, which resulted in a linear standard plot. To verify the reliability of this method, samples were analyzed in duplicate, and the mean intra-individual variation was below 5%.
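The dilution factor of 2.29 quoted above follows directly from the two perchloric acid additions; the short Python check below reproduces the arithmetic (volumes are in arbitrary units and the variable names are illustrative).

plasma = 1.0                                   # one volume of plasma supernatant
after_first = plasma + plasma                  # plus an equal volume of 0.4 M perchloric acid
after_second = after_first + after_first / 7   # plus 70% perchloric acid, 1/7 of the new supernatant
dilution_factor = after_second / plasma
print(round(dilution_factor, 2))               # 2.29, multiplied onto the measured concentrations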
Statistics
Correlation or partial correlation analyses were performed using Spearman's correlation coefficients. Group analyses were performed using Mann-Whitney U-tests, logistic regression analyses, or Chi-square tests as indicated after confirming the assumptions of each test. All reported p-values are two sided. All analyses used the statistical software program R (version 2.7.0; https://www. r-project.org) and graphs were produced using Graph-Pad prism 6.0 (http://www.graphpad.com/) or the R package "plotly" (https://plotly-book.cpsievert.me).
Peripheral and central KYNA levels in bipolar disorder and healthy controls
Peripheral KYNA levels in patients with BD and HCs were measured in plasma from a total of 277 subjects (114 HCs and 163 subjects with either BD type I [n = 93] or II [n = 70]). Among the 116 males (42%) and 161 females the median age was 34 years (IQR = 17.5). More detailed demographics and clinical characteristics are given in Table S1. Female HCs, as well as female BD subjects, displayed lower plasma concentrations of KYNA than male HCs and male BD subjects (β = −0.04; P = 0.004, and β = −0.05; P = 5 × 10 −4 , respectively; logistic regression with male/female as dependent variable and age as covariate; Fig. 1a, b). Among HCs we observed a positive correlation between age and plasma levels of KYNA, while plasma KYNA levels were unaffected by age in BD (r s = 0.19; P = 0.041, and r s = 0.04; P = 0.60, respectively; partial Spearman correlation adjusted for sex; Fig. 1c, d).
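The comparisons above regress sex on KYNA with age as a covariate rather than comparing group means directly; a minimal Python sketch of that kind of model is shown below. The data frame, file name, and column names are assumptions for illustration only (the study itself used R), not the authors' code.

import pandas as pd
import statsmodels.api as sm

# Assumed layout: one row per subject with columns
#   'sex' (0 = male, 1 = female), 'age' (years), 'plasma_kyna' (nM)
df = pd.read_csv("kyna_subjects.csv")            # hypothetical file

X = sm.add_constant(df[["plasma_kyna", "age"]])  # KYNA effect adjusted for age
fit = sm.Logit(df["sex"], X).fit()
print(fit.params["plasma_kyna"], fit.pvalues["plasma_kyna"])  # beta and P-value for KYNA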
Plasma and CSF KYNA measurements displayed no correlation in the whole sample (r s = 0.04; P = 0.58; adjusted for age, sex, and case status), or in subgroup analyses stratifying on case status, or case status and sex (Fig. 2c, d, and Fig. S5). Excluding bipolar subjects with comorbid somatic illness again had no major impact on the relationship between CSF and plasma KYNA levels in BD subjects (data not shown), and analyses using age strata did not reveal any age dependent effects on a possible correlation between plasma and CSF measurements (20-39 years: r s = 0.07; P = 0.42, 40-59 years: r s = 0.21; P = 0.16, and 60-79 years: r s = 0.12; P = 0.60; adjusted for sex and case status).
In bipolar subjects, we also studied dose-dependent effects of ongoing medications on plasma as well as CSF KYNA levels but without evidence of any significant effects on either plasma or CSF measurements of KYNA (Table S2).
Peripheral and central KYNA levels in bipolar disorder: lifetime psychotic symptoms and cognitive functioning
In the bipolar cohort with available plasma KYNA concentrations 82 subjects had experienced psychotic episodes (defined as occurrence of hallucinations and/or delusions during a mood episode), i.e., 50%. As we previously have observed increased CSF KYNA concentrations in euthymic BD subjects with a history of psychosis 5,6 , we now compared plasma KYNA levels between this BD subgroup and HCs. Unlike previous results for CSF KYNA levels we observed no difference in mean KYNA plasma levels between this BD subgroup and HCs (adjusted for sex and age; Fig. 3a). Excluding subjects with comorbid somatic illness, or stratifying on sex, revealed no significant differences between patients and controls ( Fig. S6 and S7).
Among subjects with CSF data, now using a substantially larger sample than in previous studies 5,6 , we observed significantly increased CSF KYNA levels in the 47 BD subjects (50%) with a history of psychosis (Fig. 3b). Similar results were obtained when excluding subjects with comorbid somatic illness (Fig. S8), and sex-stratified analyses suggested effects of similar magnitude in females and males (Fig. S9).
Given our previous data showing increased CSF KYNA levels in euthymic bipolar subjects with set-shifting impairments 6 , we now also, in a subset of the sample (n = 97) with available cognitive evaluations, studied plasma KYNA levels in relation to set-shifting performance, again using the Trail Making Test (TMT; Switching vs. Combined Number Letter Sequencing). In contrast to previous analyses of CSF KYNA concentrations, mean plasma KYNA concentrations did not differ between bipolar subjects who scored below the mean standard score of 10 and bipolar subjects scoring ≥ 10, and plasma KYNA displayed no correlation with TMT scores when treated as a continuous variable (Fig. S10).
Fig. 1 legend: The effect of sex and age on peripheral and central kynurenic acid (KYNA) levels in healthy controls (HCs) and bipolar disorder (BD) subjects. (a) Plasma KYNA levels in female and male HCs. (b) Plasma KYNA levels in female and male bipolar subjects. (c) The effect of age on plasma KYNA levels in HCs. (d) The effect of age on plasma KYNA levels in bipolar subjects. (e) CSF KYNA levels in female and male HCs. (f) CSF KYNA levels in female and male bipolar subjects. (g) The effect of age on CSF KYNA levels in HCs. (h) The effect of age on CSF KYNA levels in bipolar subjects. Group comparisons in a, b, e, and f were performed using logistic regression models (age as covariate) with sex as dependent variable (0 = male, 1 = female). Correlation coefficients in c, d, g, and h are Spearman's (r s ) and derived from partial correlation analyses with sex as a covariate. All reported P-values are two-sided.
Peripheral and central KYNA levels in bipolar disorder: type I and II disorder
Among the cohort with available plasma, 93 subjects (57%) had a BD I diagnosis and 70 subjects a type II diagnosis. 75 of the bipolar I subjects (81%) had a history of psychosis while 7 (10%) of the BD II subjects had a history of psychosis (depressive episode with psychotic features). As with contrasting BD subjects based on psychosis (Fig. 3), we observed no difference in KYNA plasma levels between BD I and BD II subjects after adjustment for sex and age (Fig. S11).
In the cohort with CSF samples, 55 subjects (59%) were diagnosed with bipolar I (42 (76%) of these subjects had a history of psychosis), and 39 subjects had a bipolar II diagnosis (13% of these subjects had a history of psychosis). Unlike analyses dividing the sample into psychotic and non-psychotic subjects, we observed no significant association between CSF KYNA levels and subtype of bipolar disorder (Fig S11).
Peripheral and central KYNA levels in bipolar disorder: current depressive symptoms
To assess current depressive symptoms in BD subjects, we used the total score on MADRS at the time of lumbar puncture and blood collection. Subjects with a total score < 5 (49%) were judged to be in complete remission regarding depression 29 . Excluding these subjects, the remaining BD subjects displayed significantly decreased plasma KYNA levels compared to HCs (adjusting for age and sex), although not reaching significance compared to BD subjects with MADRS score < 5 (Fig. 4a), or against HCs when excluding subjects with comorbid somatic illness (β = −0.02; P = 0.069, n = 171).
Fig. 2 legend (partial): (c) Correlations between plasma and CSF KYNA levels in HCs. (d) Correlations between plasma and CSF KYNA levels in BD I/II subjects. Group comparisons were performed using logistic regression models (age and sex as covariates) with group as dependent variable (0 = HC, 1 = BD). Reported correlation coefficients (panels c and d) are Spearman's (r s ) and derived from partial correlation analyses with age and sex as covariates, or partial correlation analyses with age as a covariate (stratified on sex). See also Fig. S1 and S2. All reported P-values are two-sided.
CSF KYNA levels were not significantly decreased in BD subjects with MADRS score > 4 (59%) compared to HCs, although a significant decrease was observed when comparing these subjects to the rest of the BD group (Fig. 4b).
However, fewer subjects with remaining depressive symptoms had a history of psychosis (37 vs. 59%, respectively) explaining the observed decrease between the two BD groups divided on MADRS score (see figure legend for Fig. 4b). In agreement with our analyses using the complete BD group, we observed no correlation between plasma and CSF KYNA levels in the subgroup with remaining depressive symptoms (r s = 0.09; P = 0.67; adjusted for age and sex).
Peripheral and central KYNA levels in bipolar disorder: lifetime suicide attempt or self-harm
In the plasma cohort, 101 subjects had a lifetime history of suicide attempt or self-harm. This group did not display plasma KYNA levels that differed from HCs or bipolar subjects without such a history (Fig. 5a).
Thirty-eight subjects with CSF data had a lifetime history of suicide attempt or self-harm. These subjects displayed a slightly higher mean CSF KYNA compared to HCs (Fig. 5b) although the comparison to the smaller BD group without history of suicide attempt or self-harm did not reach significance (Fig. 5b). Distribution of lifetime psychotic symptoms was also similar between BD subjects with or without history of suicide attempt or self-harm (Fisher's exact test; P = 1.0).
Discussion
Our findings provide further evidence of a robust association between CSF levels of KYNA and psychotic BD, in agreement with our previous reports using smaller sample sizes 4,5 . In contrast, plasma KYNA concentrations were unchanged in BD subjects with a history of psychosis compared to HCs. These findings are in line with previous rodent studies supporting brain-specific regulation of KYNA dependent on factors such as glial energy metabolism and neuronal signals [27][28][29][30] , and suggest "on site" brain pathology as an important factor causing increased central KYNA levels in psychotic BD. Notably, the symptomatology associated with increased CSF KYNA levels, i.e., delusions and hallucinations, comprises prominent features of schizophrenia as well, and previous studies report similar increases in central KYNA levels in schizophrenia 2,6-8 . Important for future studies, our association analyses also revealed that age and sex influenced KYNA concentrations differently depending on whether they were measured in the periphery or in the CNS. While females displayed lower plasma levels, sex did not influence central KYNA levels. Again, this implies brain-specific regulation of KYNA, although the specific mechanisms remain elusive. Age displayed an even more complex influence on KYNA levels, with healthy controls displaying an age-dependent increase in plasma and an even more pronounced effect in CSF, while bipolar subjects only displayed increasing CSF levels by age. It remains uncertain if the disease-specific lack of an age effect in plasma is due to confounding or is part of the pathophysiology, as the mechanisms by which ageing affects the biosynthesis of the kynurenine metabolites are still largely unknown 30 .
Fig. 3 legend (partial): In sex and age adjusted analyses, BD subjects with a history of psychosis (median = 1.82 nM, IQR = 1.44) displayed increased CSF KYNA levels compared to HCs (P = 0.0031) as well as compared to BD subjects without a history of psychotic episodes (median = 1.57 nM, IQR = 1.02; BD psychosis vs. BD no psychosis; P = 0.0069). HCs and BD subjects without a history of psychotic episodes displayed similar CSF KYNA concentrations (P = 0.55). All reported P-values are two-sided and derived from logistic regression models with sex and age as covariates.
We also observed decreased levels of plasma and CSF KYNA in BD with ongoing depressive symptoms in relation to HC, although not reaching significance when compared to non-depressed BD subjects. This supports the findings by Wurfel et al. and Birner et al. 16,18 . However, we observed no correlation between CSF and plasma KYNA concentrations in the whole sample or in the subset of BD subjects displaying ongoing depressive symptoms. By contrast, we also observed a significant increase in CSF KYNA levels in subjects with a lifetime history of suicidal behavior compared to HCs.
Several limitations in the current study should be highlighted. First, with our limited number of subjects displaying more severe depressive symptoms, it cannot be excluded that BD subjects with ongoing moderate and severe depression display decreased plasma KYNA levels directly representing a pathophysiological mechanism driven by KYNA in the brain. Further, our collected clinical data did not separate suicide attempt and self-harm. Thus, it is possible that analyses restricted to suicide attempters would have given us other results. Finally, central and peripheral measurements of a larger set of metabolites in the kynurenine pathway are also warranted to provide a more comprehensive understanding of brain kynurenines under physiological as well as pathophysiological conditions. Regarding kynurenine, it is however worth mentioning that in human plasma kynurenine and KYNA concentrations display a strong correlation 31 .
Fig. 4 legend (partial): (a) ... stratified on total MADRS score. In age and sex adjusted analyses, BD subjects with MADRS score > 4 (median = 33.0 nM, IQR = 14.2) displayed significantly decreased plasma KYNA levels compared to HCs (P = 0.046), although the decrease did not reach significance when comparing with BD subjects with MADRS score < 5 (median = 38.0 nM, IQR = 23.6; P = 0.070). (b) Cerebrospinal fluid (CSF) KYNA levels in HCs and BD subjects stratified on total MADRS score. Median (IQR) CSF KYNA levels in HCs: 1.56 nM (1.03), BD subjects with MADRS score < 5: 1.91 nM (1.54), and BD subjects with MADRS score > 4: 1.58 nM (0.93). In age and sex adjusted analyses, no significant difference in CSF KYNA levels could be observed between BD subjects with MADRS score > 4 and HCs (P = 0.34), while this group displayed decreased CSF KYNA levels compared to the remaining BD subjects without ongoing depressive symptoms (P = 0.041). However, as BD subjects displaying depressive symptoms more seldom had a history of psychosis (36 vs. 57%), the difference in CSF KYNA concentration between the two BD groups defined by MADRS score was largely explained by the distribution of psychosis (β = −0.62; P = 0.041 unadjusted vs. β = −0.54; P = 0.081 adjusted for psychosis). All P-values are two-sided and derived from logistic regression models with age and sex as covariates (and, in the comparison of CSF KYNA levels between BD subjects, also controlling for a history of psychosis, see above).
In summary, our data suggest that (1) KYNA in CSF, but not in plasma, represents a biomarker for lifetime psychotic episodes in BD, and (2) peripheral KYNA levels do not predict central KYNA levels in healthy volunteers or in BD subjects. Thus, studies incorporating KYNA levels should include central rather than peripheral measurements to allow meaningful conclusions, and must correct for age effects and sex effects as well as current depressive symptoms. Finally, brain-specific pathological changes of the kynurenine metabolism in psychotic disorders may offer specific and novel drug targets.
Fig. 5 legend (partial): (a) ... stratified on a history of suicide attempt/self-harm (median for BD no suicide attempt/self-harm = 37.2, IQR = 21.2; median for BD suicide attempt/self-harm = 32.6, IQR = 24.5). Given the high number of females among BD subjects with a history of suicide attempt/self-harm (70.5 vs. 55% in BD subjects with no such history), there were no significant differences in KYNA plasma concentrations for this group compared to HCs or BD without a history of suicide attempt/self-harm when adjusting for age and sex (BD suicide attempt/self-harm vs. BD no suicide attempt/self-harm: P = 0.57). (b) Cerebrospinal fluid (CSF) KYNA levels in HCs (median = 1.56 nM, IQR = 1.03) and BD I/II subjects (median = 1.66 nM, IQR = 1.16). BD subjects with a history of suicidal behavior (median = 1.63 nM, IQR = 1.67) displayed significantly increased CSF KYNA concentrations compared to HCs, although not reaching significance compared to the smaller BD group without a history of suicidal behavior (median = 1.76 nM, IQR = 1.00; P = 0.33). All reported P-values are two-sided and derived from logistic regression models with sex and age as covariates.
"Biology",
"Psychology"
] |
Impact of Correlated Noises on Additive Dynamical Systems
The impact of correlated noises on dynamical systems is investigated by considering Fokker-Planck type equations under the fractional white noise measure, which correspond to stochastic differential equations driven by fractional Brownian motions with the Hurst parameter H > 1/2. Firstly, by constructing the fractional white noise framework, a small noise limit theorem is proved, which provides an estimate for the deviation of random solution orbits from the corresponding deterministic orbits. Secondly, numerical experiments are conducted to examine the probability density evolutions of two special dynamical systems, as the Hurst parameter H varies. Certain behaviors of the probability density functions are observed.
Introduction
Dynamical systems arising from financial, biological, physical, or geophysical sciences are often subject to random influences. These random influences may be modeled by various stochastic processes, such as Brownian motions, Lévy motions, or fractional Brownian motions. A fractional Brownian motion B H (t), t ≥ 0, in a probability space (Ω, F, P), with Hurst parameter H ∈ (0, 1), is a continuous-time Gaussian process with mean zero, starting at zero, and having the correlation function E[B H (t) B H (s)] = (1/2)(t^(2H) + s^(2H) − |t − s|^(2H)). In particular, when H = 1/2 it is just the standard Brownian motion. The time derivative of a fractional Brownian motion, dB H (t)/dt, as a generalized stochastic process, has nonvanishing correlation [1,2] and is thus called a correlated noise or colored noise. In the special case H = 1/2, this noise is uncorrelated and thus is called white noise [3]. Correlated noises appear in the modeling of some geophysical systems [4][5][6].
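To make this covariance structure concrete, the following Python sketch samples a fractional Brownian motion path by Cholesky factorization of the covariance matrix; it is an illustration only (exact but O(n^3)), not code from the paper, and the grid and parameter values are arbitrary.

import numpy as np

def fbm_cholesky(n_steps, T, hurst, rng=None):
    # Exact sampling of fractional Brownian motion on a uniform grid of (0, T]
    # via the Cholesky factor of Cov(B_t, B_s) = 0.5*(t^(2H) + s^(2H) - |t-s|^(2H)).
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(T / n_steps, T, n_steps)
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt**(2 * hurst) + ss**(2 * hurst) - np.abs(tt - ss)**(2 * hurst))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n_steps)
    # Prepend the starting value B_0 = 0.
    return np.concatenate(([0.0], t)), np.concatenate(([0.0], path))

times, B = fbm_cholesky(n_steps=500, T=1.0, hurst=0.7)   # one path with H = 0.7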
For systematic discussions about fractional Brownian motions and their stochastic calculus, we refer to [7][8][9][10][11][12] and the references therein. Fractional Brownian motions have stationary increments and are Hölder continuous with exponent less than H, but they are neither semimartingales nor Markov processes. They possess some other significant properties such as long range dependence and self-similarity, which result in wide applications in fields such as hydrology, telecommunications, and mathematical finance. During the last decade or so, several reasonable stochastic integration theories with respect to fractional Brownian motions were developed. See, for example, Lin [13], Duncan et al. [14], Decreusefond and Üstunel [15], and the references mentioned therein. Stochastic differential equations (SDEs) driven by fractional Brownian motions have also been attracting more attention recently [1,10,[16][17][18].
In this paper, we consider a scalar stochastic differential equation (SDE) of the form (2), where the drift is a Lipschitz continuous function on the real line, the noise intensity is positive, the driving process is a fractional Brownian motion with Hurst parameter H > 1/2, and the initial state is assumed to be independent of the natural filtration of the fractional Brownian motion. Since this system has a unique solution [17,19], we intend to understand the impact of correlated noises on this additive dynamical system as the Hurst parameter H varies.
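For concreteness, a hedged reconstruction of an SDE of the form described, in standard notation (the symbols X, b, ε, and B^H are naming assumptions made here rather than the paper's own), reads

dX_t = b(X_t) dt + ε dB_t^H,   X_0 = x_0,   t ≥ 0,

with b Lipschitz continuous on R, noise intensity ε > 0, and B^H a fractional Brownian motion with Hurst parameter H > 1/2.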
This paper is organized as follows. In Section 2, we set up a fractional white noise analysis framework, which represents correlated noises as functionals of standard white noises, and prove a small noise limit theorem which implies the stochastic continuity of the system with respect to the noise intensity. In Section 3, we show that the probability density function of the solution satisfies a Fokker-Planck type partial differential equation with respect to the fractional white noise measure. Then, we implement numerical experiments to examine the probability density evolutions as the Hurst parameter varies. For one linear system and one double-well system, certain behaviors of the probability density functions are observed.
Analysis Framework and Small Noise Limit
2.1. Analysis Framework. The white noise framework is a natural and flexible approach to stochastic analysis, and fractional white noise analysis treats correlated noises as functionals of standard white noise. This approach has been shown to be very effective in investigating distributions and path properties of stochastic processes. In the following, we describe the fractional white noise analysis framework.
Let S(R) be the Schwartz space of rapidly decreasing smooth functions on R and S′(R) the space of tempered distributions, and denote by ⟨⋅, ⋅⟩ the dual pairing on S′(R) × S(R). For 1/2 < H < 1, define the operator Γ by [...], where [...]. Then, for test functions in S(R), [...]. At present we can only prove that the linear map Γ is continuous from S(R) to L 2 (R). Since Γ is not continuous from S(R) to S(R) (it is not even a proper operator on S(R)), we could not obtain a dual map from S′(R) to S′(R) by duality. By using Itô's regularization theorem, we construct a unique S′(R)-valued random variable T : S′(R) → S′(R) such that [...], which extends the map Γ * in view of (5).
Theorem 2. Let the image measure of the white noise measure induced by the map T be considered. Then, for any test function in S(R), the distribution of ⟨⋅, ⟩ under the image measure is the same as that of ⟨⋅, Γ ⟩ under the white noise measure. In particular, the corresponding process is a fractional Brownian motion with Hurst constant H. Moreover, [...], where () ≡ ⟨, 1 [0,] ⟩ is the standard Brownian motion.
(See the proof in [20].) Let {F , ∈ + } and {F , ∈ + } be the filtrations generated by the fractional Brownian motion and the standard Brownian motion, respectively. Then, in view of (8), we have [...], where ( * )() := (). So, the filtered probability space (S′(), F , ) is an extension of (S′(), F , ). Thus the stochastic analysis with respect to the fractional white noise measure can be reduced to the standard white noise framework naturally. Therefore, we choose the standard white noise measure as the reference measure rather than the fractional one, and this treatment is more useful and more convenient for applications. For more details, we refer to [20] and the references therein.
2.2. Small Noise Limit. Now, we consider the SDE (2) in the fractional white noise framework and investigate the impact of noise on the deterministic dynamical system, which is solvable on any finite time interval [0, T]. We have the following result. [...] Hence, for any small enough positive number, we have the desired estimate, which completes the proof as the noise intensity tends to 0. In the final step, we have used the self-similarity of the fractional Brownian motion. This theorem provides an estimate for the deviation of random solution orbits from the corresponding deterministic orbits. Note that the expectation E in the above theorem corresponds to the fractional white noise measure. Henceforth, we take all expectations E with respect to the fractional white noise measure (i.e., for simplicity, we omit the subscript mentioned above).
Probability Density Evolution
For an SDE such as (2), the probability density function of the solution carries significant dynamical information. This is considered here by examining a fractional Fokker-Planck type equation. The key step in the derivation of this Fokker-Planck type equation is the application of Itô's formula for SDEs driven by fractional Brownian motion, under the fractional white noise analysis framework [1,10,16,20,21]. We sketch the derivation here. By Itô's formula [10], Theorem 6.3.6, for a second order differentiable function h(⋅) with compact support, we obtain an expression for h of the solution. Taking expectations on both sides, letting p = p(x, t) be the probability density function of the solution of the system (2), and recalling that the expectation of h of the solution equals ∫ R h(x) p(x, t) dx, integration by parts together with p = 0 at x = ±∞ yields the Fokker-Planck type equation (17). In the following, we numerically simulate this partial differential equation for two special cases, a double-well drift and a linear drift, with finite noise intensity (for simplicity we take the intensity equal to 1). Through these two special cases, we expect to illustrate the impact of correlated noises on additive dynamical systems as the Hurst parameter H varies.
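For concreteness, a hedged reconstruction of the Fokker-Planck type equation in question, using the standard form obtained from the fractional Itô formula for additive noise with H > 1/2 and the notation of the sketch after (2) (with p(x, t) the probability density of the solution), is

∂p/∂t = −∂/∂x [ b(x) p(x, t) ] + ε² H t^(2H−1) ∂²p/∂x²,   p(x, 0) = p₀(x).

For H = 1/2 the time-dependent diffusion coefficient reduces to ε²/2 and the classical Fokker-Planck equation is recovered.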
Here, we apply the popular Crank-Nicolson scheme in Matlab to (17) with zero boundary values; the grid size is 0.05, the total number of grid points is 801, and the time step size is 0.01. The initial probability density function is taken to be standard normal; that is, p(x, 0) = (1/√(2π)) exp(−x²/2).
Since the resulting linear system is tridiagonal, we can solve it efficiently using the Thomas algorithm. Moreover, this method also applies smoothly to other initial conditions and other drift coefficients, for instance, a uniform initial distribution or the drift () = − 2 .
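Below is a minimal Python sketch of such a Crank-Nicolson/Thomas solver for the equation reconstructed above. The authors used Matlab; here the spatial domain [−20, 20] (which yields the stated 801 grid points at spacing 0.05), the final time, and the double-well drift b(x) = x − x³ are assumptions made for illustration.

import numpy as np

def thomas(a, b, c, d):
    # Solve a tridiagonal system with sub-diagonal a, diagonal b,
    # super-diagonal c and right-hand side d (Thomas algorithm).
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_fpe(drift, hurst, eps=1.0, x_min=-20.0, x_max=20.0,
                       dx=0.05, dt=0.01, t_end=2.0):
    # Crank-Nicolson for p_t = -(drift(x) p)_x + eps^2 * H * t^(2H-1) * p_xx
    # with zero boundary values and a standard normal initial density.
    x = np.arange(x_min, x_max + dx / 2, dx)        # 801 points for dx = 0.05
    p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)      # p(x, 0)
    b = drift(x)
    for k in range(int(round(t_end / dt))):
        t_mid = (k + 0.5) * dt                      # diffusion coefficient at mid-step
        D = eps**2 * hurst * t_mid**(2 * hurst - 1)
        r, s = D * dt / (2 * dx**2), dt / (4 * dx)
        lower = np.zeros_like(x); diag = np.ones_like(x)
        upper = np.zeros_like(x); rhs = np.zeros_like(x)   # boundary rows keep p = 0
        for i in range(1, len(x) - 1):
            lower[i] = -r - s * b[i - 1]
            diag[i] = 1.0 + 2.0 * r
            upper[i] = -r + s * b[i + 1]
            rhs[i] = (p[i] * (1.0 - 2.0 * r)
                      + p[i - 1] * (r + s * b[i - 1])
                      + p[i + 1] * (r - s * b[i + 1]))
        p = thomas(lower, diag, upper, rhs)
        p[0] = p[-1] = 0.0
    return x, p

x, p = crank_nicolson_fpe(lambda y: y - y**3, hurst=0.75)   # assumed double-well drift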
3.1. Numerical Simulation: the Double-Well Drift. We first simulate the dynamical evolution of the probability density function p(x, t) for the stochastic differential equation (2) with the double-well drift, for various values of H > 1/2. The double-well dynamics is a rich and typical model for understanding numerous physical or geophysical systems [22,23], focusing on the maxima (minima), symmetry, kurtosis, and so forth.
As observed in Figure 1, the probability density function p(x, t) evolves from a unimodal (one peak) shape to a flat top and then to a bimodal (two peaks) shape for the various Hurst parameter values H, as time increases. Simultaneously, the effect of the Hurst parameter on the dynamics is significant. As the value of H increases, the plateau of p(x, t) becomes lower once time exceeds t = 0.5.
3.2. Numerical Simulation: the Linear Drift. Now, for comparison, we investigate the dynamical evolution of the probability density function p(x, t) for the stochastic differential equation (2) with the linear drift, which is a rich toy example for understanding dynamical systems.
Also, as observed in Figure 2, at given time instants the peak of p(x, t) becomes higher as H increases. This illustrates the significant and distinguishing influence of the Hurst parameter on the dynamics as time evolves. A larger H makes the values of the solution of (2) more concentrated, but the long-time behavior shows that the values of the solution become more scattered.
"Mathematics",
"Physics"
] |
The Pathology of Severe Dengue in Multiple Organs of Human Fatal Cases: Histopathology, Ultrastructure and Virus Replication
Dengue is a public health problem, with several gaps in the understanding of its pathogenesis. Studies based on human fatal cases are extremely important and may clarify some of these gaps. In this work, we analyzed lesions in different organs of four dengue fatal cases that occurred in Brazil. Tissues were prepared for visualization by optical and electron microscopy, with quantification of damage. As expected, we observed in all studied organs lesions characteristic of severe dengue, such as hemorrhage and edema, although other injuries were also detected. Cases presented necrotic areas in the liver and diffuse macro- and microsteatosis, which were more accentuated in case 2, who also had obesity. The lung was the most affected organ, with hyaline membrane formation associated with mononuclear infiltrates in the patients with pre-existing diseases such as diabetes and obesity (cases 1 and 2, respectively). These cases also had extensive acute tubular necrosis in the kidney. Infection induced destruction of cardiac fibers in most cases, with absence of nuclei and loss of striations, suggesting myocarditis. Spleens revealed significant destruction of the germinal centers and atrophy of lymphoid follicles, which may be associated with a decrease in T cell number. Circulatory disturbances were further evidenced by the presence of megakaryocytes in alveolar spaces, thrombus formation in glomerular capillaries and loss of endothelium in several tissues. Besides the histopathological and ultrastructural observations, virus replication was investigated by detection of dengue antigens, especially the non-structural 3 protein (NS3), and confirmed by the presence of the virus RNA negative strand (in situ hybridization), with a second staining for the identification of some cell types. Results showed that dengue had a broader tropism than previously described in the literature, replicating in hepatocytes, type II pneumocytes and cardiac fibers, as well as in resident and circulating monocytes/macrophages and endothelial cells.
Introduction
Dengue infection is the most prevalent arthropod-borne viral disease in subtropical and tropical regions of the world. The dengue virus (DENV) belongs to the Flaviviridae family and consists of four antigenically distinct serotypes (DENV1-4). The infection can result in a broad spectrum of effects, including acute febrile illness, the dengue fever (DF), which may progress to severe forms such as dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS), with changes in hemostasis and vascular permeability [1,2]. Several studies indicate that the occurrence of secondary infection with a heterologous serotype increase the risk of developing DHF [3,4]. Therefore, in areas where multiples DENV serotypes circulate, such as in Brazil, sequential infections may occur, which lead to the increase in the number of severe dengue cases [5][6][7]. Moreover, other risk factors, such as ethnicity, age, co-morbidities, genetic predisposition and immune conditions of the patient, as well as genetic variations of viral strains, may also contribute for the occurrence of DHF [8][9][10]. Severe dengue disease is characterized by circulatory damages, associated in most cases with hepatic dysfunctions [11][12][13]. These injuries may be a direct consequence of the virus presence and/or resulted by an exacerbation of the immune response after infection [14,15].
Overall, in vivo studies regarding DENV infection and its pathogenesis are limited by the lack of an experimental animal model able to mimic the full spectrum of the disease as observed in humans [16]. Therefore, there are still several gaps in understanding the pathogenesis of dengue. On the other hand, autopsy studies based on human dengue cases are extremely important and may clarified some of these gaps, pointing out for example how and which tissues are affected during the disease. Most of histopathological reports with dengue human fatal cases indicate that the liver, spleen and lymph nodes are target organs of infection [17][18][19]. Besides the occurrence of hemorrhage and edema in the liver of dengue fatal cases, histopathological analysis also reported damages caused by metabolic alterations and/or inflammatory reactions, such as the presence of steatosis, areas with infiltrated cells and necrosis and hyperplasia and destruction of Kupffer cells [17,18,[20][21][22]. Additionally, other studies showed several lesions in spleen tissues, such as interstitial edema, vascular congestion, splenic rupture and bleeding [17,18,23]. However, recently, atypical clinical manifestations of dengue have been reported, involving the kidney, lung, heart and central nervous system, which were also corroborated by histopathological findings revealing several areas with hemorrhage, edema and inflammatory infiltrates in these organs [24][25][26][27][28].
In addition to histopathological analysis, very little is known about the ultrastructural aspects of affected organs in dengue human cases. In fact, as far as we know, the only study described in the literature with electron microscopy evaluation is from Limonta et al. [29], showing the presence of virus-like particles in neuroglia, hepatocytes and alveolar and splenic macrophages in one human fatal case.
Therefore, in the present work we characterized histopathological and ultrastructural aspects of the liver, lung, heart, kidney and spleen of four DENV-3 fatal cases that occurred in Rio de Janeiro, Brazil, during the 2002 dengue outbreak. These patients developed DHF with several dysfunctions in all the studied organs. Virus tropism and replication were also evaluated, by immunohistochemical and in situ hybridization analysis, revealing the presence of the virus in different cells, such as resident and circulating monocytes/macrophages, endothelial cells, hepatocytes, type II pneumocytes and cardiac fibers. Results showed that this virus had a broader tropism than previously described in the literature, leading to drastic lesions in several organs.
Ethical Procedures
All procedures performed during this work were approved by the Ethics Committee of the Oswaldo Cruz Foundation/ FIOCRUZ, with the number 434/07 for studies with fatal dengue cases and controls. The institutional review board or ethics committee waived the need for consent.
Human Fatal Cases
The human tissues analyzed in this study (liver, lung, heart, spleen and kidney) were obtained from four dengue fatal cases in 2002 in Rio de Janeiro. During the summer of 2002, Rio de Janeiro had a large epidemic of DF and DHF and 99% of cases were caused by DENV-3 [30]. Our cases were patients admitted in the Hospital São Vicente de Paulo (case 1), Hospital Universitário Clementino Fraga Filho (case 2) and the Hospital Miguel-Couto (cases 3 and 4). The available information of clinical and necropsy data concerning the four fatal cases is listed below and described in Table 1. All patients died with a clinical diagnosis of dengue hemorrhagic fever, with classical symptoms (fever, myalgia and hemorrhagic manifestations). Necropsy revealed that cases presented extensive areas of hemorrhage and edema in all analyzed organs. The dengue diagnosis was confirmed by positive serum IgM antibodies. The four negative controls, from both sexes and ranging from 40 to 60 year old, were non-dengue or any other infectious disease case.
Case Presentation
Case 1. Clinical data: A 63-year-old male patient with diabetes mellitus, taking acetylsalicylic acid (100 mg) and Daonil, developed a sudden onset of headache, myalgia, anorexia and abdominal pain. Four days later he presented diarrhea, hemoptysis, leukopenia, thrombocytopenia (platelets 79,000/mm3) and hemoconcentration (hematocrit 59%). Biochemical parameters evaluated in the serum: aspartate aminotransferase 75 IU/L; alanine aminotransferase 21 IU/L; creatine phosphokinase 126 IU/L; creatinine 5.0 mg/dL; urea 57 mg/dL; lipase 41 IU/L. Ultrasonography revealed peri-hepatic and peri-pancreatic collections, confirmed by computed tomography of the abdomen, which also revealed enlargement of the heart, discrete opacities in the left lung with marginal pleural reaction at the base of this hemithorax, and a distended gall bladder. Furthermore, he was submitted to an acute abdomen workup that revealed no gaseous distention. Physical examination on admission revealed a blood pressure of 140/80 mmHg and dehydration with cutaneous rash and petechiae. The patient presented progressive clinical worsening, evolving to shock with severe pulmonary congestion, orotracheal intubation and respiratory support, with the use of dopamine and dobutamine. On the next day, hemodynamic instability and oliguria were observed, with consequent administration of norepinephrine in increasing doses, culminating in refractory shock and death. The patient died with a clinical diagnosis of dengue hemorrhagic fever, severe ischemic cardiomyopathy and pancreatitis.
Necropsy data: Multiple purpuric and petechial hemorrhagic lesions were evident, especially around needle puncture sites. Hemorrhages were also present in serous cavities, including hemorrhagic pleuritis (suffusion of the visceral pleura) and fibrinous pericarditis. The liver and spleen were grossly congested with multiple hemorrhagic foci, and a perforated duodenal ulcer was described.
Case 2. Clinical data: A 21-year-old female patient who experienced fever, myalgia and headache for 8 days and symptoms progressed to metrorrhagia, nausea, vomiting and diarrhea. Before hospitalization, she was examined in another health service with a hypothesized dengue diagnosis due to severe leukopenia and thrombocytopenia (platelet 10.000/mm3). The patient was admitted in the intensive care unit (ICU) at the Hospital Universitário Clementino Fraga Filho with respiratory failure, followed by the evolution of multiple organ failure and refractory shock. Biochemical parameters evaluated in the serum: aspartate aminotransferase 149 IU/L; alanine aminotransferase 66 IU/L; glucose 158 mg/dL; creatinine 1.10 mg/dL; urea 9.0 mg/dL.
Case 3. Clinical data: A 41-year-old female patient reporting fever for 2 days, weakness, fainting, sweating, yellow discharge, epigastric and abdominal pain. Laboratory workup demonstrated hematocrit of 48% and leukocytosis. Ultrasound showed fluid in the abdominal cavity. Patient died with a clinical diagnosis of dengue hemorrhagic fever causing an acute pulmonary edema (causa mortis).
Necropsy data: The autopsy revealed a presence of tracheal hyperemia, externally reddish wall of the duodenum, esophageal mucosa with irregular dark wall, pulmonary edema, bilateral pleural effusion, hypertrophic cardiomyopathy and yellowish brown myocardium, mild retroperitoneal hemorrhage, visceral polycongestion, ascites, yellowish hepatic parenchyma and spleen with diffluent parenchyma. Case 4. Clinical data: A 61-year-old female hospitalized with suspected dengue symptoms (fever, myalgia, vomiting and diarrhea). Biochemical parameters evaluated in the serum: creatinine 1.07 mg/dL; urea 22.9 mg/dL; glucose 104 mg/dL. The patient died by acute pulmonary edema with sudden cardiac arrest.
Histopathological Analyzes
Tissue samples from the human necropsies were fixed in formalin (10%), blocked in paraffin resin, cut at 4 μm, deparaffinized in xylene and rehydrated with alcohol, as described elsewhere [31]. Sections were stained with hematoxylin and eosin for histological examination and visualized in a Nikon ECLIPSE E600 microscope. Hemorrhage and edema, diffuse in the entire organs (liver, lung, heart, kidney and spleen) in all dengue cases, were analyzed quantitatively, using a millimetric ocular lens which allowed us to calculate the percentage of injured tissue (areas of damage divided by the total area of the tissue in each slide). In the liver, steatosis, present in all hepatic lobules, was evaluated and quantified using a scale ranging from 0 to 4, according to the extent of the affected areas. Grade 0 was attributed to less than 1% of total affected hepatocytes, grade 1 between 1% and 25%, grade 2 between 25% and 50%, grade 3 between 50% and 75% and grade 4 more than 75% of affected hepatocytes, as described elsewhere [32]. A total of 40 fields of the dengue cases and controls (10 images for each case representing different lobules) were photographed at a magnification of 400x and counted. Damage was characterized in the three areas of the hepatic acini (periportal, midzonal and central vein area), similarly to Quaresma et al. [33]. All analyses were performed blinded, without prior knowledge of the group.
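A minimal Python sketch of the two quantification rules described above (the grade thresholds follow the stated 0-4 scale; how boundary values and field averaging are handled here is an illustrative assumption).

def steatosis_grade(percent_affected):
    # Map the percentage of affected hepatocytes in a field to the 0-4 scale.
    if percent_affected < 1:
        return 0
    if percent_affected <= 25:
        return 1
    if percent_affected <= 50:
        return 2
    if percent_affected <= 75:
        return 3
    return 4

def damage_percentage(damaged_area, total_area):
    # Percentage of injured tissue in a slide (used for hemorrhage and edema).
    return 100.0 * damaged_area / total_area

fields = [12, 30, 55, 80, 5, 27, 60, 48, 33, 71]   # hypothetical readings, ten fields
grades = [steatosis_grade(p) for p in fields]       # [1, 2, 3, 4, 1, 2, 3, 2, 2, 3]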
Immunohistochemistry Procedure
For immunohistochemical studies, the paraffin-embedded tissues were cut (4 μm), deparaffinized in xylene and rehydrated with alcohol. Antigen retrieval was performed by heating the tissue in the presence of citrate buffer [34]. The tissues were then blocked for endogenous peroxidase with 3% hydrogen peroxide in methanol and rinsed in Tris-HCl (pH 7.4). To reduce non-specific binding, sections were incubated with Protein Blocker solution (Spring Bioscience) for 5 min at room temperature. Samples were then incubated overnight at 4 °C with anti-DENV-3 polyclonal antibodies (raised in Swiss mice inoculated with DENV-3), diluted 1:300 in Tris-HCl, or with antibodies specific to recombinant dengue NS3 protein (expressed in Escherichia coli [35]). The next day, sections were incubated with a rabbit anti-mouse IgG, a secondary antibody-HRP conjugate (Spring Bioscience, CA, USA), for 30 min at room temperature. For the negative control of the immunohistochemistry reaction, samples were incubated only with the secondary horseradish peroxidase-conjugated antibody. The reaction was revealed with diaminobenzidine (Dako, CA, USA) as chromogen and the sections were counterstained with Meyer's hematoxylin (Dako).
Electron Microscopy Assay
Tissue samples were fixed with 2% glutaraldehyde in sodium cacodylate buffer (0.2 M, pH 7.2), dehydrated in acetone, post-fixed with 1% buffered osmium tetroxide, embedded in EPON and polymerized at 60 °C for 3 days. Semi-thin sections (0.5 μm thick) were obtained using a diamond knife (Diatome, Biel, Switzerland) adapted to a Reichert-Jung Ultracut E microtome (Markham, Ontario, Canada). Sections were stained with methylene blue and azure II solution [36]. Ultrathin sections (60-90 nm thick) were stained with uranyl acetate and lead citrate [37] and were observed in a Zeiss EM-900 transmission electron microscope.
In Situ Hybridization
For in situ hybridization, we used one probe (5′-TGACCATCATGGACCTCCA-3′), which anneals in a conserved region inside the NS3 gene on the negative strand of viral RNA and contains six dispersed locked nucleic acid (LNA)-modified bases, with digoxigenin conjugated to the 5′ end. This probe had been tested before in a mouse model infected with DENV, in which a positive reaction was only observed in tissues from virus-infected animals [31]. Paraffin-embedded sections of dengue cases and controls (5 μm) were treated for in situ hybridization as described elsewhere [38]. Briefly, deparaffinized sections were digested with pepsin (1.3 mg/ml) for 30 min, incubated with the probe cocktail at 60 °C for 5 min for denaturation, followed by hybridization at 37 °C overnight. Sections were further washed with 0.2× SSC and 2% bovine serum albumin at 4 °C for 10 min. The probe-target complex was visualized through the action of alkaline phosphatase on the chromogens nitroblue tetrazolium and bromo-chloro-indolyl phosphate. For the negative control, tissue sections of dengue cases were incubated with the solution without probe and revealed as described above.
Double Staining for Viral RNA and Phenotypic Cell Markers
The double staining based protocol, with optimization of pretreatment conditions for detection of the RNA and phenotypic markers using in situ hybridization and immunohistochemistry, respectively, was previously described [39]. Briefly, the dengue probe described above was tagged with 5′ digoxigenin and was locked nucleic acid (LNA) modified (Exiqon). The probe-target complex was visualized using an anti-digoxigenin-alkaline phosphatase conjugate and nitro-blue tetrazolium and 5-bromo-4-chloro-3′-indolyl phosphate as the chromogen. Samples were then submitted to an immunohistochemistry assay for detection of CD68 (identification of macrophages), cytokeratin AE1/AE3 (identification of pneumocytes) or CD31 (identification of endothelial cells); each antibody was provided in a ready-to-use form (Ventana Medical Systems). Data were analyzed with the computer-based Nuance system (Caliper Life Sciences, Hopkinton, MA, USA), which separates the different chromogenic signals, converts them to a fluorescence-based signal and "mixes" them to determine if a given cell is co-expressing one or more targets.
Statistical Analyses
Statistical analyses were performed using GraphPad Prism 5 software (La Jolla, CA, USA), version 4.03. Statistical differences were assessed by the Mann-Whitney U test to evaluate differences in parameters between controls and DENV patients, and values were considered significant at P < 0.05.
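A minimal Python equivalent of the group comparison described above, using SciPy; the arrays are hypothetical damage percentages and only illustrate the call, not the study data.

import numpy as np
from scipy.stats import mannwhitneyu

dengue = np.array([18.0, 25.5, 12.3, 30.1])     # e.g. hemorrhage area (%) per dengue case
control = np.array([2.1, 0.8, 3.4, 1.2])        # matched non-dengue controls

u_stat, p_value = mannwhitneyu(dengue, control, alternative="two-sided")
print(u_stat, p_value, p_value < 0.05)          # two-sided test, alpha = 0.05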
Liver of DENV-3 Fatal Cases: Histopathological and Ultrastructural Aspects
Histopathological analysis of the liver showed circulatory and parenchyma damages in all studied DENV-3 fatal cases, present in all lobules. As expected, in the liver of non-dengue patients we observed a regular structure of hepatic parenchyma, with hepatocytes around the central veins and sinusoid capillaries exhibiting normal endothelial cells (Figs.1a, 1f and 1h). On the other hand, dengue cases presented severe parenchyma and circulatory alterations. The most prominent parenchyma lesion was the presence of abnormal retention of lipids inside hepatocytes with either small fat vacuoles around the nucleus (microsteatosis) or large vacuoles with displacement of the nucleus to the periphery of the cell (macrosteatosis) (Figs. 1b, 1c and 1g). Ultrastructural analysis also revealed numerous large lipid droplets inclusions inside hepatocytes, typical of macrosteatosis ( Fig. 1i) and absent in the control (Fig. 1h). Quantification of steatosis revealed that three dengue cases, mainly case 2 who was obese, showed extensive areas with this damage, while case 4 presented a basal steatosis degree similar to those observed in non-dengue patients (Fig. 1m). Additionally, we observed that steatosis was present in all the three hepatic zones in the dengue cases, although it was more prominent around the portal space (Figs. 1n, 1o and 1p). Besides steatosis, the hepatic parenchyma of dengue cases also presented focal areas of necrosis with the presence of mononuclear infiltrates, mainly in case 2 (Fig. 1b). Cell death was also evidenced by detection of nuclear vacuolar degeneration in semi-thin analysis (Fig. 1g) and the presence of swollen mitochondria in ultra-thin investigations (Figs. 1i and 1j), indicating probably apoptotic processes, which were absent in the control of non-dengue patient (Figs. 1f and 1h).
Hemorrhage and edema were observed in all four dengue cases (Figs. 1d and 1e). Quantification of these damages, detected in all three hepatic zones with similar extensions, revealed the highest percentage in case 1, who was diabetic, followed by case 4 (Figs. 1k and 1l). We also observed numerous hyperplastic macrophages in sinusoidal capillaries (Fig. 1f), and platelets were found in the lumen of these sinusoids with concomitant loss of endothelium (Fig. 1j).
Detection of Dengue Virus in the Liver
Initially, all cases were tested for the presence of DENV-3 antigens in general by immunohistochemistry assay. Virus antigens were detected mainly in hepatocytes (Fig. 2a) and to a lesser extent in Kupffer cells (Fig. 2a) and in the endothelium (Fig. 2b), identified by the morphology of the cells. Virus antigens were observed only in dengue cases, among which cases 2 and 3 presented the highest number of positive cells (Fig. 2c). Virus replication in the liver was then investigated by immunochemistry (Figs. 2a, 2b and 2d). As expected, non-dengue cases did not react with antibodies against the NS3 protein (data not shown). Additionally, replication was also confirmed by in situ hybridization, using a probe that anneals only to the negative strand of the virus RNA, which revealed the presence of this RNA in hepatocytes (Fig. 2g). As negative controls, the same in situ hybridization assay was performed on dengue cases with omission of the probe (Fig. 2e), as well as on non-dengue cases incubated with the probe (Fig. 2f), and neither test presented positive staining.
Lung of DENV-3 Fatal Cases: Histopathological and Ultrastructural Aspects
Histopathological analysis of the lung showed damage in all studied DENV-3 fatal cases. As expected, we observed a regular structure of alveoli, alveolar septa and normal endothelial cells in the lung of non-dengue patients (Figs. 3a, 3h and 3j). Dengue cases presented septum thickening with an increase of cellularity (Fig. 3b), the presence of mononuclear inflammatory infiltrates (Fig. 3c) and hyperplasia of alveolar macrophages (Fig. 3g). Cases 1 and 2, who had co-morbidities (diabetes and obesity), also showed hyaline membrane formation (Figs. 3e and 3f), probably due to dengue shock syndrome, with concomitant hypertrophy of type II pneumocytes (Fig. 3d). Alterations observed in cases 1 and 2 were evidenced in more detail by the ultrastructural analysis (Fig. 3k). Virus-like particles were also detected in the endothelium of the lung in case 1 (Fig. 3l).
All four dengue cases presented diffuse areas of hemorrhage and edema (Figs. 3b and 3c). Quantification of these damages showed larger areas of hemorrhage in cases 1 and 2 (Fig. 3m) and of edema in cases 2 and 3 (Fig. 3n). Isolated megakaryocytes and cell fragments with the appearance of platelets were observed in the alveolar space in semi-thin (Fig. 3i) and ultra-thin (Fig. 3j) analyses.
(Figure 1 legend, continued) ... capillaries with normal structures and (g) one dengue case presenting micro- (Mi) and macrosteatosis (Ma), nuclear degeneration (black star) and numerous macrophages (Mø). (h) Ultrathin section of a non-dengue case exhibiting normal hepatocytes (H) and regular sinusoidal capillaries (SC) with monocytes (Mo) and Kupffer cells (KC); (i and j) dengue cases showing large lipid droplets (LD) in the cytoplasm of hepatocytes, swollen mitochondria (red stars) and platelets (Pt) inside sinusoidal capillaries (SC) with loss of endothelium. Semi-thin and ultrathin sections of liver were stained with methylene blue/azure II solution and uranyl acetate/lead citrate, respectively. Quantitative studies of histological damage were made individually in dengue patients (cases 1-4) and non-dengue patients (controls 1-4), and statistical analyses were performed comparing the mean values of each group (dengue patients vs non-dengue patients). Damage was quantified as the percentage of affected area for (k) hemorrhage and (l) edema, or (m) by steatosis degree using a scale ranging from 0 to 4. (n-p) Steatosis was also evaluated in each hepatic acinus zone (periportal, midzonal and central vein) by plotting the different damage degrees (ten fields for each case). Asterisks indicate statistically significant differences between the control and dengue groups: (*) P<0.05 and (***) P<0.0001. doi:10.1371/journal.pone.0083386.g001
Detection of Dengue Virus in the Lung
Virus antigens were detected in the lung of all dengue cases, mainly in alveolar macrophages (Fig. 4a), but also in type II pneumocytes (Fig. 4a) and the endothelium (Fig. 4b). Quantification of cells with dengue antigens revealed that cases 1 and 2 presented the highest numbers of positive cells (Fig. 4c). Virus replication in the lung of dengue cases was observed by immunochemistry, with the presence of the NS3 protein in alveolar macrophages, type II pneumocytes (Fig. 4d) and the endothelium (Fig. 4e). As expected, none of the lung tissues from non-dengue cases showed virus antigens (data not shown). In situ hybridization for detection of the dengue RNA negative strand also revealed virus replication in several alveolar macrophages, type II pneumocytes and the endothelium (Figs. 4h and 4i). Dengue replication in these cells was confirmed by co-localization of the virus RNA (fluorescent blue) with CD31 (fluorescent red), for identification of endothelial cells, or cytokeratin AE1/AE3 (fluorescent green), a marker for pneumocytes (Figs. 4j and 4k). As expected, no positive reaction was observed in dengue cases treated with omission of the probe (Fig. 4g) or in non-infected tissues incubated with the probe (Fig. 4f).
Heart of DENV-3 Fatal Cases: Histopathological and Ultrastructural Aspects
Histopathological analysis of the heart showed parenchymal and circulatory alterations in all DENV-3 cases. As expected, non-dengue patients presented a normal cardiac structure with branching fibers, central nuclei and intercalated discs (Figs. 5a, 5d and 5f). On the other hand, all dengue cases, except case 2, presented myocarditis with cardiac fiber degradation and loss of striations and nuclei (Figs. 5b and 5c). Besides, focal areas of mononuclear infiltrates were detected in all dengue cases (Figs. 5b and 5c). Semi-thin sections revealed degeneration of cardiac fibers with absence of nuclei and diffuse interstitial edema, characteristic of myocarditis (Fig. 5e). Ultrastructural analysis suggested an apoptotic process in the cardiac fibers, with pyknotic nuclei and mitochondrial alterations (Fig. 5g), which was not detected in the non-dengue patient (Fig. 5f).
Hemorrhage and edema were observed in all dengue cases (Fig. 5b). Quantification of these damages revealed that cases 2 and 3 presented extensive areas of hemorrhage (Fig. 5i), while case 1, who had diabetes and was diagnosed as having ischemic cardiomyopathy, exhibited several areas with edema (Fig. 5j). Ultrastructural evaluation also showed interstitial edema around capillary vessels in this case (Fig. 5h).
Detection of Dengue Virus in the Heart
Virus antigens were detected mainly in the myocardial fibers in the perinuclear region (Figs. 6a and 6b), but also in monocytes/macrophages and the endothelium (Fig. 6c). All dengue cases presented virus antigens, with the greatest numbers of positive cells in cases 3 and 4, while no reaction was observed in control tissues (Fig. 6d). Virus replication in the heart of dengue cases was observed by immunochemistry, with detection of the NS3 protein in the same cell types stained for the other virus antigens (Figs. 6e and 6f). As expected, none of the heart tissues from non-dengue cases showed such antigens (data not shown). DENV replication was also confirmed by in situ hybridization, and the results revealed strong and weak positive reactions in the endothelium and cardiac fibers, respectively (Fig. 6i). As expected, no positive reaction was observed in dengue cases treated with omission of the probe (Fig. 6g) or in non-infected cases incubated with the probe (Fig. 6h).
Kidney of DENV-3 Fatal Cases: Histopathological and Ultrastructural Aspects
Histopathological analysis of the kidney of the DENV-3 cases showed parenchymal and circulatory damage. As expected, in non-dengue patients we observed a normal structure with preserved distal and proximal convoluted tubules and intact renal glomeruli (Fig. 7a). Semi-thin and ultrathin sections of non-dengue cases showed podocytes around the glomerular capillaries, endothelial and mesangial cells, as well as distal and proximal convoluted tubules presenting cuboidal cells containing a brush border, and capillary vessels with regular structures (Figs. 7d, 7e and 7h). In contrast, three dengue cases (1, 2 and 4) presented acute tubular necrosis, characterized by sloughing of necrotic cells and loss of the basement membrane mainly in proximal convoluted tubules, but also to a lesser extent in distal tubules, with formation of casts of cellular debris (Figs. 7b and 7g). Ultrastructural analysis revealed pyknotic nuclei and dilatation of the endoplasmic reticulum in these necrotic cells (Fig. 7i). On the other hand, cellular regeneration was also found in the cortical region, indicating an initial recovery of the organ (Fig. 7b).
Several focal areas with hemorrhage and edema were observed in all dengue cases, located preferentially in the medullar region (Fig. 7c). Quantification of these damages showed that cases 1, 2 and 4 presented more areas with hemorrhage and edema than case 3 (Figs. 7j and 7k). Circulatory disorders were also observed in semi-thin sections, revealing the presence of thrombi in glomerular capillaries (Fig. 7g).
Detection of Dengue Virus in the Kidney
Virus antigens were detected by immunohistochemical analysis, revealing that all dengue cases presented similar numbers of positive cells (Fig. 7m), mainly in circulating macrophages and monocytes within blood vessels (Fig. 7l). As expected, non-dengue patients did not present virus antigens. However, no virus replication could be detected in the kidney of any of the four dengue cases, evaluated either by the presence of the dengue NS3 protein or of the virus RNA negative strand (data not shown).
Spleen of DENV-3 Fatal Cases: Histopathological and Ultrastructural Aspects
Histopathological analysis of the spleen of the four DENV-3 cases showed severe parenchymal and circulatory dysfunction. The spleen of non-dengue patients presented a regular structure, with normal splenocytes and well-defined regions of white and red pulp (Figs. 8a, 8d and 8f). On the other hand, dengue cases revealed a prominent parenchymal lesion with remarkable atrophy of lymphoid follicles (Fig. 8b), disruption of the structural pattern and destruction of germinal centers (Fig. 8c). Quantitative analysis of lymphoid follicle areas revealed an approximately twofold reduction of these follicles in the white pulp of dengue cases when compared to non-dengue patients (Fig. 8i). Additionally, analysis of semi- and ultra-thin sections revealed areas with vacuolization around degenerated splenocytes (Figs. 8e and 8g) and loss of endothelium of sinusoids (Fig. 8h), which were not observed in the non-dengue cases (Figs. 8d and 8f).
Several areas with vascular congestion and edema were observed in all dengue cases, located preferentially in red pulp (Figs. 8b and 8c). Quantification of these damages showed that cases 1 and 2, both with co-morbidities, had more extensive areas of congestion (Fig. 8j), while case 1 also presented more diffuse areas of edema (Fig. 8k).
Detection of Dengue Virus in the Spleen
Viral antigens were observed only in dengue cases, detected in circulating macrophages located in red pulp (Fig. 9a). Quantification of dengue positive cells showed high cell numbers in cases 2 and 4 (Fig. 9b). Dengue replication in spleen was observed in these same cells, by detection of the NS3 protein (Fig. 9c) and virus RNA (Fig. 9f). Macrophages with replicating virus were identified morphologically ( Fig. 9c and 9f), as well as with the cell marker CD68 (fluorescent red), which co-localized with the dengue RNA (fluorescent blue) (Fig. 9g). As expected, reaction was not observed in tissues treated without the probe (Fig. 9d) or in non-dengue patient samples incubated with the probe (Fig. 9e).
Discussion
Clinical observations of dengue patients and postmortem studies have provided important insights into dengue pathophysiology, although many gaps in understanding it remain. In the present work, we analyzed tissue samples (liver, lung, heart, kidney and spleen) from four dengue fatal cases concerning their histopathological and ultrastructural aspects, with concomitant detection of virus in these tissues. These organs were chosen because they have been associated with dengue infection in several reports in the literature [17][18][19][23][24][25][26][27], although their involvement in the disease and in death is still not clear. We observed lesions characteristic of DHF/DSS, such as hemorrhage and edema, in all organs. These observations were expected since severe dengue is usually associated with an increase of vascular permeability, which tends to lead to plasma leakage [40][41][42]. Damages, which were diffuse throughout the organs, were quantified, and a particular profile of these lesions was observed in each organ.
The first organ we analyzed was the liver, which is commonly involved in dengue infection, normally leading to elevated serum levels of some hepatic enzymes [43][44][45]. We observed remarkable metabolic alterations in the hepatocytes of three DENV cases (1, 2 and 3), with the presence of single or multiple small lipid vesicles (microsteatosis) and/or large vesicles (macrosteatosis). Steatosis has been associated with dengue infection in human reports [21,22], as well as in experimental animal models [31,46] and in vitro studies [47,48]. In fact, some studies suggested a link between lipid droplets and viral replication, with the involvement of the capsid [48] and non-structural 3 dengue proteins and an increase in cellular fatty acid synthesis [48]. Consequently, such lipid vesicles may contribute to the spread of the virus through the hepatic tissue and subsequently to other organs. Furthermore, our analysis of steatosis degrees in the three hepatic zones revealed a heterogeneous pattern, in which the highest degrees were observed in zone I, around the portal space. This zone, described in the literature as a high oxygen content area and the first to be regenerated [49], is likewise more affected in hepatitis C cases [50], an infection by another flavivirus that also seems to use lipid vesicles for replication [51,52]. However, in yellow fever fatal cases, Quaresma et al. [33] observed that steatosis was more intense in zone II, the midzonal area.
Quantification of liver damages comparing the four dengue cases revealed that case 2 presented the highest degree of steatosis and also focal areas of necrosis associated with mononuclear infiltrate. The existence of co-morbidity in this case, a young woman with obesity, might be one explanation for such observation. It is well known that obesity results in hepatic nonalcoholic steatosis, with mild increase of alanine and aspartate aminotransferases (ALT and AST, respectively) serum levels [53,54]. In fact, case 2 also presented increased level of AST (149 IU/L), suggesting a liver dysfunction.
Other hepatocyte alterations, such as vacuolar degeneration of the nucleus and the presence of swollen mitochondria, were observed in these cases by ultrastructural analysis, suggesting an apoptotic process. Liver biopsies and autopsies obtained from either children or adults infected with DENV also presented apoptotic cells, mainly hepatocytes and Kupffer cells [21,22,55]. However, evaluation of hepatic alterations by electron microscopy in human dengue cases is rare [29]. The swelling of mitochondria with loss of their matrix structure was also observed in dengue-infected mice [30,56], and it has been associated with alterations of the ATP balance leading to apoptosis in a hepatocyte cell line infected with DENV [46]. In fact, in vitro studies with several cell lines from different tissues, such as liver, lung, kidney and vascular endothelial cells, showed that dengue infection by itself can induce apoptosis [57][58][59] without the involvement of other host factors, including the various components of the immune response. However, apoptosis, as well as other damages, can be exacerbated in vivo by activation of inflammatory responses [15,60].
Immunohistochemical analysis revealed the presence of dengue antigens abundantly in hepatocytes and sparsely in Kupffer/macrophage and endothelial cells, although macrophages have been pointed to as one of the first target cells after infection [61]. Previous studies also detected DENV antigens in hepatocytes of human dengue cases [62,63], while other reports observed virus antigens mainly in Kupffer cells [64]. However, using viral antigens such as the envelope or membrane proteins inside cells as evidence of replication may be questionable, especially in macrophages, since these antigens can originate from phagocytosed or killed virus. Therefore, we also evaluated the presence of the NS3 protein in such cells, since this is a non-structural protein that is only observed after virus replication.
Results showed that this antigen was detected in the same cells (hepatocytes, Kupffer and endothelial cells), thus evidencing virus replication. Furthermore, we confirmed these findings by in situ hybridization for the dengue RNA negative strand, which is only present inside cells during replication and is a robust tool to verify virus tropism. Our results revealed a strong hybridization signal in hepatocytes, confirming replication in these cells. Similar results were also observed in mice after dengue infection [31].
Another highly affected organ in all four dengue cases was the lung. In fact, clinical and necropsy data, as well as our histopathological analysis, indicated that all patients died from acute pulmonary edema. Pulmonary complications during dengue infection are scarcely described and are characterized mainly by hemoptysis, pulmonary hemorrhage and congestion of alveolar septa, which may lead to the rupture of alveolar walls [8,17,24,65]. In the present work, we observed the highest percentage of areas with hemorrhage and edema, compared with the other organs, and several mononuclear infiltrates as well as hyperplasia of alveolar macrophages in all four dengue cases. Furthermore, we noted that cases 1 and 2, who also had co-morbidities (diabetes and obesity, respectively), showed strong septum thickening associated with an increase of cellularity. Damage in the lung tissue from these two cases indicated that they had suffered dengue shock syndrome, which led to hyaline membrane formation, as described elsewhere [17,24]. The presence of hyaline membrane is also found in the lung of patients with shock in Acute Respiratory Distress Syndrome (ARDS), but its pathogenesis seems to be different from dengue, since ARDS leads mainly to neutrophilic inflammation [66,67], whereas in dengue cases we observed only mononuclear infiltrates. The abnormalities in the lung of the dengue cases were confirmed by electron microscopy, which also revealed that type II pneumocytes were highly predominant in the alveoli, mainly in case 2 and to a lesser extent in case 1, indicating that injured type I pneumocytes were removed and replaced by type II pneumocytes, which, in turn, proliferated and became hypertrophic.
(Figure 8 legend, continued: quantitative studies were made individually in dengue cases 1-4 and non-dengue controls, with statistical analysis comparing the mean values of each group, dengue patients vs non-dengue patients; the mean lymphoid follicle areas (i) and the percentage of areas with vascular congestion (j) and edema (k) were quantified; asterisks indicate statistically significant differences between the control and dengue groups, (*) P<0.05 and (***) P<0.0001. doi:10.1371/journal.pone.0083386.g008)
The lung was also the organ with the highest number of cells positive for virus antigens, indicating the importance of this organ in the disease, at least in severe cases. In fact, virus-like particles, detected by electron microscopy, were only observed in the lung. Dengue antigens (in general and specifically NS3) and the negative RNA strand were observed in alveolar macrophages, endothelial cells and type II pneumocytes, indicating virus replication in these cells. Such results are in part in accordance with other findings [63,64], which detected virus antigens in alveolar macrophages and endothelial cells. However, as far as we know, this is the first report showing virus replication also in pneumocytes. Furthermore, these results were confirmed by co-localization assays for the virus RNA and pneumocyte-specific markers.
Investigation of the heart showed focal areas with mononuclear infiltrates in all the dengue cases. Moreover, in three cases (1, 3 and 4) we noted degeneration of cardiac fibers, with absence of nuclei and loss of striations, resulting from an interstitial edema, which suggests myocarditis. In fact, clinical data from case 1, who was diabetic, indicated that this patient also had cardiac problems, while necropsy data from cases 3 and 4 revealed hypertrophic cardiomyopathy. Virus antigens were detected in cardiac fibers, as well as in the endothelium and monocytes/macrophages. Moreover, immunohistochemistry for detection of NS3 and in situ hybridization also indicated replication in cardiac fibers, which is a surprising result and confirms data from Salgado et al. [27], who detected virus antigens in these fibers. Furthermore, cardiac dysfunction has been reported in other histopathological and clinical evaluations of dengue patients [68,69]. Taken together, such findings suggest that direct infection of cardiac fibers by the virus may be responsible, at least in part, for heart dysfunction. However, besides the direct cytotoxic effect of the virus in cardiac fibers, exacerbation of the host immune response leading to strong cytokine expression may contribute to the observed tissue damage. Injuries also seem to involve apoptosis, which was suggested by the presence of pyknotic nuclei in cardiomyocytes and loss of mitochondrial integrity, revealed by our ultrastructural evaluations.
Regarding the kidney, the main lesion we observed was an acute tubular necrosis of proximal convoluted tubules, particularly in cases 1, 2 and 4, caused by ischemic processes, which is common in patients with severe hypovolemic shock and significant blood volume loss [70]. In this process, there is sloughing of necrotic cells of the brush border and loss of the basement membrane in tubules, with formation of casts of cellular debris. Ultrastructural analysis revealed the presence of pyknotic nuclei and dilatation of the endoplasmic reticulum, suggesting the existence of both cell death processes, necrosis and apoptosis. It would be expected that case 1, who had diabetes, would present the most severe nephropathy, with glomerular hypertrophy indicating hyperfiltration, characteristic of this disease [71]. However, we did not observe such pathology in this case, although he exhibited elevated levels of creatinine and urea (5.0 mg/dL and 57 mg/dL, respectively), clearly demonstrating a renal disorder. On the other hand, the dengue infection in this case potentiated vascular damage in the pancreas, evidenced by the presence of arterioles with altered vascular walls and a predominance of mononuclear cells (macrophages) within the connective tissue around the pancreatic islets (data not shown).
Interestingly, the kidney was the only organ where we did not observe DENV replication, either by detection of the NS3 protein or the virus RNA negative strand. However, we did note the presence of virus antigens in general, probably the dengue E and M proteins, in monocytes/macrophages in all studied cases. Results suggested that these antigens can be derived from immune complexes reabsorbed by such cells. Thus, these findings suggest that the kidney is not a target organ in dengue infections and that the observed injuries are not caused directly by virus replication in this organ, but due to vascular fragility probably induced by the host immune response. In fact, besides edema and hemorrhage we also observed a thrombus formation in glomerular capillaries.
On the other hand, the most remarkable effect of the dengue infection in the spleen was the atrophy of lymphoid follicles, a T and B lymphocyte rich region, observed in all four cases, in which the follicles were approximately twofold smaller than in controls. In addition to the disruption of the follicle architecture, we also noted destruction of the germinal centers. Such results are in accordance with those of Limonta et al. [29], who reported a decrease in the amount of splenic lymphocytes in one dengue fatal case, which suggests atrophy of lymphoid follicles. Reduction of the CD4+ and CD8+ T cell populations was also observed by flow cytometry in the spleen of mice infected with DENV2 [71,72]. As expected, virus replication was detected in circulating macrophages located in the splenic red pulp of the four dengue cases, corroborating other studies [63,64].
Overall, dengue infection involves circulatory disturbances, mainly hemorrhage and edema, described in several reports [19,20,40]. In addition, the histopathological analysis performed in the present work allowed us to identify megakaryocytes and platelets in alveolar spaces, thrombi in glomerular capillaries and loss of endothelium of splenic and hepatic sinusoids. Furthermore, we detected, by electron microscopy, platelets adhered to endothelial cells in the liver, heart and spleen, which is normally not observed in healthy individuals. All such findings are probably a response to vascular damage and should play an essential role in hemostasis. Moreover, other studies showed impaired thrombopoiesis and suppression of megakaryopoiesis in dengue patients [73,74,75], suggesting that infection causes extramedullary effects as a physiological compensatory mechanism that occurs when the bone marrow is unable to meet the demand for blood cells [75].
The vascular alterations observed in dengue cases, in turn, may be a consequence of the imbalance of the host immune system, especially cytokine storm, cytotoxic T cell and complement activation [14,15,76,77], in addition to endothelial injuries caused by direct infection of these cells by the virus, as also reported by several in vitro studies [57][58][59]. Further studies will be necessary in order to investigate the contribution of the host immune response to the observed tissue damage. | 9,549.4 | 2014-04-15T00:00:00.000 | [ "Medicine", "Biology" ] |
SURFACE AND SUBSURFACE RESIDUAL STRESSES AFTER MACHINING AND THEIR ANALYSIS BY X-RAY DIFFRACTION
Andrej Czan - Eva Tillova - Jan Semcer - Jozef Pilc

Process specifications and working procedures widely used by the aerospace and automotive industries require surface analysis after machining and specify the process parameters and their verification, typically by destructive measurement or simulation. Destructive measurements or simulations carried out in order to optimise and later to verify the process parameters are a very indirect way of measurement. Since they are performed on samples only similar in composition and elastic properties to those of the actual part to be machined, they almost never match all the important conditions of the process, such as the shape of the real part or the residual stress prior to the treatment. Consequently, the residual stresses and their depth distribution after machining may differ very significantly from those required by the technologist. The only reliable way to verify that the operation has produced the desired effect is to actually measure the stresses in the machined component.
Introduction
Residual stresses are an integral part of manufactured workpieces, whether they are introduced deliberately, as a part of the design, as a by-product of a process carried out during the manufacturing process, or are present as the product of the component's service history. Residual stresses are additive with the stresses existing in the workpieces as a result of service loads. Clearly, they may be considered beneficial to the workpieces and therefore desirable, they may be irrelevant and can be ignored, or they are a potential detriment to the workpieces and their continued service life. Given an adequate history, the magnitude of residual stresses in parts that are in service can be considered as indicators of the workpiece's deterioration [1].
Obviously, to realize the benefits of understanding the residual stresses in parts and structures, tools are needed to measure them. Several techniques are available, with varying degrees of sophistication. Some of them are rather limited in their application, but one stands out as having widespread applications and being readily available.
X-ray diffraction is applicable to crystalline materials, which include virtually all metals and their alloys, and most ceramic materials. It is a non-destructive detection technology in many applications, and is widely accepted by the engineering community, being the subject of SAE and ASTM publications, which provide reliable sources of information on methods to ensure repeatability and reliability in the results of measurements. Because individual measurements are non-destructive, they can be replicated to demonstrate their statistical reliability. This article will look closely at the methodology of residual stress measurement by X-ray diffraction, explore the characteristics of measurements performed using modern X-ray methods, and offer a few practical examples [2].
Principles of X-Ray Diffraction Stress Measurement
Macroscopic stresses, which extend over distances that are large relative to the grain size of the material, are of general interest in design and failure analysis. Macroscopic stresses are tensor quantities, with magnitudes varying with direction at a single point in a component. The macroscopic stress for a given location and direction is determined by measuring the strain in that direction at a single point. When macroscopic stresses are determined in at least three known directions, and a condition of plane stress is assumed, the three stresses can be combined using Mohr's circle for stress to determine the maximum and minimum residual stresses, the maximum shear stress, and their orientation relative to a reference direction. Macroscopic stresses strain many crystals uniformly in the surface. This uniform distortion of the crystal lattice shifts the angular position of the diffraction peak selected for residual stress measurement [1].
Microscopic stresses are scalar properties of the sample, such as percent of cold work or hardness, which are without direction
and result from imperfections in the crystal lattice. Microscopic stresses are associated with strains within the crystal lattice that traverse distances on the order of, or less than, the dimensions of the crystals. Microscopic stresses vary from point to point within the crystal lattice, altering the lattice spacing and broadening the diffraction peak. Macroscopic stresses and microscopic stresses can thus be determined individually, from the diffraction peak position and breadth, respectively [1]. Figure 1 describes the diffraction of a monochromatic beam of X-rays at a high diffraction angle 2θ from the surface of a stressed sample for two orientations of the sample relative to the X-ray beam. The angle ψ, defining the orientation of the sample surface, is the angle between the normal of the surface and the incident and diffracted beam bisector, which is also the angle between the normal to the diffracting lattice planes and the sample surface [3].
Diffraction occurs at an angle 2θ, which is defined by Bragg's Law: nλ = 2d sin θ, where n is an integer denoting the order of diffraction, λ is the X-ray wavelength, d is the lattice spacing of the crystal planes, and θ is the diffraction angle. For the monochromatic X-rays produced by the metallic target of an X-ray tube, the wavelength is known to 1 part in 10^5. Any change in the lattice spacing d results in a corresponding shift in the diffraction angle 2θ. Figure 1a shows the sample in the ψ = 0 orientation. The presence of a tensile stress in the sample results in a Poisson's ratio contraction, reducing the lattice spacing and slightly increasing the diffraction angle 2θ. If the sample is then rotated through some known angle ψ (Fig. 1b), the tensile stress present in the surface increases the lattice spacing over the stress-free state and decreases 2θ. Measuring the change in the angular position of the diffraction peak for at least two orientations of the sample defined by the angle ψ enables calculation of the stress present in the sample surface lying in the plane of diffraction, which contains the incident and diffracted X-ray beams. To measure the stress in different directions at the same point, the sample is rotated about its surface normal so that the direction of interest coincides with the diffraction plane. Because only elastic strain changes the mean lattice spacing, only elastic strains are measured using X-ray diffraction for the determination of macroscopic stresses. When the elastic limit is exceeded, further strain results in dislocation motion, disruption of the crystal lattice, and the formation of microscopic stresses, but no additional increase in macroscopic stress. Although residual stresses result from non-uniform plastic deformation, all residual macrostresses remaining after deformation are necessarily elastic.
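A small sketch of the calculation implied by Bragg's law: converting a measured peak position 2θ into a lattice spacing d and a lattice strain relative to a reference spacing d0. The wavelength and angles below are assumed, Cr Kα-like values chosen for illustration only, not measurements from this article.

```python
# Minimal sketch of the lattice-spacing calculation implied by Bragg's law
# (n*lambda = 2*d*sin(theta)). Wavelength and peak positions are hypothetical.
import math

wavelength = 2.2897            # Angstrom, assumed Cr K-alpha1-like value
two_theta_measured = 156.20    # degrees, hypothetical peak position
two_theta_reference = 156.00   # degrees, hypothetical stress-free position

def lattice_spacing(two_theta_deg, lam=wavelength, n=1):
    """Return d (Angstrom) from the diffraction angle 2-theta (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * lam / (2.0 * math.sin(theta))

d = lattice_spacing(two_theta_measured)
d0 = lattice_spacing(two_theta_reference)
strain = (d - d0) / d0          # elastic lattice strain in the measured direction
print(f"d = {d:.5f} A, d0 = {d0:.5f} A, strain = {strain:.3e}")
```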
The residual stress determined using X-ray diffraction is the arithmetic average stress in a volume of material defined by the irradiated area, which may vary from square centimetres to square millimetres, and the depth of penetration of the X-ray beam. The linear absorption coefficient of the material for the radiation used governs the depth of penetration, which can vary considerably. However, in iron, nickel, and aluminium-based alloys, 50% of the radiation is diffracted from a layer approximately 0.005 mm deep for the radiations generally used for stress measurement. This shallow depth of penetration allows determination of macro and microscopic residual stresses as functions of depth, with depth resolution approximately 10 to 100 times that possible using other methods. Although in principle virtually any interplanar spacing may be used to measure strain in the crystal lattice, the availability of the wavelengths produced by commercial X-ray tubes limits the choice to a few possible planes. The choice of a diffraction peak selected for residual stress measurement impacts significantly on the precision of the method. The higher the diffraction angle, the greater the precision. Practical techniques generally require diffraction angles, 2θ, greater than 120° [4].
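The depth sampled by the beam can be estimated from the linear absorption coefficient. The sketch below assumes the standard symmetric-geometry expression G(x) = 1 − exp(−2μx/sin θ) for the fraction of diffracted intensity coming from the top x of the surface; both this formula and the numerical values are assumptions added for illustration and are not given in the text above.

```python
# Rough estimate of the depth contributing 50% of the diffracted intensity.
# Assumes the symmetric-geometry expression G(x) = 1 - exp(-2*mu*x/sin(theta));
# the absorption coefficient and Bragg angle are illustrative, not from this study.
import math

mu = 900.0                   # linear absorption coefficient, 1/cm (hypothetical)
theta = math.radians(78.0)   # Bragg angle (2-theta = 156 deg, hypothetical)

x50_cm = math.sin(theta) * math.log(2.0) / (2.0 * mu)
print(f"50% of the diffracted intensity comes from the top {x50_cm * 1e4:.1f} um")
```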
Plane-Stress Elastic Model

X-ray diffraction stress measurement is confined to the surface of the sample; electropolishing is used to expose new surfaces for subsurface measurement. (Fig. 1: Principles of X-ray diffraction stress measurement; N denotes the normal to the surface [3].) In the exposed surface layer, a condition of plane stress is assumed to exist. That is, a stress distribution described by the principal stresses σ1 and σ2 exists in the plane of the surface, and no stress is assumed perpendicular to the surface, σ3 = 0. However, a strain component perpendicular to the surface, ε3, exists as a result of the Poisson's ratio contractions caused by the two principal stresses (Fig. 2) [5].
The strain, ε_φψ, in the direction defined by the angles φ and ψ is:

ε_φψ = [(1 + ν)/E] (σ1 α1² + σ2 α2²) − (ν/E)(σ1 + σ2)   (1)

where E is the modulus of elasticity, ν is the Poisson's ratio, and α1 and α2 are the angle cosines of the strain vector:

α1 = cos φ sin ψ,   α2 = sin φ sin ψ   (2)

Substituting for the angle cosines in Eq. 1 and simplifying enables the strain to be expressed in terms of the orientation angles:

ε_φψ = [(1 + ν)/E] (σ1 cos²φ + σ2 sin²φ) sin²ψ − (ν/E)(σ1 + σ2)   (3)

If the angle ψ is taken to be 90°, the strain vector lies in the plane of the surface, and the surface stress component, σ_φ, is:

σ_φ = σ1 cos²φ + σ2 sin²φ   (4)

Substituting Eq. 4 into Eq. 3 yields the strain in the sample surface at an angle φ from the principal stress σ1:

ε_φψ = [(1 + ν)/E] σ_φ sin²ψ − (ν/E)(σ1 + σ2)   (5)

Equation 5 relates the surface stress σ_φ, in any direction defined by the angle φ, to the strain, ε_φψ, in the direction (φ, ψ) and the principal stresses in the surface. If d_φψ is the spacing between the lattice planes measured in the direction defined by φ and ψ, the strain can be expressed in terms of changes in the linear dimensions of the crystal lattice:

ε_φψ = (d_φψ − d0)/d0   (6)

where d0 is the stress-free lattice spacing. Substitution into Eq. 5 then gives the lattice spacing for any orientation:

d_φψ = [(1 + ν)/E]_(hkl) σ_φ d0 sin²ψ − (ν/E)_(hkl) d0 (σ1 + σ2) + d0   (7)

where the elastic constants [(1 + ν)/E]_(hkl) and (ν/E)_(hkl) are not the bulk values but the values for the crystallographic direction normal to the lattice planes in which the strain is measured, as specified by the Miller indices (hkl). Because of elastic anisotropy, the elastic constants in the (hkl) direction commonly vary significantly from the bulk mechanical values, which are an average over all possible directions in the crystal lattice. Equation 7 describes the fundamental relationship between lattice spacing and the biaxial stresses in the surface of the sample: the lattice spacing d_φψ is a linear function of sin²ψ.
The intercept of the plot at sin²ψ = 0 is:

d_φ0 = d0 [1 − (ν/E)_(hkl) (σ1 + σ2)]   (8)

which equals the unstressed lattice spacing, d0, minus the Poisson's ratio contraction caused by the sum of the principal stresses. The slope of the plot is:

∂d_φψ/∂(sin²ψ) = [(1 + ν)/E]_(hkl) σ_φ d0

which can be solved for the stress σ_φ:

σ_φ = [E/(1 + ν)]_(hkl) (1/d0) [∂d_φψ/∂(sin²ψ)]

The X-ray elastic constants can be determined empirically, but the unstressed lattice spacing, d0, is generally unknown. However, because E ≫ (σ1 + σ2), the value of d_φ0 from Eq. 8 differs from d0 by not more than ±1%, and σ_φ may be approximated to this accuracy using:

σ_φ ≈ [E/(1 + ν)]_(hkl) (1/d_φ0) [∂d_φψ/∂(sin²ψ)]

The method then becomes a differential technique, and no stress-free reference standards are required to determine d0 for the biaxial stress case. The three most common methods of X-ray diffraction residual stress measurement, the single-angle, two-angle, and sin²ψ techniques, assume plane stress at the sample surface and are based on the fundamental relationship between lattice spacing and stress given in Eq. 7 [6].
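To make the sin²ψ procedure concrete, the sketch below fits a straight line to hypothetical d versus sin²ψ data and converts the slope into a stress using the slope relation above, with d0 approximated by the intercept d_φ0. The elastic constants and lattice spacings are assumed illustrative values, not data from this study.

```python
# Sketch of the sin^2(psi) evaluation: fit d vs sin^2(psi) with a straight line
# and convert the slope into a stress. All numbers below are hypothetical.
import numpy as np

E_hkl = 168_900.0   # MPa, assumed X-ray elastic modulus for the reflection used
nu_hkl = 0.28       # assumed Poisson's ratio for the same reflection

psi_deg = np.array([0.0, 18.4, 26.6, 33.2, 39.2, 45.0])
d_meas = np.array([1.17025, 1.17032, 1.17038, 1.17044, 1.17050, 1.17057])  # Angstrom

x = np.sin(np.radians(psi_deg)) ** 2
slope, intercept = np.polyfit(x, d_meas, 1)   # d = slope*sin^2(psi) + intercept

# Stress from the slope, with d0 approximated by the intercept d_phi0
# (error below ~1%, as discussed in the text).
stress = (E_hkl / (1.0 + nu_hkl)) * slope / intercept
print(f"slope = {slope:.3e} A, d_phi0 = {intercept:.5f} A, stress = {stress:+.0f} MPa")
```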
The single-angle technique, or single-exposure technique, derives its name from early photographic methods that require a single exposure of the film. The method is generally considered less sensitive than the two-angle or sin²ψ techniques, primarily because the possible range of ψ is limited by the diffraction angle 2θ [7]. Figure 3 shows the basic geometry of the method. A collimated beam of X-rays is inclined at a known angle, β, from the sample surface normal. X-rays diffract from the sample, forming a cone of diffracted radiation originating at point 0. The diffracted X-rays are recorded using film or position-sensitive detectors placed on either side of the incident beam. The presence of a stress in the sample surface varies the lattice spacing slightly between the diffracting crystals shown at points 1 and 2 in Fig. 3, resulting in slightly different diffraction angles on either side of the X-ray beam. If S1 and S2 are the arc lengths along the surface of the film or detectors at a radius R from the sample surface, the stress can be calculated from these arc lengths (Eq. 11). The angles ψ1 and ψ2 are related to the Bragg diffraction angles θ1 and θ2 and the angle of inclination of the instrument, β, by Eq. 12. The precision of the method is limited by the fact that increasing the diffraction angle 2θ to achieve precision in the determination of lattice spacing reduces the possible range of sin²ψ, lessening sensitivity. The single-angle technique is generally not used, except in film and position-sensitive detector apparatuses designed for high-speed measurement [7].
Quantitative and Qualitative Stress Analysis
The conventional way of measuring surface residual stresses is X-ray diffraction (XRD). This is a well-established, quantitative, absolute method and provides accurate stress values. In machined components the beneficial maximum compressive stresses lie beneath the surface, and thus evaluation below the surface is necessary to verify the result of the machining. Because the measurement depth of XRD is shallow, measuring subsurface stresses by XRD requires successive electrochemical removal of material and repeated XRD measurements. Such a procedure is acceptable for laboratory evaluations on selected samples. Difficult-to-reach areas such as holes, fillets or the roots of gears pose an additional difficulty. XRD is nevertheless irreplaceable for obtaining a true and complete picture of the effects of machining. In particular, the very steep stress gradients produced by machining of very hard steels are well resolved by this technique. It should also be mentioned that, in addition to the stress profile, the effects of plastic deformation, work hardening or softening of the machined surface can be illustrated and quantified by the XRD measurement (Fig. 4).
Material and experimental technique
Experimental studies were made on a high-carbon rolled steel bar (60 mm diameter and 500 mm long). The chemical analysis of the steel, conducted on a direct-reading spectrometer, determined its composition as 1.14% C, 0.46% Mn, 0.16% Si, 0.11% S and 0.04% P. Round slices cut from the steel bar were shaped into shafts (59 mm × 150 mm) by subsequent machining. The thickness of the samples subjected to inhomogeneous plastic deformation was approximately 30 mm, whereas that of the samples subjected to thermal and phase transformation was approximately 15 mm. All these samples were made free from residual stresses by annealing them in a muffle furnace as follows: heating rate, 165 °C/h; soaking time, 1 h; soaking temperature, 850 °C; cooling rate, 30 °C/h (Fig. 5 and Fig. 6).
Residual stress measurements were made on five machined samples: three samples having thermal residual stresses and two samples having thermal and phase transformation stresses. The specimens were carefully prepared and made free from scale and dirt. The samples were clamped to the work holder by tightening the chuck to avoid sliding during the experiment. Residual stresses were measured in the area located at the middle, i.e. at half the length, of all the samples. The residual stresses were measured on the X-ray diffractometer using the Strainflex X-ray analyzer.
Fig. 4 Measurement system for residual stresses in X-ray diffraction
From the theory of elasticity, the relationship between residual stress (σ) and strain (ε) on the specimen surface under plane stress is obtained through the Bragg equation, λ = 2d sin θ, which relates the incident X-ray wavelength (λ), the lattice interplanar spacing (d) and the diffraction angle (θ).
The magnitude and direction of the maximum residual stress created by machining can be determined by X-ray diffractometry. The direction of the maximum residual stress, that is, the most tensile or least compressive, is assumed to occur in the cutting or grinding direction during most machining operations. This is frequently the case, but the maximum stress often occurs at significant angles to the cutting direction. Furthermore, the residual stress distributions produced by many cutting operations, such as turning, may be highly eccentric, producing a highly tensile maximum stress and a highly compressive minimum stress [8].
The residual stress field at a point, assuming a condition of plane stress, can be described by the minimum and maximum normal principal residual stresses, the maximum shear stress, and the orientation of the maximum stress relative to some reference direction. The minimum stress is always perpendicular to the maximum. The maximum and minimum normal residual stresses, shown as σ 1 and σ 2 in Fig. 2, and their orientation relative to a reference direction can be calculated along with the maximum shear stress using Mohr's circle for stress if the stress σ ϕ is determined for three different values of ϕ [9].
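A minimal sketch of the Mohr's-circle combination just described: given normal stresses measured at φ = 0°, 45° and 90°, it returns the principal residual stresses, the maximum shear stress and the orientation of the maximum stress. The input stresses are hypothetical, not the measured profiles reported below.

```python
# Sketch of combining three measured normal stresses (phi = 0, 45, 90 deg)
# via Mohr's circle into principal stresses, maximum shear and orientation.
# Input values are hypothetical, in MPa.
import math

s0, s45, s90 = 120.0, -150.0, -480.0   # measured stresses at 0, 45, 90 deg

tau_xy = s45 - (s0 + s90) / 2.0        # in-plane shear stress
center = (s0 + s90) / 2.0
radius = math.hypot((s0 - s90) / 2.0, tau_xy)

sigma_max = center + radius
sigma_min = center - radius
tau_max = radius
phi_max = 0.5 * math.degrees(math.atan2(2.0 * tau_xy, s0 - s90))  # from the 0-deg axis

print(f"sigma_max = {sigma_max:+.0f} MPa at {phi_max:+.1f} deg")
print(f"sigma_min = {sigma_min:+.0f} MPa, tau_max = {tau_max:.0f} MPa")
```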
To investigate the minimum and maximum normal residual stresses and their orientation produced by turning of samples, X-ray diffraction residual stress measurements were performed in the longitudinal, 45°, and circumferential directions at the surface and at subsurface layers to a nominal depth of 0.1 mm, exposing the subsurface depths by electropolishing complete cylindrical shells around the cylinder. The cylinder was nominally 59 mm in diameter and uniformly turned along a length of several inches. The irradiated area was limited to a nominal height of 1 mm around the circumference by 2.5 mm along the length [10]. Measurements were conducted using a Cr Kα (420) two-angle technique, separating the Kα 1 peak from the doublet using a Cauchy peak profile (Fig. 7).
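As an illustration of locating a peak position with a Cauchy (Lorentzian) profile, the sketch below fits such a profile to synthetic intensity data; the peak parameters, the noise and the use of scipy's curve_fit are assumptions for demonstration, not the actual fitting software or data used in the study.

```python
# Hedged sketch of locating a diffraction-peak position with a Cauchy
# (Lorentzian) profile. The "measured" intensities are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def cauchy(two_theta, amplitude, center, width, background):
    return amplitude / (1.0 + ((two_theta - center) / width) ** 2) + background

two_theta = np.linspace(154.0, 158.0, 81)
intensity = cauchy(two_theta, 1000.0, 156.12, 0.35, 50.0)
intensity += np.random.default_rng(0).normal(0.0, 10.0, two_theta.size)  # noise

p0 = [intensity.max(), two_theta[np.argmax(intensity)], 0.5, intensity.min()]
params, _ = curve_fit(cauchy, two_theta, intensity, p0=p0)
print(f"fitted peak position 2-theta = {params[1]:.3f} deg")
```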
The measurements performed independently in the three directions were combined using Mohr's circle for stress at each depth to calculate the minimum and maximum normal residual stresses and their orientation, defined by the angle φ, which was taken to be a positive angle counterclockwise from the longitudinal axis of the cylinder. Figure 8 illustrates the results, showing the maximum and minimum principal residual stress profiles and their orientation relative to the longitudinal direction. The maximum stresses are tensile at the surface, in excess of 140 MPa, dropping rapidly into compression at a nominal depth of 0.005 mm. The maximum stress returns into tension at depths exceeding 0.025 mm and remains in slight tension to the maximum depth of 0.1 mm examined. The minimum residual stress is compressive, in excess of −480 MPa at the turned surface, and diminishes rapidly in magnitude with depth to less than −138 MPa at a depth of 0.013 mm. The minimum stress remains slightly compressive and crosses into tension only at the maximum depth examined. The orientation of the maximum stresses is almost exactly in the circumferential direction (90° from the longitudinal) for the first two depths examined. For depths of 0.013 mm to the maximum depth of 0.1 mm, the maximum stress is within approximately 10° of the longitudinal direction.
Experimental Results and Discussion
To measure the residual stresses, we began by placing the parts on the measuring system pad. The detector arm was focused on the measured area (Fig. 9). Stresses were measured in the axial and radial direction components. After the parts were prepared, the X-ray apparatus irradiated the previously selected point on the part. The device measured the stress to a depth of 12 μm over an angular range of approximately 123°-171° [11]. The computer then presented the results as graphs, from which the shear and residual stress values were calculated and processed into tables.
In the turning roughing operation, when using a cutting speed of vc = 100 m/min, the measured axial residual stress was compressive in nature, with a value of around −360 MPa, and the radial residual stress at the same cutting speed showed a value of −175 MPa of the same compressive character. With increasing cutting speed there was a reduction of the residual stresses in the axial and radial directions. At vc = 150 m/min the stress in the axial direction was reduced by 40% compared with vc = 100 m/min, while in the radial direction it decreased by 30%. When using vc = 200 m/min, the stress in the axial direction decreased by 50% relative to the value at vc = 100 m/min, while in the radial direction the compressive stress changed to a tensile stress, a change of 250% (Fig. 10). In the turning finishing operation, the residual stresses were compressive in nature in all cases. When using a cutting speed of vc = 100 m/min, the measured axial stress was −850 MPa, whereas the radial residual stress at the same cutting speed showed a value of −580 MPa. At a cutting speed of vc = 150 m/min, the stress in the axial direction increased by 2% compared with vc = 100 m/min, whereas in the radial direction the stress increased by 20%. When using vc = 200 m/min, the stress in the axial direction increased by 250% relative to the value at vc = 100 m/min, whereas in the radial direction the residual stress increased by 25% (Fig. 11).
The component is used as a rolling tool whose surface works the material up to its ductile strength. In places where there is tensile residual stress, there is a negative effect on the functional areas of components; such stresses have a great impact on the propagation of cracks in components. Compressive stress arises because the distances between the atoms are very small, so they tend to draw together and act against cracks in the workpiece. The compressive stress should not reach excessively high values, for example 2 GPa, since at such levels crevices and cracks can arise in the surface. The optimal value of the residual stress lies in the range of 500-700 MPa. Residual stresses in the present case should not exceed 1000 MPa, and the most extreme value for steel is 2550 MPa. In this experiment, the residual stresses in some places were evaluated as 2 GPa. Such a large residual stress was caused by previous use of the rolling mandrel, which generated great forces that had a large impact on the results of the residual stress measurements in this experiment (Fig. 12).
The results appear to indicate that stresses within approximately 0.013 mm of the sample surface are dominated by machining, which resulted in a maximum stress direction essentially parallel to the cutting action. At greater depths, the stress distribution may be governed not so much by the machining as by stresses that may have been present due to forging or heat treatment.
Conclusion
Distributions of residual stresses on the surface and along the depth of the machined steel samples have been presented. The stress distribution for the cylindrical sample is characterised by compressive stresses on the surface and by tensile stresses in the subsurface. Residual stress distributions for samples with a circular surface are more complicated.
Conclusions about the reversal of the compressive residual stress at the surface of the sample into compressive and tensile stresses at depth, drawn from the analysis of the equilibrium equations, have been confirmed experimentally.
This work is related to the project with the University of Zilina, OPVaV number 2009/2.2/04-SORO (26220220101), named "Intelligent system for non-destructive evaluation technologies for functional properties of components by X-ray diffractometry". The main objective is to transfer the new non-destructive technologies and knowledge to industrial practice in the evaluation of the functional properties of surface and subsurface layers by non-destructive techniques.
(Fig. 11: Graph of residual stresses in turning finishing.) | 5,682.6 | 2013-06-30T00:00:00.000 | [ "Materials Science", "Engineering" ] |
Molecular cloning, mapping to human chromosome 1q21-q23, and cell binding characteristics of Spα, a new member of the scavenger receptor cysteine-rich (SRCR) family of proteins.
CD5 and CD6, two type I cell surface antigens predominantly expressed by T cells and a subset of B cells, have been shown to function as accessory molecules capable of modulating T cell activation. Here we report the cloning of a cDNA encoding Spα, a secreted protein that is highly homologous to CD5 and CD6. Spα has the same domain organization as the extracellular region of CD5 and CD6 and is composed of three SRCR (scavenger receptor cysteine rich) domains. Chromosomal mapping by fluorescence in situ hybridization and radiation hybrid panel analysis indicated that the gene encoding Spα is located on the long arm of human chromosome 1 at q21-q23 within contig WC1.17. RNA transcripts encoding Spα were found in human bone marrow, spleen, lymph node, thymus, and fetal liver but not in non-lymphoid tissues. Cell binding studies with an Spα immunoglobulin (Spα-mIg) fusion protein indicated that Spα is capable of binding to peripheral monocytes but not to T or B cells. Spα-mIg was also found to bind to the monocyte precursor cell lines K-562 and weakly to THP-1 but not to U937. Spα-mIg also bound to the B cell line Raji and weakly to the T cell line HUT-78. These findings indicate that Spα, a novel secreted protein produced in lymphoid tissues, may regulate monocyte activation, function, and/or survival.
Leukocyte function is regulated by a discrete number of cell surface and secreted antigens that govern leukocyte activation, proliferation, survival, cell adhesion and migration, and effector function. Among the proteins that have been shown to regulate leukocyte function are members of the SRCR family. This family of proteins can be divided into two groups based upon the number of cysteine residues per SRCR domain, intron-exon organization, and domain organization (1). Group B includes the cell surface proteins CD5 (2) and CD6 (3), which are predominantly expressed by thymocytes, mature T cells, and a subset of B cells, WC1 (4,5), which is expressed by γδ T cells in cattle, and M130 (6), which is expressed by activated monocytes. Of these, only CD5 and CD6 have been studied extensively. Monoclonal antibody (mAb) cross-linking studies suggest that both CD5 and CD6 can function as accessory molecules capable of modulating T cell activation (7,8). The role of CD5 and CD6 in the regulation of T cell function is further supported by the finding that following T cell activation, Tyr residues in the cytoplasmic domain of these two proteins are transiently phosphorylated. This provides a molecular mechanism whereby the cytoplasmic domains of both CD5 and CD6 can interact with intracellular SH2-containing proteins involved in signal transduction (9). Furthermore, phenotypic analysis of a CD5-deficient murine strain showed that its T cells are hyper-responsive to stimulation (10,11), suggesting that CD5 expression is required for the normal regulation of T cell receptor (TCR)-mediated T cell activation.
CD5 and CD6 are structurally the most closely related members of the group B SRCR family of proteins (1). They are both type I membrane proteins whose extracellular region is composed of three SRCR-like domains, each containing eight cysteine residues that are thought to form intrachain disulfide bonds. The extracellular domains of CD5 and CD6 are anchored to the cell membrane via a hydrophobic transmembrane domain and a long cytoplasmic domain. It has been reported that CD5 binds to the B cell antigen CD72 (12) and to CD5L (13), an antigen which is transiently expressed by activated B cells and has yet to be fully characterized. CD6 has been shown to bind to the leukocyte activation antigen ALCAM (activated leukocyte cell adhesion molecule). Unlike CD5 and CD6, which are closely related, CD72 and ALCAM are not homologous. CD72 is a type II membrane protein that is homologous to the C-type lectins; however, a carbohydrate binding activity for CD72 has not been reported. ALCAM is a type I membrane protein whose extracellular region is composed of five Ig-like domains (14). The regions of CD5 and CD72 involved in their interaction have not been identified. Studies with truncated forms of both CD6 and ALCAM have shown that the interaction between these two proteins is primarily mediated by the membrane proximal SRCR domain of CD6 and the aminoterminal Ig-like domain of ALCAM (15,16).
Here we report the cloning, chromosomal mapping, and cell binding properties of Spα, a novel member of the SRCR family of proteins. Spα is expressed in lymphoid tissues and has the same domain organization as the extracellular regions of both CD5 and CD6. Binding studies with an Spα-Ig fusion protein were carried out to identify cells expressing a putative receptor for Spα.
Cloning of Spα
An expressed sequence tag (EST) database screen for potential new SRCR domain-containing genes revealed a novel gene in EST clone number 201340. The partial (Spα) clone from a fetal liver-spleen library was purchased from Research Genetics and used to screen a human spleen library (Clontech HL5011a) by plaque hybridization for full-length cDNAs. Approximately 1 × 10^6 clones were plated onto twenty 150-mm plates and transferred to Hybond N+ nylon membranes (Amersham Life Science, Inc., rpn132b) as per manufacturer instructions. Membranes were UV cross-linked and hybridized by the method of Church (17). The hybridization probe was a radiolabeled EcoRI fragment digested from the EST clone 201340. The EcoRI fragment contained base pairs 1-1594 and was radiolabeled with [32P]dCTP (Amersham Life Science, Inc.) using a random labeling kit (Boehringer Mannheim). Membranes were washed at 60°C using high stringency wash buffer and exposed to Kodak x-ray film (X-Omat AR). A subset of positive plaques was then replated and rescreened. After three rounds of screening, ten individual clones were obtained, of which two were full-length. Both of these clones were sequenced in both directions using the dideoxy method.
Northern Blot
One cell line and two tissue Northern blots were purchased from Clontech (Nos. 7757-1, 7766-1, and 7754-1, respectively) and hybridized in 50% formamide at 42°C according to manufacturer instructions. Radiolabeled Northern blot probes were prepared as outlined above. mRNA normalization probes were either GAPDH or β-actin. Positive blots were washed under high stringency conditions. Blots were exposed to Kodak x-ray film (X-Omat AR).
Chromosomal Mapping
Somatic Cell Hybrids and PCR Amplifications of Spα-Human Spα was localized to a human chromosome using a panel of 17 human-Chinese hamster hybrid cell lines derived from several independent fusion experiments (18). PCR primers used to amplify the human Spα gene sequence were derived from the 3′-untranslated region: 5′-GAGTCTGAACACTGGGCTTATG (forward primer at nucleotides 1231-1252) and 5′-GTAATGGTCTGCACATCTGACC (reverse primer at nucleotides 1431-1452). The PCR conditions were 94°C for 3 min, followed by 35 cycles of 94°C for 30 s, 55°C for 40 s, and 72°C for 1 min, followed by 72°C for 7 min.
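As a quick sanity check on the primer coordinates above, the expected amplicon size can be computed directly from the inclusive nucleotide positions; the short sketch below is a hypothetical helper (not part of the original protocol) and simply reproduces the 222-bp product size reported under Results.

```python
def amplicon_length(fwd_start: int, rev_end: int) -> int:
    """Length of a PCR product spanning the 5' end of the forward primer
    to the 3' end of the reverse primer (inclusive coordinates)."""
    return rev_end - fwd_start + 1

# Forward primer starts at nucleotide 1231, reverse primer ends at 1452.
print(amplicon_length(1231, 1452))  # -> 222, matching the 222-bp product in Results
```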
Two human radiation hybrid (RH) mapping panels, GeneBridge 4 (Whitehead/MIT Genome Center) and Stanford G3 (Stanford Human Genome Center), were used to confirm and further define the localization of the Spα gene. Typing was carried out using the primers and PCR conditions described above. Fluorescence in Situ Hybridization-The chromosomal location of the human Spα gene was independently determined by fluorescence chromosomal in situ hybridization (FISH) (22). Briefly, a genomic DNA clone containing a 2.4-kbp insert of the human Spα genomic sequence in a TA cloning vector was labeled with biotin-16-dUTP by nick-translation using commercial reagents (Boehringer Mannheim). Labeled probe was hybridized at a concentration of 300 ng/µl/slide to pretreated and denatured human lymphocyte metaphase chromosomes. Hybridizations were performed in the presence of salmon sperm DNA and human genomic DNA.
After hybridization at 37°C overnight, the slides were washed in 50% formamide in 2× SSC at 42°C. To detect and amplify specific hybridization signals, slides were reacted with avidin-FITC (Vector Laboratories), washed, and treated with biotinylated goat anti-avidin D antibody (Vector Laboratories) followed by another round of incubation with avidin-FITC. Metaphase chromosomes were analyzed under an Axiophot (Carl Zeiss, Inc.) epifluorescence microscope. Specific hybridization signals were counted only when the fluorescence staining was observed on both chromatids of a chromosome. Digital images were generated using a cooled charge-coupled device camera (Photometrics PM512)/Macintosh computer system, with software supplied by Tim Rand (Yale University). Photographs were produced from PICT files.
Fusion Protein Constructs
DNA corresponding to the translated region of Spα was obtained by PCR using full-length Spα cDNA as template. Primers were designed with restriction sites enabling Spα C-terminal ligation to the hinge, CH2, and CH3 domains of murine IgG2a (see Fig. 4). All constructs were sequenced to verify correct sequence and correct reading frames. Spα-mIg (in the CDM8 expression vector) was transiently expressed in COS cells (23). The soluble Spα-mIg was purified from the COS cell supernatant by protein A column chromatography. Following protein A binding, the column was washed extensively with PBS (pH 7.0) and eluted with 4.0 M imidazole (pH 8.0) containing 1 mM each MgCl2 and CaCl2. Proteins were dialyzed extensively against PBS.
Cell Culture
Human cell lines were grown to 0.5-0.9 × 10^6 cells/ml in Iscove's modified Dulbecco's medium (Life Technologies, Inc.) containing 10% fetal bovine serum. Human peripheral blood T cells, B cells, and monocytes were separated by counterflow centrifugal elutriation.
Flow Cytometry
Approximately 5 × 10^5 cells were incubated on ice for 1 h in 100 µl of stain buffer (PBS containing 2% bovine serum albumin fraction V, 0.05% sodium azide, 1 mM each MgCl2 and CaCl2) containing 20 µg/ml Spα-mIg fusion protein and 200 µg/ml human IgG (Sigma I-8640). Cells were then washed with stain buffer, centrifuged, and aspirated. Following a second wash, cells were incubated on ice for 1 h in 100 µl of stain buffer containing 1:100 diluted FITC-labeled rabbit anti-mouse IgG2a antibody (Zymed Laboratories, Inc. 61-0212). Cells were then washed twice and resuspended in 0.5 ml of stain buffer. Samples were run on a Becton Dickinson FACScan. Prior to running samples, propidium iodide was added to a final concentration of 1 µg/ml. Dead cells were identified as propidium iodide-positive and were gated out and not used in the analysis. Mouse antibodies specific for CD3 (64.1, generously donated by Jeff Ledbetter, Ph.D., T cell, Bristol-Myers Squibb), CD19 (B cell, IOB4a Amak 1313), and CD14 (monocytes, MY4 Coulter 6602622) were used to verify elutriated cells. Second-step staining for these antibodies was an FITC-labeled goat anti-mouse IgG (Biosource 4408).
RESULTS
Cloning of Spα, a New Member of the SRCR Family of Proteins-We have taken two approaches to isolating novel members of the SRCR family of proteins. The first involves a low stringency DNA hybridization technique, and the other involves a screening of the DNA databases. This latter approach resulted in the identification of a cDNA fragment from the human EST database that showed extensive sequence homology with members of the SRCR group B proteins, including CD5, CD6, M130, and WC1. The EST sequence (from fetal liver-spleen) was used as a probe to screen a cDNA library prepared from mRNA isolated from a human spleen. This resulted in the isolation of ten cDNA clones. The two longest clones, 1804 and 2152 bp, respectively, were sequenced in both orientations and found to contain a long open reading frame that encoded a 347-amino acid polypeptide, which had the features of a secreted protein (Fig. 1A). This protein was named Spα. Spα contains 19 hydrophobic amino acids at its amino terminus that act as a secretory signal sequence and are removed from the mature protein, as determined by N-terminal sequencing of the Spα immunoglobulin fusion protein produced by COS cells. This secretory signal sequence is followed by three cysteine-rich domains, each approximately 100 amino acids in length. These domains are significantly homologous to the cysteine-rich domains found in the SRCR group B family of proteins (Fig. 1B) (1). The third SRCR domain of Spα is followed by an in-frame stop codon. The predicted amino acid sequence of Spα contained no putative N-linked glycosylation sites. The two clones differ in the length of their 3′-untranslated regions, where one clone is 348 bp longer. The shorter clone has a poly(A) sequence starting 18 bases downstream from a consensus polyadenylation sequence. The longer clone has two polyadenylation consensus sequences; the first one is identical to the one found in the shorter clone, and the second is located 351 bp downstream from the first site. The longer clone also contains three adenylate/uridylate-rich elements (AREs) in the 3′-untranslated sequence located between the two polyadenylation sites.
Among the SRCR group B members, the SRCR domain organization of Spα most closely resembles that of CD5 and CD6 (Fig. 1C). The Gene Encoding Spα Is Located on Chromosome 1-Genomic DNAs from a panel of 17 human-Chinese hamster hybrid cell lines were analyzed by PCR using primers that specifically amplified human Spα sequence. The expected 222-bp PCR products containing the 3′-untranslated region sequence were obtained from human control DNA and from hybrid cell lines that had retained human chromosome 1. As shown in Table I, except for chromosome 1, all other human chromosomes were excluded by this panel. These results indicated that the human Spα gene is located on chromosome 1. Fluorescence in situ hybridization confirmed the Spα assignment to chromosome 1 and refined the physical map position. Based on the localization of the signal on R-banded chromosomes in 22 metaphase spreads, the human Spα gene was assigned to human chromosome bands q21-q23 (Fig. 2).
To confirm this assignment and to map the Spα locus more precisely, two human RH mapping panels were typed by PCR amplification with the Spα-specific primers. In the Stanford G3 mapping panel, 9 of 83 RH cell lines were positive for the human-specific Spα gene signal. By maximum likelihood analysis, Spα was placed 45.8 centirays (cR10000) from the STS marker D1S3249. In the GeneBridge 4 mapping panel, 30 of 93 RH cell lines were positive, and Spα was placed 3.0 cR3000 and 3.1 cR3000 from the chromosome 1 markers WI-8330 and CHLC.GATA43A04, respectively. The order of markers in this region from centromere to telomere is D1S305-WI-8330-Spα-CHLC.GATA43A04-D1S2635. D1S305, WI-8330, CHLC.GATA43A04, and D1S2635 are known markers in the WC1.17 contig (Whitehead Institute/MIT Center for Genome Research), while D1S3249 and D1S2635 are clustered as chromosome 1 bin 69 in the Stanford Human Genome Center RH map. A more distal marker, D1S196, which is in Stanford Human Genome Center chromosome 1 bin 75 and the WC1.17 contig, was previously mapped to the q22-q23 region (24). These results are consistent with our FISH mapping data that placed Spα at q21-q23. The insertion of Spα into the linkage map will enable the evaluation of this locus as a candidate for genetic disorders.
Spα Is Produced by Lymphoid Tissues-RNA blot analysis using a Spα cDNA fragment as a probe indicated that mRNA encoding Spα is expressed in the spleen, lymph nodes, thymus, bone marrow, and fetal liver but not in peripheral blood leukocytes (PBL) or the appendix (Fig. 3). Bands hybridizing to Spα were also not detected in prostate, testis, uterus, small intestine, and colon (separate blot, data not shown). In all cases, tissues expressing mRNA transcripts encoding Spα expressed three hybridizing transcripts. The three bands in the spleen (Fig. 3) are seen with a shorter film exposure. These transcripts are approximately 2.4, 2.1, and 1.8 kbp in length. The 1.8- and 2.1-kbp transcripts correspond in length to the two longest cDNAs isolated from the spleen cDNA library. Presently there is no information as to the nature of these transcripts; however, the finding that two of the cDNAs which we isolated have sizes that are consistent with those seen on the RNA blot suggests that they may all encode Spα but differ from one another in the length of their untranslated regions. It should be noted that we cannot exclude the possibility that one or more of these transcripts may encode closely related proteins.
In an effort to determine which cells might produce Sp␣, we have analyzed several cell lines by Northern blot. The RNA message for Sp␣ was not detected in the following cell lines: HL60, K562, Raji, Molt4, A549, SW480, GA361, HeLa S3, and peripheral blood leukocytes (data not shown).
Binding of Spα-mIg to Myeloid Cell Lines and Monocytes-Previously, we had successfully used an Ig fusion approach to identify cells expressing a CD6 ligand (25). These studies eventually led to the isolation of a cDNA encoding a CD6 ligand named ALCAM (14). The successful application of this technique in the isolation of a CD6 ligand and the characterization of the CD6-ALCAM interaction led us to use the same approach to identify cells that express Spα receptors. We prepared a full-length Spα murine IgG2a (Spα-mIg) fusion protein by transient expression in COS cells (Fig. 4).
We began a systematic examination of the ability of Spα-mIg to bind to human cell lines using flow cytometry. We observed that the myeloid cell line K-562 bound to Spα-mIg but not to a control protein (WC1-mIg) containing the amino-terminal three SRCR domains of bovine WC1 fused to the same constant domain of murine IgG2a (Fig. 5A, panel A).
Binding of Spα to the K-562 cells was concentration-dependent and saturable (Fig. 5B). Spα-mIg also displayed weaker binding to the myeloid cell line THP-1 (Fig. 5A, panel B) but not to U-937 cells (Fig. 5A, panel C). Binding of Spα-mIg was also observed on the lymphoma B cell line Raji (Fig. 6, panel A) and on the T cell line HUT-78 (Fig. 6, panel C). Binding was not seen with the control protein on these two cell lines (Fig. 6, panels B and D).
These observations led us to examine whether the Spα-mIg fusion protein could bind peripheral blood mononuclear cells. As shown in Fig. 7, Spα-mIg (panels A and D), but not WC1-mIg (panels B and E), bound to peripheral blood monocytes. Binding of Spα-mIg was not seen on elutriated peripheral blood T cells (Fig. 8, panels A and D) nor on elutriated B cells (data not shown). The binding of Spα-mIg to elutriated monocytes from different donors could always be detected but showed some degree of variability (Fig. 7, panels A and D).
DISCUSSION
We have been interested in studying the structure and function of CD5 and CD6 and their regulatory role in the immune system. A large body of in vitro data suggests that these proteins play an important role in regulating T cell activation and, in the case of CD6, T cell development. The isolation and functional characterization of novel proteins that are closely related to CD5 and CD6 might provide further insights into the function and structure of this class of proteins. We screened the human EST database for cDNA fragments that encoded polypeptides homologous to CD5 and CD6, and identified a cDNA fragment encoding Spα. Analysis of full-length cDNA clones encoding Spα suggests that Spα is a secreted protein that has the same domain organization as the extracellular region of CD5 and CD6. However, a detailed comparison of the amino acid sequence of the SRCR domains of Spα with all members of the SRCR protein family revealed a closer homology to WC1 and M130. This suggests that Spα may be more closely related to WC1 than to CD5 or CD6. Further evidence that points to a more distant evolutionary relationship between Spα and CD5 or CD6 than that between CD5 and CD6 comes from the finding that the genes encoding CD5 and CD6 are found in close proximity on chromosome 11 (26-29), whereas the gene encoding Spα is located on chromosome 1. Presently there is no information on the genomic localization of the human equivalent of WC1 or M130.
The subgroup of SRCR family members which contains CD5, CD6, WC1 and M130 (Group B) can be distinguished from other members of the family based on the number of cysteine residues contained within the SRCR domains and the observation that the extracellular domains of each of these proteins are composed exclusively of SRCR domains. More recently, analysis of the genomic organization of the genes encoding some of the members of this subfamily has indicated that a third distinguishing feature of this group of proteins is that each of its SRCR domains is encoded by a separate exon (27,30,31). This is in contrast to the type I macrophage scavenger receptor and related proteins (Group A). The SRCR domains of group A proteins have fewer Cys residues (six instead of eight), and each SRCR domain is encoded by two exons. Preliminary data on the genomic organization of Spα indicate that the second SRCR domain is encoded by a single exon. Based on these criteria, we propose that Spα be considered a member of the SRCR Group B family of proteins.
RNA blot analysis indicates that transcripts encoding Spα are exclusively expressed in lymphoid tissues. However, it appears that leukocytes do not express this protein. This finding indicates that Spα may be produced by specialized epithelial and/or endothelial cells in lymphoid tissues. The observation that Spα is expressed in bone marrow, thymus, and fetal liver, as well as in the spleen and lymph nodes, implicates this protein in processes responsible for both the development and maintenance of the lymphoid compartment. Studies are currently underway to identify the cells that make this protein and the factors that are involved in regulating its expression. The Northern blot probed with Spα showed three bands. Based on our analysis of two different cDNAs encoding Spα, it appears that at least two of these transcripts correspond to mRNAs encoding Spα and differ in the length of their 3′-untranslated regions. We also observed a significant difference in the 3′-untranslated region of these Spα mRNAs. We found that the longer clone contained three consensus ARE elements (AUUUA). ARE elements are located within the 3′-untranslated region of mRNAs and have been found to be the most common determinant of RNA stability (32,33). Messenger RNAs encoding cytokines and transcription factors, among others, contain these elements, which provide an additional mechanism for the regulation of protein expression by directing the stability and, therefore, half-life of the mRNA encoding the protein. The finding that at least one of the mRNAs encoding Spα contains ARE motifs suggests that the expression of this protein might be tightly regulated.
Preliminary studies designed to identify cells that bind Spα and are the target of its activity revealed that some resting myeloid cell lines, as well as peripheral blood monocytes, are capable of binding Spα. Spα-mIg was also found to bind to the B cell line Raji and to the T cell line HUT-78. These studies were carried out using an Spα immunoglobulin fusion protein, and thus the possibility existed that the interaction between this protein and the myeloid cell lines and monocytes, which are known to express high levels of Fc receptors, was mediated via the Ig portion of the molecule rather than the Spα moiety. This is unlikely for the following reasons: 1) two Ig fusion control proteins, WC1-mIg (an SRCR Group B member) and human ALCAM-mIg, showed no binding; and 2) the interaction between Spα-mIg and the myeloid cell lines and peripheral blood monocytes was detected in the presence of a vast excess of human IgG (up to 2 mg/ml) in the binding studies.
The isolation of cDNAs encoding Spα, the preparation of Spα immunoglobulin fusion proteins, and the identification of cells that express putative receptors will provide the basis for future studies on the structure and function of this new member of the SRCR family of proteins. The finding that this protein is expressed in lymphoid organs involved in the development of the lymphoid compartment as well as in immune surveillance, in conjunction with the observation that peripheral blood monocytes are capable of binding Spα, suggests that this protein may play an important role in regulating the immune system. | 5,605.6 | 1997-03-07T00:00:00.000 | [
"Biology"
] |
Error Performance Analysis of Access Point-based Reconfigurable Intelligent Surfaces in the presence of Gaussian-plus-Laplacian Additive Noise
In this paper, we investigate the error performance of access point-based reconfigurable intelligent surfaces (AP-RISs) under a Rayleigh frequency-flat slow fading channel with path loss in the presence of Gaussian-plus-Laplacian additive noise. Since the additive noise includes an impulsive Laplacian component, it describes a scenario that is more realistic in practice. A closed-form expression of the average bit error probability (ABEP) is derived and validated by simulation results. The ABEP formulation employs an approximation of the sum of Rayleigh random variables and agrees well with simulation results for arbitrary surface sizes. However, the ABEP expression takes relatively long to evaluate due to the multiple computations of the confluent hypergeometric function it requires. To address this, a simplified expression of the ABEP is formulated by employing an asymptotic cumulative distribution function representation of the Gaussian-plus-Laplacian noise. The simplified ABEP also agrees well with simulation results. An asymptotic analysis of the ABEP shows that the asymptotic diversity order is not affected by the Laplacian component. Finally, the ABEP formulations are used to validate the error performance of an AP-RIS-assisted two-way relaying network under the same channel conditions. Overall, the investigations in this paper demonstrate the vulnerability of RISs to this type of noise and highlight the need for the design of suitable mitigation techniques.
I. INTRODUCTION
THE use of smart propagation in next-generation networks is currently showing much promise in the research community, and future releases of fifth- and sixth-generation wireless communications technology may exploit these concepts. A very recent example of smart propagation is the phase-adjustable element reconfigurable intelligent surface (RIS), which has been proposed as a subset of RISs or intelligent reflecting surfaces [1]. An RIS is able to intelligently modify an impinging electromagnetic wave to enhance communications system objectives, including, but not limited to, reliability, capacity, energy and spectrum efficiency [1], [2]. The generic RIS is made up of a large number of low-cost and energy-efficient reflecting elements. Elements are associated with an adjustable parameter that may be software-defined, for example, amplitude, phase, frequency or polarization. Phase-adjustable RISs are very attractive because they can be passive, and hence low cost, and can enable the superposition of several coherent reflected signals at the receiver [1]-[3]. This ensures that the average signal-to-noise ratio (SNR) increases proportionally to the square of the number of RIS elements, thus significantly increasing the SNR, especially for large-element RISs. Some very recent contributions in the literature are as follows. Since co-channel interference can severely degrade the received signal, its effect on an RIS-assisted dual-hop mixed free-space optical-radio frequency (RF) communication system has been considered [4]; it is further demonstrated that the system still gains significant performance enhancement compared to its traditional counterpart. In [5], an RIS-assisted Alamouti scheme which employs only a single RF signal generator at the transmitter is proposed, and it is shown that the diversity order is preserved compared to the classical Alamouti scheme. Furthermore, an RIS-assisted and index modulation-based vertical Bell Labs layered space-time scheme is proposed and supported by optimal and sub-optimal detectors. In [6], a simple yet effective model for RIS scattering is proposed and used to formulate an expression for the path loss of the transmitter-RIS-receiver channel. The potential of RISs in anti-jamming communications is investigated in [7] by considering an aerial RIS (ARIS) that is deployed in the air for jamming mitigation; optimization frameworks for ARIS deployment and passive beamforming are proposed, and results show that legitimate transmissions are effectively enhanced. One of the key open challenges to the realization of RISs is the acquisition of channel state information [3], since channel knowledge is a critical component in the transceiver design of RIS-assisted communications if its full potential is to be achieved. Base station (BS)-User RIS-assisted communication requires channel knowledge pertaining to a cascaded channel, since estimation of both the User-RIS and BS-RIS links is required [3]. On this note, several solutions involving active channel sensors, channel decomposition and structural learning are currently being investigated [3]. Meanwhile, an access point (AP)-based RIS (AP-RIS) was conceptualized [8] to allow both information transmission and phase adjustment, while retaining the use of passive and low-cost reflecting elements. An AP-RIS is realized by placing an RF source in very close proximity to an RIS. This inherently eliminates the cascaded channel, and only knowledge of the User-RIS link is required; hence, the AP-RIS naturally holds much promise in terms of reduced system complexity.
Conventionally, wireless communication systems are assumed to be affected by only Gaussian noise. However, in practical scenarios or particular environments, such as metropolitan areas, manufacturing plants and indoor settings, the noise can have an impulsive component [9]. This component of the noise can occur in transients or bursts and is sporadic/non-contiguous in nature, resulting in the serious degradation of reliability or error performance. Man-made and natural sources that are responsible for generating the impulsive component in wireless communication systems include, but are not limited to, ignition noise in motor vehicles, switching transients in power lines, fluorescent lighting, multiple-access interference and lightning discharges, resulting in distributions with positive excess kurtosis (heavy tails) [10]. These distributions are more accurately described by models such as the symmetric alpha-stable (SαS), which includes the Cauchy (α = 1) distribution, the Middleton Class A and Class B, the generalized Gaussian, the Laplacian and the Bernoulli-Gaussian models [9]-[11]. In [11], an analysis of diversity-reception schemes in the presence of additive noise including an impulsive component is presented, where the additive noise assumes a Gaussian-plus-Laplacian model. The Laplacian distribution is assumed over other impulsive noise models due to its convenient analytical properties. Motivation and contributions: Impulsive noise can have a significant deleterious effect on the error performance of wireless communication systems [9]-[11]. In the current open literature, there has been no investigation into the effect of additive noise with an impulsive component on the error performance of RIS-assisted communications. On this note, since the AP-RIS holds much promise due to its low system complexity, we consider the vulnerability of its error performance to additive noise with an impulsive component. Due to the convenient analytical properties of the Laplacian distribution, which may be used to model the impulsive noise component, we consider a mixture of Gaussian and Laplacian additive noise and study its effect on the error performance of AP-RIS-assisted communications. Based on the above, the contributions of this paper are as follows: a) We derive the theoretical average bit error probability (ABEP) of an AP-RIS for a Rayleigh frequency-flat slow fading channel with path loss in the presence of Gaussian-plus-Laplacian additive noise. The formulation employs an approximation of the sum of Rayleigh random variables (RVs) and therefore agrees well for arbitrary RIS sizes. b) An asymptotic analysis is presented and includes the formulation of a simplified ABEP expression using an asymptotic representation of the cumulative distribution function (CDF) of the additive noise. c) The formulated ABEPs are further used in the validation of the error performance of a two-way relaying network under the same channel conditions.
II. SYSTEM MODEL AND PRELIMINARIES
Consider an AP-RIS transmitter and receiver as depicted in Fig. 1. The RIS at the transmitter is equipped with N elements, and the transmitter and receiver each make use of a single RF antenna. The antenna at the transmitter is located sufficiently close to the RIS such that there is no small-scale fading channel between the antenna and the RIS [8]. (In [8], [12], analyses of the error performance of AP-RISs under a frequency-flat Rayleigh fading channel in the presence of Gaussian-only additive noise have been presented. Unlike the analysis in [8], which makes the simplifying assumption of large RISs, the analysis in [12] is valid even for small RISs.) Assuming a Rayleigh frequency-flat slow fading channel with path loss in the presence of Gaussian-plus-Laplacian additive noise at the receiver, the received signal may be defined as

y = √(γ P_L) (Σ_{i=1}^{N} h_i e^{jφ_i}) ξ + I + η,    (1)

where the RV h_i is distributed as CN(0, 1) with Rayleigh-distributed magnitude α_i and uniformly distributed phase θ_i, such that h_i = α_i e^{jθ_i} represents the channel between the receive antenna and the i-th, i ∈ [1 : N], RIS element; φ_i represents the adjustable phase at the i-th RIS element and is set as φ_i = −θ_i so as to maximize the received SNR [8]; ξ is the message-carrying binary phase shift keying (BPSK) symbol with E{ξ²} = 1; γ is the average transmit power; and P_L is the total path loss [5], [6], which is defined in Section IV. The Laplacian and Gaussian additive noise components are represented by I and η, respectively. I and η are mutually independent and have probability density functions (PDFs) given by (2.1) and (2.2), respectively:

f_I(x) = (1/(2c)) exp(−|x|/c),    (2.1)
f_η(x) = (1/(√(2π) σ)) exp(−x²/(2σ²)),    (2.2)

where c > 0 is the scale parameter of the Laplacian distribution and σ > 0 is the scale parameter (standard deviation) of the Gaussian distribution.
In the upper part of Fig. 2 (refer to the top of the next page), we plot the empirical versus analytical PDFs of I for c = 0.8, 1, 2 and 4. The analytical PDF of the Gaussian distribution given by (2.2) with σ = 1 is also depicted and serves to give an indication of the positive excess kurtosis of the Laplacian-distributed noise. It is evident that the empirical and analytical PDFs of I agree well and, as c increases from c = 0.8 to c = 4, the heaviness of the tails increases, i.e. the noise takes on extreme values with increasing frequency. The PDF of the total additive noise J = I + η was determined in [13] and is given by

f_J(x) = (1/(4c)) exp(σ²/(2c²)) [exp(−x/c) erfc(σ/(√2 c) − x/(√2 σ)) + exp(x/c) erfc(σ/(√2 c) + x/(√2 σ))].    (3)

In the lower part of Fig. 2, the empirical versus analytical PDF plots of J for c = 0.8, 1, 2 and 4, assuming σ = 1, are shown and agree well. Once again, the Gaussian PDF (σ = 1) is depicted. While the Gaussian-plus-Laplacian PDF peaks are less sharp, the positive excess kurtosis remains evident, as expected. Since there is a high probability of extreme Gaussian-plus-Laplacian noise amplitudes occurring, a potentially significant deleterious impact of this noise on error performance can be anticipated. In the ensuing analysis, we will employ the PDF given by (3). The empirical PDFs were generated from simulated noise using the histogram function hist(·) in MATLAB. A Laplacian RV with scale parameter c was simulated using I = −c × sign(u) ln(1 − 2|u|), where u = −0.5 + rand(·) is a uniformly distributed RV and sign(·), rand(·) are the signum and uniform RV functions, respectively, in MATLAB.
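For readers who wish to reproduce the empirical curves in Fig. 2, a minimal sketch of the noise generator is given below. It mirrors the inverse-CDF recipe quoted above (which uses MATLAB); the Python/NumPy version here is an illustration rather than the authors' code, and the variance check 2c² + σ² is an added sanity test.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplacian_samples(c, size):
    """Laplacian RV with scale c via I = -c*sign(u)*ln(1 - 2|u|), u ~ U(-0.5, 0.5)."""
    u = rng.uniform(-0.5, 0.5, size)
    return -c * np.sign(u) * np.log(1.0 - 2.0 * np.abs(u))

def gaussian_plus_laplacian(c, sigma, size):
    """Total additive noise J = I + eta."""
    return laplacian_samples(c, size) + rng.normal(0.0, sigma, size)

c, sigma, n = 1.0, 1.0, 1_000_000
J = gaussian_plus_laplacian(c, sigma, n)
pdf, edges = np.histogram(J, bins=200, density=True)  # empirical PDF, as in Fig. 2
print("sample variance:", J.var(), "vs. theory 2c^2 + sigma^2 =", 2 * c**2 + sigma**2)
```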
III. ERROR PERFORMANCE ANALYSIS
A. THEORETICAL ABEP
Consider an equivalent received signal model of (1). Then, with BPSK transmission, the ABEP of the AP-RIS system for a Rayleigh frequency-flat slow fading channel with path loss in the presence of Gaussian-plus-Laplacian additive noise may be given as in (4), where f_x(x), the PDF of x, may be approximated as in [14] by (5), and F_J̄(x, σ̄, c̄) is the CDF of J̄ = Ī + η̄, which is defined in [11] by (6). We may then rewrite (4) as (7), which involves integrals of F_J̄(x, σ̄, c̄) and f_J̄(x, σ̄, c̄). Using integration by parts, I_1 is given as in (8) (derived in Appendix A), and the remaining integral of f_J̄(x, σ̄, c̄) may be solved as in Appendix B, where, with the parameters ϕ_1 and ϕ_2 defined therein and using the identity Q(−y) = 1 − Q(y), the result may be written equivalently in terms of the Gaussian Q-function. Applying the trapezoidal rule to Q(·), we obtain (11), where n ≥ 2 is the number of intervals used in the integration; accordingly, we write I_31 as (12). Using [15, Eq. (3.462.1)], (12) is reduced to (13), where the parabolic cylinder function [15] appears, which may be defined in terms of the confluent hypergeometric function 1F1(·; ·; ·) [15]. Similar to the steps used to arrive at (13), we may determine the solution of I_32 in (10) as (16), where β_3 = 0.5ϕ_1. Solving the first term in (7), substituting (8), (9), (10), (13) and (16) into (7), and simplifying, we finally arrive at the approximate ABEP expression in (17).
B. ASYMPTOTIC ANALYSIS
1) Simplified ABEP using an Asymptotic CDF
The ABEP expression derived in (17) requires multiple computations of the confluent hypergeometric function, which is well known to be relatively slow. By evaluating (7) using an asymptotic representation of (6), (17) may be simplified. Setting γ → ∞, it may be verified that (6) may be approximated as in (18). Fig. 3 illustrates the curves of the CDF given by (6) and the asymptotic CDF given by (18). We have considered values of c = 0.8, 1, 2 and 4. Further, we have only considered N = 4, which represents the worst-case setting. It is immediately evident that, at high SNRs, the CDFs match exactly. For example, in the case of c = 0.8, for small values of x the match is very tight from a worst-case SNR of approximately 16 dB, while at higher values of x the match is tight even in the low-SNR region. As c increases from c = 1 to c = 4, the match becomes tighter even at lower values of x. This investigation serves to show that, if we use the asymptotic CDF representation in (18) to derive a simplified ABEP expression, a good match with simulation results can be expected. Based on the above motivation, we may substitute (18) in (7) for F_J̄(x, σ̄, c̄) and arrive at the simplified ABEP in (20). The ABEP in (20) is significantly simpler than (17) to evaluate, since the confluent hypergeometric function is evaluated only once, while it is evaluated N(2k + 1)[3 + 2(n − 1)] times in (17). Based on the previous motivation drawn from Fig. 3, the ABEP in (20) is also expected to agree well with simulation results at moderate-to-high SNRs. Comparison will be drawn in Section IV to demonstrate the accuracy of the expression.
2) Asymptotic ABEP
Since c̄ = c√(γP_L)N and γ → ∞, the second- and higher-order terms become very small and may thus be neglected, yielding the asymptotic ABEP given by (22). The result in (22) will be evaluated in Section IV.
3) Diversity order
Given the average SNR δ, the diversity order may be defined as G_d = lim_{δ→∞} −log(P_e)/log(δ), so that P_e ≈ δ^(−G_d) as δ → ∞. In order to determine the diversity order, we consider γ → ∞. Substituting c̄ = c√(γP_L)N and using δ = γ/σ², (22) may be written as (23). It is immediately evident from (23) that the diversity order is given by G_d = N; consequently, the Laplacian noise component does not affect the asymptotic diversity order attained by the AP-RIS in the presence of Gaussian-only additive noise [12].
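As an illustration of how this diversity order manifests in a BER curve, the short sketch below estimates G_d from the slope between two high-SNR points via G_d ≈ −Δlog10(BER)/Δlog10(δ); the sample points are made up for illustration and are not results from the paper.

```python
import numpy as np

def diversity_order(snr_db_1, ber_1, snr_db_2, ber_2):
    """Slope-based estimate of G_d from two (SNR, BER) points on the high-SNR asymptote."""
    d_log_ber = np.log10(ber_2) - np.log10(ber_1)
    d_log_snr = (snr_db_2 - snr_db_1) / 10.0  # log10(delta_2) - log10(delta_1) from dB values
    return -d_log_ber / d_log_snr

# Made-up points lying on a slope corresponding to N = 4:
print(diversity_order(30.0, 1e-6, 40.0, 1e-10))  # -> 4.0
```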
C. TWO-WAY RELAYING
Two-way relaying employing physical-layer network coding is well known [16] and enables two source nodes to exchange information via a relay node over two transmission phases. Two-way relaying has been considered for wireless communications to achieve spectrum efficiency gains and improved error performance, as well as to extend network coverage, reduce shadowing effects, and increase power efficiency [16], [17]. Consider AP-RIS transceivers at Nodes A and B and a single-antenna transceiver at Node R, as depicted in Fig. 4. Each of the nodes operates in half-duplex mode, and the RISs are each equipped with N elements.
Fig. 4: System model of the AP-RIS-assisted two-way relaying network.
In the multiple access channel (MAC) phase, Nodes A and B transmit their message symbols to Node R. Node R detects the two symbols and, in the broadcast channel (BC) phase, transmits a network-coded version of the message symbols using decode-and-forward to Nodes A and B. Each of the nodes detects the received symbol and then performs network coding with its respective transmitted symbol; hence, the exchange of messages between Nodes A and B is achieved. Perfect transmit synchronization is assumed, and transmission from Node A cannot directly arrive at Node B (or vice versa) due to large-scale fading. These assumptions are consistent with the related literature [16], [17]. Accordingly, assuming a Rayleigh frequency-flat slow fading channel with path loss in the presence of Gaussian-plus-Laplacian additive noise, the received signal at Node R in the MAC phase is given by (24.1), which yields (24.2). The symbol ξ_A(B) is the BPSK symbol emitted from Node A(B) with E{ξ_A(B)²} = 1 and ξ_A(B) ∈ χ, γ_A(B) is the average transmit power at Node A(B), and P_L,A(B) is the total path loss with respect to Node A(B). J_R = I_R + η_R, with the mutually independent Laplacian and Gaussian additive noise components represented by I_R and η_R, respectively. Given complete knowledge of the channel, the symbols ξ_A, ξ_B detected at the relay node are determined as in (25). Given the bit representation b_ξR for the detected symbols ξ_A(B)^R, network coding of the detected bits is applied at the relay. The corresponding BPSK symbol ξ_R ∈ χ with E{ξ_R²} = 1 is then transmitted by Node R in the BC phase. The received signal and the subsequent detection rule (assuming complete knowledge of the channel) at Node A(B) are given by (26.1), (26.2) and (26.3), respectively. In the resulting end-to-end ABEP expression (27), P_e^(A(B)→R) is the error probability at the relay assuming ξ_A(B) is received in error while ξ_B(A) is received correctly, and P_e^(R→B(A)) is the error probability at Node B(A). Assuming each node employs BPSK transmission, the probabilities on the RHS of (27) may be given as P_e in (17); hence, (27) may be simplified as (28). We may also employ the simplified ABEP given by (20) for P_e in (28). Both results will be plotted and compared in Section IV.
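To make the message-exchange logic above concrete, the following minimal sketch illustrates one common realization of decode-and-forward two-way relaying with bit-level (XOR) network coding for BPSK. The XOR mapping, the noiseless links and the per-node hard detector are simplifying assumptions for illustration only and are not taken from the paper's equations (24)-(28).

```python
def bpsk_mod(bit):        # bit in {0, 1} -> BPSK symbol in {+1, -1}
    return 1 - 2 * bit

def bpsk_detect(y):       # hard decision back to a bit
    return 0 if y >= 0 else 1

# MAC phase: Nodes A and B transmit; the relay detects both bits
# (fading and the Gaussian-plus-Laplacian noise are omitted here).
b_A, b_B = 1, 0
b_A_hat_R = bpsk_detect(bpsk_mod(b_A))   # relay's decision on Node A's bit
b_B_hat_R = bpsk_detect(bpsk_mod(b_B))   # relay's decision on Node B's bit
b_R = b_A_hat_R ^ b_B_hat_R              # network-coded bit broadcast in the BC phase

# BC phase: each node detects b_R and XORs it with its own transmitted bit.
recovered_at_A = bpsk_detect(bpsk_mod(b_R)) ^ b_A   # -> b_B
recovered_at_B = bpsk_detect(bpsk_mod(b_R)) ^ b_B   # -> b_A
assert (recovered_at_A, recovered_at_B) == (b_B, b_A)
```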
IV. NUMERICAL RESULTS
In this section, we first present the numerical results for the AP-RIS assuming a Rayleigh frequency-flat slow fading channel with path loss in the presence of Gaussian-plus-Laplacian additive noise. Second, we present the numerical results for AP-RIS-assisted two-way relaying under the same channel conditions. For error performance comparisons, the figure of merit considered is the bit error rate (BER) versus average SNR. We consider the average SNR δ = γ/σ² with γ = γ_A = γ_B = γ_R. Comparisons are drawn at a BER of 10^−5 unless otherwise stated. We consider N = 4, 8, 16, 32 and 64. BPSK modulation is assumed. Values of c = 0.8, 1, 2 and 4 are considered. In all cases, we assume σ = 1. For large-scale fading, the total path loss P_L is defined as in [5], [6], where λ = c/f_c, with c the speed of light, f_c the carrier frequency, r_1 the distance between the transmit antenna and the AP-RIS, and r_2 the distance between the AP-RIS and the receive antenna. For two-way relaying, we consider P_L = P_L,A = P_L,B, with r_1 the distance between the transmit antenna and the AP-RIS at Node A(B) and r_2 the distance between the AP-RIS at Node A(B) and the antenna at Node R. In the following results, we choose f_c = 1.8 GHz, with r_1 = 1 m, r_2 = 9 m or r_1 = 1 m, r_2 = 12 m.
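To complement the analytical curves, a rough Monte Carlo sketch of the AP-RIS BER under Gaussian-plus-Laplacian noise is given below. It assumes the phase-aligned received-signal model of Section II and treats the path loss P_L as a free parameter (the exact path-loss expression of [5], [6] is not reproduced here); it is an illustration under these assumptions, not the simulator used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_ap_ris(snr_db, N=16, c=1.0, sigma=1.0, PL=1.0, trials=200_000):
    """Monte Carlo BER of the phase-aligned AP-RIS link with BPSK and noise J = I + eta."""
    gamma = (10 ** (snr_db / 10)) * sigma**2                   # delta = gamma / sigma^2
    bits = rng.integers(0, 2, trials)
    xi = 1 - 2 * bits                                          # BPSK symbols
    # Phase alignment (phi_i = -theta_i) leaves the sum of N Rayleigh magnitudes.
    h = (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N))) / np.sqrt(2)
    alpha_sum = np.abs(h).sum(axis=1)
    u = rng.uniform(-0.5, 0.5, trials)
    I = -c * np.sign(u) * np.log(1 - 2 * np.abs(u))            # Laplacian component
    eta = rng.normal(0.0, sigma, trials)                       # Gaussian component
    y = np.sqrt(gamma * PL) * alpha_sum * xi + I + eta
    return np.mean((y < 0) != (bits == 1))

for snr_db in (0, 10, 20):
    print(snr_db, "dB ->", ber_ap_ris(snr_db))
```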
A. AP-RISs IN THE PRESENCE OF GAUSSIAN-PLUS-LAPLACIAN ADDITIVE NOISE
In Fig. 5 (refer to the top of the next page), the simulation results and evaluated theoretical ABEP curves given by (17) and (20) are presented. We have also included the MATLAB symbolic math toolbox integrations of (4) with f_x(x) given by (5) and its improved approximation (cf. Eq. (9) in [14]). We have considered the CDF given by (6) for these integrations.
In all settings of c and N , it is evident that the theoretical and simulation results agree well at moderate-to-high SNRs and are valid even for small RISs. More specifically, for c = 0.8 and 1 (Figs. 5(a) and (b)), the results agree well at moderate-to-high SNRs, while for heavier noise tails with c = 2 and 4 (Figs. 5(c) and (d)), simulation and theoretical results increase in tightness and generally agree well across the range of SNRs. However, only in some of these instances do they match at low SNRs. This will be discussed in brief shortly. In all settings of N , the simplified ABEP given in (20) matches the ABEP of (17) and the simulation results very closely for moderate-to-high SNRs for c = 0.8, 1 and for low-to-high SNRs for c = 2, c = 4.
It is also demonstrated that the curves plotted for (4) match the derived ABEP exactly at moderate-to-high SNRs in the cases of c = 0.8 and 1, and at low-to-high SNRs in the cases of c = 2 and c = 4. Furthermore, the improved approximation of f_x(x) presented in [14] demonstrates no further improvement in the accuracy of the ABEP. As mentioned earlier, in some instances it is evident that there is a difference between the ABEP of (17), the simplified ABEP of (20), and the simulation results at low SNRs. At the same time, the curves for (4) and the improved approximation for f_x(x) match the simulation results much more closely at these low SNRs.
Based on this, we can state that the error is not due to the Gaussian-plus-Laplacian noise CDF employed at low values of c. This is further evidenced in the comparisons drawn in Fig. 2. Instead, since the error at low SNRs is much more pronounced for (17) and significantly less for (20), we can infer that such error is due to the inaccuracy of the confluent hypergeometric function computation at these SNRs. Fig. 6 presents the BER at high SNR in order to draw comparison with the asymptotic ABEP given by (22). Since it is not practical to obtain the simulation results at the high SNRs of interest, we have instead generated the curves using the simplified ABEP given by (20). Results are shown for N = 4, 8, 16, 32 and 64 with σ = 1, c = 0.8, 1, 2, 4, and we consider r_1 = 1 m and r_2 = 9 m. In each of the cases, it is evident that the curves converge and a close match is seen at high SNRs. Using (22), the asymptotic diversity order was shown earlier to be G_d = N. Since the curves converge and the slopes are identical at high SNRs, the diversity order is evident. In Fig. 7, comparison is drawn between the error performances with and without the impulsive noise component. Serious degradation is shown when the noise includes an impulsive component. This is even more so when the noise tails become heavier (c = 1, 2 and 4).
B. AP-RIS-ASSISTED TWO-WAY RELAYING
Figs. 8 and 9 present the simulation results and evaluated theoretical ABEPs for the AP-RIS-assisted two-way relaying network. Comparison is drawn between the error performances with and without the impulsive noise component. We consider σ = 1 and c = 1. Two configurations, r_1 = 1 m, r_2 = 9 m and r_1 = 1 m, r_2 = 12 m, are considered. In each instance, it is evident that the theoretical ABEP, which is a bound [17], obtained by using (17) in (28), agrees well with the simulation results. The simplified ABEP obtained by using (20) in (28) also agrees very well and is identical to (28) with (17) at higher SNRs.
V. CONCLUSION AND FUTURE WORK
In this paper, the error performance of an AP-RIS in the presence of Gaussian-plus-Laplacian additive noise was investigated. The formulated theoretical ABEP was validated by simulation results and is valid for arbitrary RIS sizes. A simplified ABEP that requires only a single evaluation of the confluent hypergeometric function was derived and matched simulation results well. Both formulations were used in the validation of the error performance of an AP-RIS-assisted two-way relaying network. Results presented in this paper demonstrate the vulnerability of RISs to additive noise with an impulsive component. Future work involves the investigation of techniques to mitigate the deleterious impact this type of noise has on the error performance of RIS-based communications. The effects of co-channel interference may also be investigated, together with extending the results to generalized channels.
APPENDIX A DERIVATION OF INTEGRAL I_1
Given I_1 = | 5,695.2 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |