text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Diagnosis of Hepatozoon canis in young dogs by cytology and PCR
Background Hepatozoon canis is a widespread tick-borne protozoan affecting dogs. The diagnosis of H. canis infection is usually performed by cytology of blood or buffy coat smears, but this method may not be sensitive. Our study aimed to evaluate the best method to achieve a parasitological diagnosis of H. canis infection in a population of receptive young dogs, previously negative by cytology and exposed to tick infestation for one summer season. Results A total of 73 mongrel dogs and ten beagles younger than 18 months of age, living in an animal shelter in southern Italy where dogs are highly infested by Rhipicephalus sanguineus, were included in this study. In March-April 2009 and in October 2009, blood and bone marrow were sampled from each dog. Blood, buffy coat and bone marrow were examined by cytology only (at the first sampling) and also by PCR for H. canis (second sampling). In March-April 2009, only one dog was positive for H. canis by cytological examination, whereas in October 2009 (after the summer season), the overall incidence of H. canis infection by cytological examinations was 43.9%. Molecular tests carried out on samples taken in October 2009 showed a considerably higher number of dogs positive by PCR (from 27.7% up to 51.2% on skin and buffy coat tissues, respectively), with an overall positivity of 57.8%. All animals, but one, which were positive by cytology were also PCR-positive. PCR on blood or buffy coat detected the highest number of H. canis-positive dogs displaying a sensitivity of 85.7% for both tissues that increased up to 98% when used in parallel. Twenty-six (74.8%) out of the 28 H. canis-positive dogs presented hematological abnormalities, eosinophilia being the commonest alteration observed. Conclusions The results suggest that PCR on buffy coat and blood is the best diagnostic assay for detecting H. canis infection in dogs, although when PCR is not available, cytology on buffy coat should be preferred to blood smear evaluation. This study has also demonstrated that H. canis infection can spread among young dogs infested by R. sanguineus and be present in the majority of the exposed population within 6 months.
Background
Despite its wide geographical distribution and the fact that it was described in the early 20 th century [1], there are still knowledge gaps concerning canine hepatozoonosis caused by Hepatozoon canis (Adeleorina: Hepatozoidae), including insufficient understanding of its pathogenesis and the best diagnostic methods to employ for diagnosing this infection. The biological life cycle of H. canis in the canine host and its tick vector [1,2] has recently been elucidated in detail [3]. In contrast to other tick-borne protozoa, H. canis infects leukocytes and parenchymal tissues and is transmitted to dogs by the ingestion of ticks containing mature oocysts [4]. Following ingestion of infected ticks, sporozoites spread via the bloodstream and lymph to several organs including the spleen, bone marrow, lung, liver and kidney. In these organs, meronts are formed and undergo several cycles of merogony, releasing merozoites, which invade white bloods cells (mostly neutrophils and monocytes) where they form gamonts [3]. The brown dog tick, Rhipicephalus sanguineus (Ixodida: Ixodidae), is the main vector of H. canis [2,4], although oocysts of this protozoan have also been detected in other tick species feeding on dogs, including Haemaphysalis longicornis and Haemaphysalis flava in Japan [5] and Amblyomma ovale in Brazil [6,7]. H. canis is probably one of the most widespread canine vector-borne disease (CVBD)-causing pathogens due to its close association with R. sanguineus and the cosmopolitan distribution of this tick species [8,9]. Although large surveys on canine hepatozoonosis are scant [10], a number of reports suggest that H. canis infects dogs globally and infections have been reported from four continents [7,[10][11][12][13].
The diagnosis of hepatozoonosis is frequently based on the detection of intracytoplasmatic ellipsoidal-shaped gamonts in stained blood smears by microscopy and on the histopathological visualization of meronts and/or monozoic cysts in tissues [22,23]. Nonetheless, serological tests, such as the indirect fluorescent antibody test (IFA), have been developed to detect anti-H. canis antibodies [24] with a high sensitivity, mainly in dogs with chronic infections [19].
Molecular diagnosis based on both conventional [25] and real time polymerase chain reaction (PCR) [26], developed during the last decade, greatly contributed to understanding the spread of this protozoan in canine populations. From a practical standpoint, these methods applied on blood were shown to be more sensitive and specific for the diagnosis of this pathogen than other methods [10]. In addition, molecular analysis of target sequences also facilitated the separation of Hepatozoon americanum from H. canis and its designation as the agent of American canine hepatozoonosis [25,27,28].
Although PCR is considered the most sensitive detection method for canine hepatozoonosis, microscopic examination of blood smears is a simple technique frequently used for the diagnosis of this infection. Nonetheless, few studies have compared these methods [10] and a diagnostic gold standard has not been clearly established. Likewise, information is lacking on the reliability of different tissues for the molecular detection. Finally, little information is available on the incidence of hepatozoonosis in young dogs living in areas where this infection is endemic. Our study aimed to evaluate the best method to achieve a parasitological diagnosis of H. canis infection in a population of receptive young dogs, previously negative by cytology and exposed to tick infestation for one summer season. Tissue samples from a selected animal population monitored in a previous study [21] were used and the results of cytology (on whole blood, buffy coat and bone marrow) and of molecular detection (on whole blood, buffy coat, bone marrow and skin samples) were compared. The relationships between the presence of H. canis and laboratory parameters were also examined.
Animals and sampling procedures
Dogs enrolled in the study included 73 mongrels and ten beagles younger than 18 months of age that had been sequentially monitored during a field trial over a 1-year period [21]. The sampled population lived in a shelter located in southern Italy where ticks and fleas and the presence of sand flies were recorded in previous entomological studies [29,30]. In March-April 2009 (before the summer season started), all animals enrolled but one were negative for H. canis by cytology of blood, buffy coat and bone marrow smears whereas some dogs were positive for other CVBD-causing pathogens as reported elsewhere [31]. The dogs were kept under their usual housing conditions and untreated against ectoparasites from the baseline date (March-April 2009) until the second sampling in October 2009 (after the summer season). Between these two sampling dates, a high level of R. sanguineus infestation was recorded in the same dog population [29].
On October 2009, blood, skin tissue and bone marrow were sampled from all of the 83 dogs. The study and the diagnostic procedures were conducted in accordance with the principles of animal welfare and experimentation.
Cytology
Blood, buffy coat (separated by centrifugation), and bone marrow smears were prepared on glass slides and stained with the MGG Quick Stain (Bio Optica Spa, Italy). Stained-smears were examined under light microscopy for the presence of intracellular inclusions of H. canis. Each smear was examined for 10 minutes (100 microscopic fields) under a 100 × oil immersion objective.
Polymerase chain reaction (PCR)
DNA was extracted individually from buffy coat, bone marrow and blood samples using a commercial kit (Qiagen, Milan, Italy) and from skin samples by using a different DNA purification kit (Gentra Systems, Minnesota, USA), following the manufacturers' instructions. A fragment of the 18S rRNA gene (666 bp in size) was amplified by PCR, using the primers HepF (5'-ATACAT-GAGCAAAATCTCAAC-3') and HepR (5'-CTTAT-TATTCCATGCTGCAG-3') [32]. PCR amplifications were carried out in a total volume of 50 μl, including 100 ng of genomic DNA, 10 mM Tris HCl, pH 8.3 and 50 mM KCl, 2.5 mM MgCl2, 250 μM of each dNTP, 50 pmol of each primer and 1.25 U of AmpliTaq Gold (Applied Biosystems, Foster City, CA, USA). The amplification protocol was employed in a thermal cycler (2720, Applied Biosystems, Foster City, CA, USA) as following: 95°C for 12 min (for polymerase activation), followed by 34 cycles of 95°C for 30 sec (denaturation); 57°C for 30 sec (annealing); 72°C for 1 min and 30 sec (extension), followed by 7 min at 72°C (final extension), as previously described [32]. Negative controls (no DNA template, blood, bone marrow and skin negative reference samples) were included in all PCR reactions. Amplicons were resolved in ethidium bromide-stained agarose (Gellyphor, EuroClone, Milan, Italy) gels (1.5%) and sized by comparison with Gene RulerTM 100-bp DNA Ladder (MBI Fermentas, Vilnius, Lithuania) as molecular marker. Gels were photographed using Gel Doc 2000 (Bio-Rad, Hercules, CA, USA). Amplicons were purified using Ultrafree-DA columns (Amicon, Millipore, Milan, Italy) and sequenced directly (Applied Biosystems, Monza, Milan, Italy) using the Taq DyeDeoxyTerminator Cycle Sequencing Kit (Applied Biosystems, Monza, Milan, Italy). Sequences were determined in both directions (using the same primers individually as for the PCR). Sequences were compared with 18S rRNA gene sequences of H. canis available in GenBank.
Clinical and hematochemical evaluation and categorization
Clinical signs suggestive of H. canis infection (e.g., weight loss, pale mucous membranes, and lymphadenomegaly) were recorded in each dog's file at the time of the sampling only. In October 2009, hematological and serum biochemistry parameters including serum proteins were recorded only for 35 of the 83 animals enrolled. Serum protein electrophoresis was carried out by agarose gel electrophoresis and complete blood counts (CBC) were obtained using an automated cell counter (Abbott Cell-Dyn 3700), being the following parameters recorded: hemoglobin concentration (Hb), hematocrit (Hct), nucleated red blood cell count (nRBC), white blood cells count (WBC), platelet count (PLT). Total serum protein (TP), albumin and γ-globulin were also recorded. Alterations in these parameters were assessed in relation to infection by H. canis and to clinical signs recorded by the attending veterinarian, at the time of the sampling. Standard canine hematological reference ranges were used for comparison [33].
Statistical analysis
The prevalence recorded by each test was calculated at both follow-ups. A six-month incidence rate was calculated on the basis of cytology as the proportion of new positive cases divided by the initial population of dogs negative by cytology. The sensitivity of each test was calculated as the proportion of true positives divided by the sum of true positive and false negative dogs. The sensitivity of each test was also calculated in parallel (Multiple test evaluation in WIN Epi). The true positive status of a dog was a priori defined as a dog positive to one or more cytology or PCR tests, considering each test used as 100% specific (i.e., there was no possibility of misdiagnosis by cytology as the morphology of Hepatozoon is characteristic and also not by PCR as the identity of amplified products was confirmed by sequencing). Agreement among tests performed was evaluated by k statistics and kappa values were ranked as low (0.2 < k < 0.4), moderate (0.4 < k < 0.6), good (0.6 < k < 0.8), or excellent (k > 0.8). The software used was SPSS for windows, version 13.0 (SPSS Inc., Chicago, IL) and WinEpiscope 2.0 [34].
Results
In March-April 2009, out of the 83 animals enrolled and tested for H. canis by cytology on whole blood, buffy coat and bone marrow, only one (1.2%) was positive on bone marrow. On October 2009, after the summer season, the cytological examinations (Figure 1) of the same animals showed a positivity rate ranging from 10.8% (blood) up to 41.5% (buffy coat, Table 1) and the total percentage of animals positive by one or more cytological tests reached up to 44.6% (data not shown). This led to an overall incidence of H. canis infection inferred exclusively by cytology positivity to (Table 1). At the BLAST analysis the sequenced amplicons were identical with those of H. canis available in GenBank (AY461378, AF176835). No significant differences in H. canis infection rate were recorded between the mongrel dogs and beagles.
By combining all the cytological and molecular tests, 59% (49/83) of the dogs were found to be infected by H. canis after the summer season. All dogs that were positive by cytology were also positive by PCR, except one. The majority of infected animals (n = 33; 67.7%) were positive by 3 (n = 12), 4 (n = 11) or 5 (n = 10) cytological and/or molecular tests simultaneously with a few being positive by one or two tests (n = 6; 7.2%), or by six or seven tests (n = 10; 12%) (data not shown). In addition, 66.6% (n = 10) of the animals positive by two or three cytological tests were also positive by PCR on all the tissues examined.
PCR on blood or buffy coat proved to be the most sensitive assays thus able to detect the highest number of H. canis positive individuals (Table 2). In contrast, PCR on skin showed the lowest sensitivity. Interestingly, the likelihood of finding positive results on the skin samples increased with the higher number of other positive tests from the same dog (χ 2 = 46.78; p < 0.01).
Thus, skin PCR positivity is most likely linked to a disseminated state of the infection in the dog's body.
Overall, molecular detection on all tissues but skin, had a higher sensitivity than cytology (Table 2). Indeed, PCR on both blood and buffy coat showed the highest sensitivity (85.7%) whereas the cytology on blood had the lowest (18.4%). In particular, when comparing the sensitivity of PCR with the different tissues, PCR on buffy coat, blood and bone marrow was more sensitive (p < 0.05) than on skin. The agreement of the tests was never excellent, but was good between cytology and PCR on buffy coat (0.7) and among all the PCR tests (ranging from 0.7 up to 0.8), except on skin (data not shown). Again, when PCR on buffy coat and blood were used in parallel, the sensitivity increased up to 98%. On the molecular examination of cytology-negative dogs, bone marrow PCR detected the highest number of positive samples (23.9%) followed by buffy coat (22.2%), blood (21.7%) and skin (8.6%). Out of 49 dogs positive for H. canis, 19 were co-infected with one (11 dogs) or more pathogens (8 dogs) (see Table 3).
Discussion
By the comparison of cytological examination on different tissues before and after the summer season, a high A. platys, B. vogeli, L. infantum and Bartonella sp.
incidence of H. canis infection (43.9%) was recorded in the population of young dogs examined in this study. Indeed, the cytology of buffy coat and blood smears is routinely used for the diagnosis of canine hepatozoonosis. If it were possible to calculate an incidence rate based on PCR with comparison between March-April 2009 vs. October 2009, it would be expected that the incidence rate would have been even higher than the rate based on cytology, as the sensitivity of PCR proved to be considerably higher than that of cytology. Little information is available in the literature on the incidence of H. canis infection in pups and young dogs and thus data presented here are of interest in indicating that this infection could spread quickly among young dogs and be present in the majority of the exposed population. The high prevalence of infection detected in October 2009, soon after the summer season, indicated that the infection was transmitted to a large proportion of the dog population studied, which fits with data showing that the highest R. sanguineus population density occurred during the summer months in the same dog population [29]. In previous studies, the prevalence of infection inferred by blood smear cytology varied from 1% [35] up to 39.2% [36], being much higher in some studies using molecular tests (up to 63.8%) [37]. Accordingly, the molecular tests employed in the current study detected a higher proportion of positive animals (57.8%) than that diagnosed by combined cytology of several sample types (44.6%). Overall the results of the cytological and molecular tests in diagnosing of H. canis infection overlapped due to the fact that animals most likely had a recent infection, as also inferred from both the time of sampling collection (soon after the period of the highest tick population density) and the young age of animals. It is likely that a long time gap between the initial infection and the date of testing for it will increase the probability that cytological examination might fail in detecting low or intermittent parasitemia, thus resulting in false negative results. This suggests that when no information is available on the date of potential infective tick exposure, PCR on either blood or buffy coat should be preferred to cytology for the diagnosis of H. canis infection. The combination of PCR on all four samples (blood, buffy coat, bone marrow and skin) was able to detect 13% more of positive dogs when compared to PCR on buffy coat alone. This increased sensitivity justifies PCR on multiple tissues and not only a single one when searching for H. canis infection in a suspected dog.
Cytological detection of H. canis in buffy coat smears is certainly recommended over examination of a blood smear, as it is 3.8 times more sensitive, in agreement with a previous study [38] and also 2.5 times more sensitive than bone marrow cytology. A combination of cytological examination of blood, buffy coat and bone marrow smears allowed the detection of only 7.5% more samples than buffy coat alone, and therefore it might not be justified to sample the bone marrow of suspected dogs, if a buffy coat smear can be examined.
Although no apparent clinical signs were directly related to H. canis infection at the time of sampling, 26 of the H. canis infected animals showed hematological abnormalities eosinophilia being the most common alteration observed, followed by leukocytosis, lymphocytosis, neutrophilia, monocytosis and thrombocytopenia. These alterations, in particular eosinophilia, occurred either in animals with single H. canis infection or coinfected with other CVBD-causing pathogens. In the latter case, H. canis may complicate the panel of clinical alternations related to other pathogens [39]. This is of relevance in geographic areas were CVBD-causing pathogens occur simultaneously in the same individual dog, since it might result in complex disease manifestations in sick dogs, impairing the achievement of a definitive diagnosis and selection of proper therapeutic agents.
Conclusions
The results presented here suggest that the PCR on buffy coat and blood is the most sensitive assay for the detection of H. canis infection in dogs. This technique may be used also as an epidemiological tool for studies in areas were canine hepatozoonosis is endemic or where it is suspected. However, when PCR is not available for immediate testing (e.g., in most of the routine veterinary practices), cytology on buffy coat should be preferred to blood smear evaluation as indicated. This study has also demonstrated that H. canis infection can spread rapidly among young dogs infested by R. sanguineus ticks and be present in the majority of the exposed population within 6 months. Finally, the achievement of a prompt diagnosis of hepatozoonosis is pivotal in geographic areas were other CVBD-causing agents occur in order to reduce the clinical effects of simultaneous pathogen infections and to select the best therapeutic drug. and revision of the manuscript. MSL and SW run the molecular assays and contributed with data analysis and interpretation. GC, GB and DS contributed with data analysis and interpretation and revision of the manuscript. All authors read and approved the final version of the manuscript. | 4,527.6 | 2011-04-13T00:00:00.000 | [
"Biology",
"Medicine"
] |
Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.
Introduction
Based on a partitioning strategy, K-means clustering algorithm [1] assigns membership to data points by measuring the distance between each pair of data point and centroid of a designated cluster. The membership assignment will be progressively refined until the best possible assignment is yielded-that is when the total intradistances of the data points within a cluster are minimized and the total interdistances of the data points across different clusters are maximized. The final quality of the clustering results, however, depends largely on the values of the initial centroids at the beginning of the partitioning process. These initial centroid values are randomly generated each time the clustering kick-starts which are different from time to time. By such random chance, K-means can probably plunge into local optima whereby the final quality of the clusters falls short from the globally best. An example that is depicted in Figure 1 demonstrates some possible outcomes of K-means. The top two snapshots represent good clustering results where the cluster distributions are even; the bottom two snapshots show otherwise, the clustering results in uneven distribution. All these depend on the starting positions of the centroids which are randomly generated.
Technically it is possible though not feasible in achieving a globally optimum clustering result, via a brute-force approach in trying out exhaustively all partitioning possibilities. As the number of clusters and the number of data points increase, the combinatorial number of possible grouping arrangements escalates, leading to computationally prohibitive. Therefore, heuristic approach is desired for seeking global optima stochastically, improving the quality of the final clustering results iteration by iteration. Metaheuristics which enable incremental optimization by design are ideal candidates for such computation. A substantial collection of nature-inspired optimization methods aka metaheuristics have emerged recently with designs mimicking swarm behavior exhibited by living creatures. Each of the search agents represents a particular combination of centroid positions; they move and search for optimality in their own ways, they sometimes communicate with each other and are collectively guided towards a global optimization goal. To date, the proposed nature-inspired optimization algorithms have gained much attention among data mining researchers. The computational merits have been verified mathematically and their feasibilities have been applied in various practical applications. However, validating the efficacy of hybrids combining such nature-inspired algorithms with classical data mining algorithms is still at an infant stage [2].
By the merits of design, nature-inspired optimization algorithms are believed to be able to overcome the shortcomings of K-means clustering algorithms on the issue of getting stuck in local optima. The objective of this study is to validate the efficacy of the hybrids and to quantitatively measure the quality of clustering results produced by each of the hybrids. In our experiments, we used four popular nature-inspired optimization methods to combine with K-means clustering algorithm. The integration is paramount, because it enables enhancement over data mining algorithms which have many wide applications.
A preliminary experiment reported in [3] shows their feasibility as a successful pioneer exemplar. This paper reports further experiment including a case study of image segmentation using these hybrid algorithms. In the past, some researchers started to integrate nature-inspired optimization methods into K-means algorithms [4]; their research efforts are limited to almost the same form of swarming maneuvers-that is, there is constantly a leader in the swarm which the fellow agents follow. Some examples are nature-inspired optimization algorithms such as the Artificial Bee Colony [5], Firefly [6], and Particle Swarm Optimization [7]. For the sake of intellectual curiosity, in our experiments as well as whose models described in this paper, two nature-inspired algorithms which take on a slightly different course of swarming are included. They are the Bat Algorithm [8] which swarm with varying speeds and the Cuckoo Algorithm [9] which do not swarm but iterate with fitness selection improvement. They represent another two main groups of algorithms and their variants, which are adaptive to the environment by their sensing abilities, and utilize a special method to improve the solution by evolving the old solutions from one generation into better ones in new generations. Specifically, the performance indicators in terms of speed and time consumption for clustering by these two bioinspired algorithms integrated with K-means are observed. The technical details of the aforementioned nature-inspired algorithms and the K-means clustering are not duplicated here. Readers are referred to the respective references for the background information of the algorithms involved in this paper.
Enhancing K-Means Clustering by Nature-Inspired Optimization Algorithms
A major reason in obtaining quality K-means clustering results is having the right combination of centroid positions.
The Scientific World Journal 3 The resultant centroids ideally should be dispersed in such a way that the clusters formed upon them yield the maximum quality, which we call a global optimum. It is characterized by having the properties of maximum intrasimilarities and minimum intersimilarities in clustering. K-means is known to produce clustered data with nonoverlapping convex clusters, and they always converge quickly. One drawback is that clusters in K-means do not always converge into the global optimum. Just like any other partition-based clustering algorithm, initial partition is made by some randomly generated centroids which do not guarantee the subsequent convergence will lead to the best possible clustering result. K-means, for example, is reputed to have the final clusters stuck at local optima which prevent further exploration for the best results [10]. In reality, to achieve the best clustering results, it would require running many rounds with each round taken different random initiations of centroid values. This practice certainly would have to be done at the very high cost of computation and model training time. In [3], the authors first proposed the integration of nature-inspired optimization algorithms into K-means. We take a step further in the experiments and applying it for image segmentation.
To start with the integration, the formation of centroids, which are computed stochastically from start to convergence, is directed by the searching agents of the nature-inspired optimization algorithms. The evolution of the new generation is based on the principle that the centroids which are being relocated in each iteration are inclined to enable the new clusters that are being formed with better results. Hence, in order to achieve the optimal configuration of centroids as an objective, we let cen V be the centroid as the center point of the th cluster in the multidimensional search space by the Vth attribute. , is the membership of data point whether it exists in cluster . The centroid location can be calculated by (2) for each attribute V and for each cluster ; the clustering objective function is defined as (3): where is the number of search agents in the whole population, is the maximum number of clusters, and is the current cluster being processed. The highest dimension of attributes is for the dataset; a centroid is hence located by a tuple of size . In the design of our computational model, cen is a 2D matrix of size × holding all the values of the centroids (the cluster centers) indicated by cen ,V : The computation process scans through the cen matrix up to × times to check the values of all the attributes of the data point for measuring the distance or similarity between each pair of and the centroid. This process repeats for each cluster V. For the optimization algorithm to work, each searching agent represents a particular combination of centroids for all the clusters, as an optimization solution in the × dimensional search space. The best search agent being found in each iteration is supposed to produce the best clustering result in that particular iteration. For instance, in a simple dual-cluster clustering task, there are three variables for the objective function to work with. In this case, there are three dimensions in the search space. In the three dimensional search space, the th search agent may take the form of = ). Due to the fact that there are 2 × 3 attributes for a search agent, the centroid can be coded in the same format for the coordinate of the second dimension. A search agent may have a best fitness value as cen = According to the given definitions, the clustering strategy can be constructed as a minimization function, as follows, that aims at shortening the distances among the data points in a cluster: The ranges of the parameters are as follows: = 1, . . . , , = 1, . . . , , and = 1, . . . , . The double line notation in (4) means it is a function of Euclidean distance. The interpretation of (4) is that the th search agent that is now handling the th cluster takes a value by measuring the minimized distance between the th search agent and centroid of the th center. The equation is an objective function in which the smaller the value the better. As long as the value of clmat is minimized by this metaheuristic optimization approach, every data point within a cluster would be drawn as close as possible to the centroid. The metaheuristic will guide the search agents to find the appropriate centroids for the clusters.
Nature-inspired optimization algorithms require certain functional parameters to be initiated with values to run. The function parameters are defined as follows. They allow the users to set with user-specific values for customizing the operations of the algorithms. Some of the parameters are common across different bioinspired optimization algorithms. In this paper, we have four hybrids, which resulted from combining four bioinspired optimization algorithms into K-means. With the capital letter C denoting "clustering, " the four hybrid algorithms are called C-ACO, C-Bat, C-Cuckoo, and C-Firefly, respectively. The original mathematical models for the four bioinspired optimization algorithms can be found in [6,8,9,11], respectively. Table 1 consists of the parameters for the C-Firefly algorithm. is a composite matrix of size [ , ( ⊗ )], where ∈ since has a maximum of centroids and each centroid is represented by a maximum of dimensions by the attributes.
The C-Bat algorithm has more parameters than the others because it includes the velocity and location of each bat (Table 3). Velocity is determined by frequency, loudness, and pulse rate. However, only two of the four bioinspired clustering algorithms (C-Cuckoo and C-Bat) are described in this paper due to space limitations. Nonetheless, C-Firefly 4 The Scientific World Journal The classification matrix [ , ] where ∈ , ∈ Table 1 consists of the parameters for the C-Firefly algorithm. is a composite matrix of size [ , ( ⊗ )], where ∈ since has a maximum of centroids and each centroid is represented by a maximum of dimensions by the attributes. and C-Bat have recently been reported in [1]. Readers may refer to [1] for the detailed integration of a K-Means clustering algorithm with the firefly and bat algorithms.
Cuckoo Clustering Algorithm (C-Cuckoo).
In the original cuckoo algorithm, Yang and Deb used an analogy whereby each egg in a nest represents a solution, and a cuckoo egg represents a new solution. The goal is to use the new and better solution to replace a relatively poor solution by chasing off an old egg with a new cuckoo egg. We adopt the same analogy in constructing our C-Cuckoo algorithm. The solution represents the host nest. In the clustering algorithm, the solution is composed of a set of real numbers representing the cluster center. As defined earlier, takes the form of a ( , ⊗ ) matrix where is the population, is the number of clusters, and is the number of attributes associated with each data point. The second index of the matrix represents the center of all clusters, and the whole represents the current locations of all the cuckoos. We now give a simple example. If there is a given data set and two dimensions and we need to create two clusters ( = 2, = 2), then the value of is four. The th in the middle of the clustering process may look something like this: Cluster 2: 8 9 In the initialization phase, the population of host nests , where = 1, 2, . . . , , is generated. The cluster centers are represented by the means of the attributes. Each cuckoo has the same parameters (Table 2): Tol (tolerance) and pa (alien egg discovery rate). In this phase, the most important action is that a cluster ID is randomly assigned to each cuckoo as the initial clustering result.
Because the cuckoo has characteristics typical of Levy flight, when it comes to generating a solution ( +1) for cuckoo , we use the equation The classification matrix [ , ] where ∈ , ∈ Pa and Tol are conditional variables used for controlling execution of the cuckoo optimization algorithm [9]. All other parameters are the same as those in Table 1 the step size scalar used to control the resolution of the step length. In our algorithm, we use = 1, which satisfies most cases. The above formula means the cuckoo takes a random walk. In this case, the random walk is implemented as a Levy flight, which is based on Mantegna's algorithm [12]. The algorithm takes the following steps. First, the number of centroids is initialized, as are the other variables. By going through a random walk, the nest, which is regarded as the central point of a cluster, is updated. The step length is calculated by = /|V| 1/ and ∈ [1,2], where and V are drawn from normal distributions. That is, . This distribution follows the Levy distribution.
The goal of this clustering algorithm is to search for the best center to minimize the distance between the center of the cluster and its points. Our objective function is, thus, the same as (3) and its result is the degree of fitness. After calculating the degree of fitness, we use (4) to assign each point to a suitable cluster. A better degree of fitness represents a good quality cuckoo solution.
The best solution derived from the above equations is then nominated, and the new solution replaces the old. If no new solution is better than the old one, the current solution is retained as the best. The host bird cleans out the nest according to a given probability. The next iteration of the computation process then occurs to look for the next best solution. In each iteration, the best and hence the optimal solution to date is set as the clustering centroid.
The Scientific World Journal 5 The centroid is represented as a paired tuple, cen( , :), where is the central point of the cluster. The tuple has the format of ( , ), where is the th cluster and is the coordinates in ( , , , . . . etc.) or higher dimensions. For example, the locations of three clusters can be represented as [1, (3, 4, 5), 2, (8,8,9), 3, (5, 6, 4)]. Each cuckoo represents a cluster with coordinates ( , ), and is the central coordinate of the cuckoo. Each cuckoo is initially assigned a random value. Subsequently, is updated by iterative optimization. The progressive search for the best solution helps to avoid local optima in a manner similar to chromosome mutations in genetic algorithms. When the locations of the cuckoos are set in each round, the distances between the data points and the centers are measured by their Euclidean distance. The data points are reassigned to their nearest cluster by The clusters are then reformed on the basis of the newly assigned data points. At the beginning, the averages of the data points are used as starting centroids by In this way, the algorithm achieves better partitioning of clusters at the start to avoid the center points being too near or too far from each other, as would occur if they were assigned purely by random chance. As the algorithm runs, the clustering distribution is refined and changes to quality centroids are avoided by averaging the data. This is why (6) is only needed at the beginning to initialize the starting centroids. According to survival of the fittest, the partitioning process reaches a final optimum. The logic of the C-Cuckoo algorithm is shown as a flow chart in Figure 2.
Bat Clustering Algorithm (C-Bat).
Each bat has its own unique ID, position, and fitness attributes. However, in the bat algorithm, each bat is assigned the same loudness, pulse frequency, and wave length. The position of a bat is represented by a solution . Its location is determined by the values of dimensions. As for the C-Cuckoo algorithm, the solution uses a ( , ⊗ ) matrix, the second term identifying the location of the bat.
The initiation step is similar to that employed for the C-Cuckoo algorithm. However, the bats have an additional feature: each bat has a velocity V , which is similar to particle swarm optimization. The bat's position is partly determined by its velocity. At first, the bats are randomly distributed. After initialization, the bats move to a better place according to (8). A random number is then produced: if it is larger than the current bat rate, the algorithm selects a solution from those calculated and generates a local solution. The centroids are the averages of the nearby data points. The distances are then minimized according to the direction of the optimization goal. The objective functions are identical to (5) and (6) above. The convergence process then starts to iterate based on the following formula: Generate initial population of n host nest x and parameters Set the initial centroids to be mean of each attribute The positions of the bats are then updated. represents the frequency of echolocation. When the frequency equals the sum of the minimum frequency and the difference between the maximum and minimum frequencies, the speed of the bat is updated. The new speed is set to the previous speed plus the product of the previous frequency and the difference between the current position and the previous position. A variable called the pulse rate is also used in the algorithm. When the pulse rate is exceeded, the following formula is updated: Equation (9) serves as the updating function, where * is taken as the best solution. It is also used to represent the best position for the bat to move towards. If the loudness value is Iris 150 4 3 Wine 178 13 3 Haberman's survival 306 3 2 Libras 360 91 15 Synthetic 600 60 6 not high enough and the new solution is better than the old one, the better one becomes the solution. A fitness function the same as that employed for the C-Cuckoo algorithm is then applied by checking whether echolocation is loud enough. The logic of the C-Bat algorithm is shown as a flow chart in Figure 3.
Experiments
There are two sets of experiments; one is focused in evaluating the performance of algorithms using a series of multivariate real-life datasets. The other is to test out the efficacy of the algorithms in image segmentation. The purpose is to validate the new algorithms with respect to their clustering quality. The first test is about how the new algorithms work with general-purpose datasets with different number of attributes and instances. Their performances in particular are evaluated in details. The latter test is to observe how well these algorithms will work in the domain of machine vision. The setups of the experiments and their results are discussed as follow. Table 4. The full length of dataset is used for training-in clustering, building clusters are referred to until perfection is attained using the full set of data. Performance of the clustering is evaluated in terms of cluster integrity which is reflected by the intra-and intersimilarities of data points within and across different clusters, the average sum of The Scientific World Journal 7 CPU time consumption per iteration during the clustering operation, and the number of loops taken for all the clusters to get converged. The criterion for convergence which decides when the looping of evolution stops is the fraction of the minimum distance between the initial cluster centers that takes on a numeric value between [0, 1]. For example, if the criterion is initialized with 0.01, the looping halts when a complete iteration does not move any of the cluster centers by a distance of more than 1% of the smallest distance between any of the initial cluster centers.
Experiment
The quality of the final outcome of clustering is measured by the integrity of each of the clusters, which in turn is represented by the final fitness value of the objective function. The resultant fitness value of the objective function is driven by how much each variable contributes towards the final goal which is being optimized in the process. From the perspective of clustering the goal is finding a suitable set of centroids as guided by the metaheuristic of the natureinspired algorithm. The metaheuristic will always insist that the relocation of centroids in each step is progressive aiming at exploring for the optimum grouping. The end result of the ideal group should lead to having the data points within each cluster closest to their centroid. Iteratively, the search for the optimum grouping proceeds. During the search the centroids relocate in the search space step-by-step according to the swarming pattern of the nature-inspired optimization algorithm until no more improvement is observed. It stops when there is no further relocation that will offer a better result. To be precise, no other new relocation of centroids seems to provide better integrity of the clusters. The algorithm is geared at minimizing the intrasimilarity of each cluster by an objective function. In this case, it is a squared error function so any slight difference will be enlarged by the square function. Equation (10) defines such objective function. Consider In the experiments, each dataset is run ten times to test and obtain the average CPU time and is also run ten times to test the objective function values/best fitness value. The parameters are set as reported in Tables 5 and 6. Figure 4 below present snapshots of the experimental run for the Iris dataset. The original data points are shown in the topmost plot Figure 4(a), and the data points in different colors obtained by the new clustering algorithms are shown in Figures 4(b), 4(c), 4(d), and 4(e) by C-Firefly, C-Cuckoo, C-ACO, and C-Bat, respectively.
Testing the New Clustering Algorithms with General-Purpose Multivariate Datasets. The five diagrams in
The quantitative experimental results are shown in Tables 7, 8, 9, 10, and 11. To make observation easier, the best result across the four algorithms under test (in each column) is highlighted with a double asterisks. It is apparent to observe that the C-Cuckoo and C-Bat algorithms achieve a lot better of objective fitness value than do the C-ACO and C-Firefly algorithms. Overall, the four nature-inspired clustering algorithms execute in less time and succeed in achieving higher accuracy in clustering than the plain Kmeans. High accuracy is referred to high cluster integrity where the data points in a cluster are close to their centroid. This observation tallies with our proposition that K-means enhanced by nature-inspired optimization algorithms speeds up the searching time for the good centroids for good clustering integrity. This enhancement is important because for all the partitioning-based clustering methods potentially they can be enhanced by nature-inspired algorithms in the similar way-the end result is expected in search process acceleration and local optima avoidance. Our detailed performance evaluation tests include computing the final objective function fitness values and the CPU time consumption for clustering the data in the generalpurpose UCI datasets in Table 4. The objective function fitness value is computed using (10) which represents the overall cluster integrity, and CPU time consumption is timed as how long it is necessary for the clustering algorithm to converge from beginning to end. Given the datasets are of fixed volume, CPU time consumption is related directly to the speed of the clustering operation.
Tables 7 to 11 clearly show that the C-Cuckoo and C-Bat algorithms both yield better objective values than the C-ACO and C-Firefly algorithms. The study reported in [1] has already shown that the C-ACO and C-Firefly algorithms perform more quickly and accurately than a traditional K-means specification. Our evaluation results provide further evidence confirming this phenomenon: nature-inspired algorithms do indeed accelerate the process of finding globally optimum centroids in clustering, and partitioning clustering methods can be combined with nature-inspired algorithms to speed up the clustering process and avoid local optima. Furthermore, our results show that two new hybrid clustering algorithms, the C-Cuckoo and C-Bat specifications, are more efficient and accurate than the others we test.
The next experiment is undertaken to measure the average computation time required per iteration in the clustering process. Only the Iris dataset is used here, as it is one of the datasets in the UCI repository most commonly used for testing time spent per iteration for clustering with natureinspired algorithms.
From Figures 5 and 6, it can be seen that all four algorithms scale quite well-as the number of iterations increases, the computation time taken remains flat. In particular, C-ACO is very fast. It takes only a fraction of a second to execute each iteration of the clustering process. C-Firefly takes about 8 to 10 seconds. C-Cuckoo and C-Bat are relatively fast, taking less than a second per iteration for code execution. The CPU times taken for each algorithm are reported in Table 12. The The Scientific World Journal 9 figures are averaged, and the table shows the net CPU time taken per iteration. The following graphs in Figures 7 and 8 show the number of iterations required for the clustering algorithms to converge according to the given threshold criterion.
As shown in Figure 7, the C-Firefly, C-Cuckoo, and C-Bat algorithms take about two or three iterations to achieve convergence, which is extremely fast. In contrast, C-ACO in Figure 8 takes many rounds to converge, 4681 iterations to be exact. For the other three algorithms, the best objective function value is reached at 78.94. C-ACO goes no lower than 101 and remains there even if the number of iterations increases to a large value.
Therefore, we may conclude that the C-Firefly, C-Cuckoo, and C-Bat algorithms are suitable for static data. C-Firefly compares data ( 2 ) times in each iteration, so it takes a lot of time to converge. The traditional K-means algorithm converges easily to a local optimum, so the result of the objective function is worse than for the others. C-Bat has the ability to adjust itself in every iteration, and because it only changes location once, at the end of an iteration, it is very fast. Because C-Cuckoo retains better solutions and discards worse solutions, working like a PSO-clustering algorithm, it also performs well in providing objective function values. Although C-Bat, C-Cuckoo, and C-Firefly may need more time for each iteration, they are good optimization algorithms. They can find the optimal solution relatively quickly overall (because they converge very fast). However, the ants acting as the searching agents in the C-ACO algorithm make only a small move in each iteration. Many comparisons are thus required to find the best solution. In sum, C-ACO may be suitable for applications in which incremental optimization is desired and very little time is needed for each step, but it may take many steps to reach the optimal goal. The next set of experiments tests the quality of clustering in terms of accuracy (measured as 100% minus the percentage of instances overlapping in wrong clusters) and standard deviation. Standard deviation is related to how much variation from the average (mean) is caused by clustered data. The misclustered data derived in the experiments can be seen in Figure 3. Most of the results are satisfactory. The standard deviations indicate that the data points tend to be very close to the mean. The mathematical definition is simply where = (1/ )( 1 + ⋅ ⋅ ⋅ + ). Again, the most widely employed dataset, the iris dataset, is used for this set of experiments. Table 13 shows that the results obtained using the C-Bat algorithm are the best in the iris dataset, whereas those derived with the C-Cuckoo algorithm are the best in the wine dataset. In the Haberman data, all five algorithms are almost equally accurate, though the C-ACO algorithm is slightly more precise. Table 14 shows that the C-Firefly algorithm has the minimum deviation within clusters, whereas the original Kmeans algorithm deviates to the greatest extent. Pixel color oriented image segmentation is the core of image analysis which finds its applications in many areas of image interpretation, pattern identification/recognition, and robotic vision. Some popular applications [13] include but are not limited to geographical information remote sensing, medical microscopy, content-based audio/visual media retrieval, factory automation, and unmanned vehicle navigation, just to name a few. In practical scientific and industrial applications, the quality and accuracy of image segmentation are very important which depend on the underlying data clustering algorithms. A common choice of unsupervised clustering algorithm is K-means in image segmentation based on color. The regions of the image depending on the color features are grouped into a certain set of segments by measuring the intercluster distance and intracluster distance between each image pixel and the centroid within the cluster. The clustering process is exactly the same as that used in the previous experiment on UCI datasets, except that the images under test in this experiment are larger in amount. An 8 MB high-resolution photo like those used in the experiment here has typically 5184 × 3456 pixels. In addition to spatial information, -and -axises of the pixel position in the image, each pixel is triplet of red, green, and blue information, ranging from 0 in darkness to 255 being the strongest in intensity. It is well-known that every pixel of an image is made up by mixing the 256 independent intensity levels of red, green, and blue light. Each data point of the 17,915,904 pixels is a five-dimensional matrix comprised of the pixel location information and RGB information, [ , , , , ] where 0 ≤ ≤ 5184, 0 ≤ ≤ 3456, and 0 ≤ , , ≤ 255. The hybrid clustering algorithms are extended from Kmeans in the same way as described in Section 2. The required image data can be technically extracted by using MATLAB functions imread(filename) that creates a three-dimensional matrix and impixel() that returns the values of the RGB triplet for the pixel.
The experiment is run over four images whose pixels are to be clustered using different preset numbers of = 3 and = 4. The four images are shots of sceneries, namely, Tower Bridge (TB), Cambridge University (CU), Le Mont-Saint-Michel (MSM), and Château de Chenonceau (CDC).
They have similar image composition and identical size. Some particular features that are subtly contained in the images are used for testing the efficacy of the clustering algorithms like those as follows. The performance is again measured by intersimilarity and intrasimilarity across and within the same cluster, as well as time taken in seconds in processing a whole image. The performance results are tabulated in Table 15. The winning performance result by one of the four hybrid clustering algorithms or K-means is marked with a triple asterisks as distinction. The original images under test and the segmented images by the various clustering algorithms are shown in The performance is again measured by intersimilarity and intrasimilarity across and within the same cluster, as well as time taken in seconds in processing a whole image. The performance results are tabulated in Table 15. The winning performance result by one of the four hybrid clustering algorithms or K-means is marked with a triple asterisks as distinction.
In all the tests, K-means seems to take the shortest time, probably because it stops early in local optima. This is evident by the fact that none of the results by K-means score the best in either intercluster distance or intracluster distance. In the experiment of TB, C-ACO scores the widest intercluster length, and C-Firefly has the tightest intracluster distance. As a result, visually C-ACO produces slightly more details on the cloud at the top right corner. C-Firefly seems to produce the most details on the sun-facing side of the tower as well as the shaded side of the tower. C-Firefly again scores the best in intracluster distance in CU and CDC. Again, in CU, C-Firefly offers the most details on the lawn; the copula likewise has most details and reproduces seemingly most accurately on the college façade by C-Firefly. Interestingly, C-Cuckoo has the longest inter,cluster distance in MSM and CDC. In MSM C-Cuckoo gives the most structural outline of shades and colors on the wall of the chapel, while C-Firefly produces most details on the fortress wall. In CDC, C-Cuckoo and C-Firefly manage to produce the relatively best reflection images over the water, by visual inspect.
The overall results of the experiments described in this paper show that two of the new clustering algorithms observed here, the C-Cuckoo and C-Bat algorithms, which have never been tested by other researchers, are more efficient and accurate than the C-ACO and C-Firefly specifications. This represents a significant contribution to existing knowledge, because it sheds light on the encouraging possibility that optimization techniques derived from nature can be used to improve K-means clustering; we hope that this lays the foundation for more sophisticated bio-inspired optimization methods to be integrated with existing clustering algorithms.
The characteristics of each of the four bioinspired clustering algorithms are listed in the Appendix by way of summary. It is hoped that researchers will find them useful as a source of inspiration for developing better algorithms in future. As the phrase "meta-heuristics" suggests, these bioinspired optimization heuristics come in abstract and general forms. There is ample potential to extend, modify, and even build hybrids of them with other heuristic functions to suit different applications.
Conclusion
K-Means clustering algorithms, a classical class of partitionbased algorithms used for merging similar data into clusters, are known to have the limitation of getting stuck in local optima. As a matter of intellectual curiosity in computer science, how best to cluster data such that the integrity of the clusters is maximized, has always been a challenging research question. The ideal solution is to find an optimal clustering arrangement which is globally best-so that no other possible combinations of data clustering exist that are better than the global one. One way of achieving this is to try all the possible combinations by brute-force which could be computational intractable. Alternatively, natureinspired optimization algorithms, which recently rise as a popular research topic, are extended to work with K-means in guiding the convergence of disparate data points and to steer them towards global optima, stochastically instead of deterministically. These two research directions of metaheuristic optimization and data mining do fit like hand and glove. Constrained by the inherent limitation of K-means design and the merits of nature-inspired optimization algorithms, it is feasible to combine them letting them complement and function together. This paper evaluates four hybrid types of clustering algorithms developed by integrating natureinspired optimization algorithms into K-means. The results produced from the experiments clearly validate the new algorithms possess a performance enhancement, apparently for the C-Bat and C-Cuckoo. The extended versions of clustering algorithms enhanced by nature-inspired optimization methods perform better than their original versions, in two sets of experimental datasets-general purpose and image | 8,582 | 2014-08-18T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Dynamic characterization and interpretation for protein-RNA interactions across diverse cellular conditions using HDRNet
RNA-binding proteins play crucial roles in the regulation of gene expression, and understanding the interactions between RNAs and RBPs in distinct cellular conditions forms the basis for comprehending the underlying RNA function. However, current computational methods pose challenges to the cross-prediction of RNA-protein binding events across diverse cell lines and tissue contexts. Here, we develop HDRNet, an end-to-end deep learning-based framework to precisely predict dynamic RBP binding events under diverse cellular conditions. Our results demonstrate that HDRNet can accurately and efficiently identify binding sites, particularly for dynamic prediction, outperforming other state-of-the-art models on 261 linear RNA datasets from both eCLIP and CLIP-seq, supplemented with additional tissue data. Moreover, we conduct motif and interpretation analyses to provide fresh insights into the pathological mechanisms underlying RNA-RBP interactions from various perspectives. Our functional genomic analysis further explores the gene-human disease associations, uncovering previously uncharacterized observations for a broad range of genetic disorders.
Supplementary Note 2: Comparing icSHAPE with other in vivo and computationally predicted secondary structure profiles
To demonstrate the effectiveness of icSHAPE in our HDRNet model further, we conducted an experiment to compare ic-SHAPE with other RNA secondary structure representation models, namely RNAfold (9) and two in-vivo secondary structural characterization methods: DMS-seq (10) and DMS-MaPSeq (11).To ensure a fair comparison, we kept all other modules of HDRNet, substituting only icSHAPE with the structural features derived from the comparative methods.The experimental results are summarized in Supplementary Fig. 2. As depicted in Supplementary Fig. 2a, the HDRNet model with icSHAPE for structural representation outperformed HDRNet models with other structural features in identifying protein-RNA interactions.We speculate that computationally predicted secondary structures could be obsolete and hence prone to artifacts.In contrast, in-vivo secondary structure scores, representing the probabilities of nucleotide pairing, could potentially offer rich structural information.Taking the SLTM protein dataset from K562 cells as an example, we subsequently utilized t-SNE to visualize the embedding representation of HDRNet with various structural information.As illustrated in Supplementary Fig. 2b, icSHAPE demonstrated a distinct division between positive and negative samples, with each group clearly situated on opposite sides.However, for DMS-seq and DMS-MaPseq, positive and negative samples are considerably overlapped between the clusters, whereas the performance of RNAfold was deemed suboptimal due to insufficient sample separation.
INDEX
Supplementary Fig. 2c provides a visual representation of the high-attention binding regions identified by different RNA secondary structure representation models.Notably, icSHAPE and RNAfold, when used as the structural features within the HDRNet model, successfully captured continuous high-attention regions.Conversely, DMS-seq and DMS-MaPseq exhibited a considerable number of missing positions, resulting in a lack of continuity in the regions they identified.Moreover, the RNA models with those secondary structure features exhibited significant differences; for instance, icSHAPE could predict RNA models with greater structural heterogeneity and a higher number of nucleotide pairings.Conversely, RNAfold structures resembled those constrained using DMS-MaPseq, suggesting that the performance of DMS-MaPseq in structure prediction may be vulnerable to of missing positions.On this basis, we can conclude that these analyses not only highlight the effectiveness of icSHAPE features but also underscores potential limitations.Since the acquisition of icSHAPE data relies on sequencing experiments, RNAfold or other secondary structure prediction algorithms may serve as suitable alternatives for general tasks.These observations emphasize the importance of carefully selecting the most appropriate RNA secondary structure representation model based on the specific requirements of the task at hand.We also conducted further experiments to investigate and compare the performance of HDRNet by integrating RNA structure information from different methods.Specifically, we explored multiple secondary structure features obtained from different methods and then employed a concatenation strategy to integrate them into the HDRNet model.As demonstrated in Supplementary Fig. 3, our observations indicate that the performance of HDRNet was not further improved but slightly decreased by integrating other secondary structure features.This observation indicates that those in vivo secondary structure information provides a more accurate depiction of the RNA environment than the computationally predicted secondary structure information (RNAfold), thus ensuring the performance of HDRNet.These results support the reliability of HDRNet and suggest that the original HDRNet is sufficiently robust as a standalone model.To demonstrate the effectiveness of our chosen network architecture, we initially constructed a non-hierarchical HDRNet, referred to as HDRNet-nonhier, which combined multi-source features at the network's inception.After that, a comparative analysis was conducted between our proposed HDRNet and HDRNet-nonhier, evaluating their performance in both static and dynamic prediction tasks.The comparative results are summarized in Supplementary Fig. 6a, revealing that the HDRNet, characterized by its hierarchical structure, exhibited significantly superior prediction performance compared to its non-hierarchical counterpart (HDRNet-nonhier).Moreover, HDRNet, with its hierarchical architecture, demonstrated superior performance in intercellular dynamic prediction.Furthermore, we performed feature correlation analysis and visualization across different versions of HDRNet.For illustration purpose, we employed the TBRG4 protein dataset in HepG2 cells as a representative sample, as depicted in Supplementary Fig. 6b.We can observe that HDRNet with a hierarchical structure has a more distinct feature correlation and hierarchy than the non-hierarchical one.Therefore, sequence information and secondary structure profiles can be normalized and fused, enriching the features learned by HDRNet to be robust.
Supplementary Note 5: Existing sub-groups of binding events that are better characterized by HDRNet
In this part, we conducted a comparison between HDRNet and PrismNet, focusing on their performance across various cell lines, and the detailed results of this comparison are presented in Supplementary Fig. 7a.As demonstrated in this figure, we can observe that HDRNet was consistently superior to PrismNet in all cell lines evaluated, with significant enhancement observed in HEK293, HepG2, and K562 cells.To ensure the credibility of our findings, we selected datasets from these three cell lines in which HDRNet was at least 5% better than PrismNet.We then analyzed the performance gap between the proposed HDRNet and PrismNet models for these datasets, as demonstrated in Supplementary Fig. 7b.We found that HDRNet showed a more significant performance improvement in predicting multiple datasets in HEK293 cell line.For illustration purpose, we further explored those RBPs in HEK293 cell lines.To investigate it, we have mapped these RBPs into STRING (12) and then used the the Markov Cluster Algorithm (MCL) with an inflation rate of 2.5 to cluster them, as demonstrated in Supplementary Fig. 7c.As can be seen from the figure, RBPs with similar biological functions are clustered together, while RBP datasets exhibiting significant performance improvements are consistently grouped into the same functional clusters, e.g., 15% for FMR2 and 13% for FXR1, in the subclusters represented by the red nodes highlighted.Moreover, in this cluster, previous studies have demonstrated that several RBPs (e.g., FXR2, FMR1, LIN28B) have the coexpression in the same tissues (13), which further indicates that these RBPs in the cluster may have similar context or structures.
With the distinct subgroups identified, we proceeded with an analysis of the structural complexity of these RBPs.Initially, we utilized HDRNet to scan the RBP datasets and extracted the high-attention 6-mer fragments that HDRNet identified most frequently.Similar to PrismNet, we then calculated the structural complexity of these 6-mer fragments, generated the PWM matrix and represented their structural motifs where the structure component using the labels "U" for unpaired nucleotide and "P" for paired.As shown in Supplementary Fig. 7d, we observe that RBPs in the same subgroup show structural similarities.For instance, the subgroup of NUDT21, CPSF2, and CPSF4 exhibited a prevalent preference for binding to paired structures, whereas RBPs in the subgroup containing LIN28A demonstrated a higher tendency to interact with complex structure fragments.Meanwhile, the binding region of FBL usually does not pair.These phenomena demonstrate the potential structural differences between different subgroups.
To investigate context variations further, we explored the context variations between RBP clusters using 3-mer analysis, tailored to the tokenization used by our dynamic global contextual embedding approach.In specific, we calculated the relative content of each 3-mer token within each RBP dataset.Subsequently, we employed hierarchical clustering to group RBPs based on their 3-mer token content, thereby assigning RBPs with similar profiles to the same cluster.As shown in Supplementary Fig. 7e, it can be seen that the results are in close agreement with the clusters we obtained through the STRING database; for instance, RBPs such as FMR1, FXR2, LIN28A, and LIN28B were consistently assigned to the same cluster, while CPSF2 and CPSF4 were in a distinct cluster.These findings further support the notion that the 3-mer token with contextual information can serve as an informative feature to identify different sub-groups of RBPs.Supplementary Note 7: HDRNet has superior performance for RBPs with high and low expression levels and for target RNA events with high and low expression levels in different cellular contexts.
To observe the validity of HDRNet across various expression levels of RBPs in both the same and different cellular contexts, we utilized the expression levels of RBPs in HepG2 and K562 cell lines from the reference (19) published in Nature, which provides a comprehensive collection of human RBPs in K562 and HepG2 cells from the Encyclopedia of DNA Elements (ENCODE) project phase III, including the expression levels of RBPs across different cell lines within the eCLIP dataset.To distinguish between high and low expression levels of RBPs, we employed the average expression level as the threshold.RBPs with expression levels surpassing this threshold were classified as highly expressed, whereas those below were considered lowly expressed.On this basis, we compared the performance of HDRNet with baseline methods separately on K562 and HepG2 cell lines.As illustrated in Supplementary Fig. 9a, we observed that HDRNet exhibited superior static prediction performance for RBPs with varying expression levels in different cell lines.Furthermore, we noted a slight improvement in HDRNet's performance on RBPs with higher expression levels compared to those with lower expression levels within the same cell line.Meanwhile, to further investigate the relationship between RBP expression levels and the performance of HDRNet, we calculated their correlation.As depicted in Supplementary Fig. 9b, we found a weak positive correlation between the performance of HDRNet, as measured by metrics such as the AUC and RBP expression levels, suggesting that the binding sites of RBPs with higher expression levels were more likely to be accurately recognized by HDRNet.
Then, we analyzed the relationship between the performance of HDRNet and RBP expression levels from the perspective of dynamic prediction.As demonstrated in Supplementary Fig. 9c, we first computed the correlation of RBP expression levels in the K562 and HepG2 cell lines.We found a strong positive correlation between the RBP expression levels in these two cell lines, indicating similar RBP expression patterns between them.Therefore, we evaluated the dynamic prediction performance of HDRNet based on these categorizations.As depicted in Supplementary Fig. 9d, consistent with the static prediction results, HDRNet exhibited the most accurate dynamic predictive performance for these RBPs.However, it is worth noting that there are differentially expressed RBPs between the two cell lines.To identify these differentially expressed RBPs, we employed the DESeq2 package, as described in the reference (19).A total of 35 RBPs were identified, of which 15 were differentially expressed RBPs in K562 cells, and 20 were differentially expressed RBPs in HepG2 cells.Supplementary Fig. 9e visualizes the performance of HDRNet on these differentially expressed RBPs, confirming its superior performance in both cases.Based on these results, we further investigated whether the expression levels of RBPs influenced the dynamic prediction performance of HDRNet.As displayed in Supplementary Fig. 9f, we observed that the binding sites of RBPs with higher expression levels were more easily identified during the dynamic prediction process.
On the other hand, we conducted further analysis to evaluate the performance of HDRNet on RBPs with different target expression levels.RBPs play a crucial role in regulating gene expression by interacting with RNA and forming ribonucleoprotein complexes.However, the ENCODE project data (19) for RBPs in K562 and HepG2 cells does not provide expression values for their target RNAs, making it challenging to directly measure the high and low expression levels of target RNA events.Nevertheless, they have employed RNA sequencing after depleting RBPs using short hairpin RNA (shRNA) or CRISPR to investigate RBP target binding site functionality and examined expression level differences before and after knockdown (19).On this basis, there are 4 RBPs including DD3X3, DDX6, LARP4 and RBM15 that have previously been identified as RNA decay factors.After RBP-knockdown experiments, their target genes exhibited upregulation in specific cell conditions (knockdown-increased), indicating that these RBPs are associated with low expression levels of their target genes.Similarly, there are 6 RBPs including AKAP1, DDX55, APOBEC3C, FMR1, CPSF6, and IGF2BP3 that have been previously recognized to increase the stability of RNA targets; and their target genes showed downregulation after RBP knockdown in specific cell contexts (knockdowndecreased), indicating that these RBPs are associated with higher expression levels of their target genes.With these target RNA events with high and low expression levels, we first examined the static predictive performance of HDRNet on the specific cell line data for these RBPs.As depicted in Supplementary Fig. 10a,b, HDRNet consistently achieved the best performance and showed significant improvements compared to other baseline methods.For instance, in the RBM15_K562 dataset, HDRNet shows its performance gain of 8% over PrismNet; in the FMR1_K562 dataset, only HDRNet's AUC metric exceeded 0.9.These results indicate that HDRNet is capable of accommodating RBPs with different target expression levels and effectively identifying their binding characteristics.Furthermore, we selected those RBPs within this cohort that allowed for dynamic prediction and compared HDRNet's dynamic predictive performance.As demonstrated in Supplementary Fig. 10c,d, HDRNet also exhibited superior performance in dynamic prediction as well.For example, we observed a performance improvement of over 10% for RBM15 when comparing HDRNet to PrismNet.Notably, other baseline methods displayed considerable instability in the dynamic prediction tasks; for instance, DMSK failed to accurately identify the binding sites of DDX3X, and PrismNet performed poorly on AKAP1, with an AUC below 0.7.These results further validate the adaptability and robustness of HDRNet to RBPs with different target expression levels.
INDEX
Supplementary Note 8: HDRNet predicts dynamic RNA-RBP interactions on in vivo tissues under normal and disease conditions.
To demonstrate the effectiveness of HDRNet in both normal and disease conditions, we have meticulously curated the MBNL2 (Muscleblind Like Splicing Regulator 2) binding peak data (20) (GEO accession: GSE68890) in human brain tissues from POSTAR (21).This data source is comprised of five distinct datasets, including autopsy tissues (hippocampus and frontal cortex) from the patients with myotonic dystrophy type 1 (DM1, 2 datasets), myotonic dystrophy type 2 (DM2, 1 dataset of hippocampus), and control subjects (2 datasets), where DM1 and DM2 are progressive and multi-systemic neuromus-cular disorders, originating from the aberrant sequestration and activation of RNA processing factors and RAN translation.Then, we consolidated replicate data within each dataset and standardized the binding peaks to a length of 101 nucleotides.Subsequent evaluations of the dynamic prediction performance on these tissue datasets involved HDRNet, along with baseline models including PrismNet, DMSK, iDeep, GraphProt, DeepBind, and PRIESSTESS.Firstly, the evaluations focused on cross-tissue dynamic prediction experiments under single conditions, such as normal-normal and DM1-DM1 predictions.As shown in Fig. 4a, HDRNet consistently outperformed the other models across both conditions.Notably, we observed a significant performance gap between HDRNet and PrismNet in the cross-tissue prediction task, while PRIESSTESS failed to identify the binding sites of MBNL2 in the DM1 frontal cortex dataset, rendering it incapable of performing the dynamic prediction task.Then, to validate the efficacy of HDRNet in capturing dynamic tissue conditions between normal and diseases, we conducted experiments on cross-condition dynamic predictions, such as, we used the model trained on normal control tissue data to predict binding sites in disease tissues.As shown in Fig. 4b, HDRNet maintained strong dynamic prediction capabilities in cross-condition dynamic prediction.It achieved the highest AUC metric of 0.8, with an AUC difference exceeding 10% compared to PrismNet.These results demonstrate that HDRNet can effectively learn the binding patterns of RBPs across diverse conditions.Additionally, as highlighted in Fig. 4c and Supplementary Fig. 11a, HDRNet exhibited a robust capacity to detect and accurately capture disease-related high-attention binding regions.These regions represent the critical interaction sites between MBNL2 and DM1 extended CUG repeats and DM2 CCUG expansion RNAs, which are pathophysiological hallmarks of myotonic dystrophies, as previously revealed in (20,22).By successfully capturing these regions, HDRNet not only aids in identifying key molecular interactions but also enhances our understanding of disease mechanisms.
To further investigate the predictive capabilities of our proposed model regarding dynamic interactions in tissues, we conducted an experiment to predict two additional eCLIP RBP datasets from adrenal tissues obtained from ENCODE, namely DGCR8 and HNRNPU.The experiment results are summarized in Supplementary Fig. 11b, where our model achieved the highest AUC metric.As depicted in this figure, we can observe that HDRNet achieved identical performance in dynamically predicting RBP binding in both K562 and HepG2 cell lines, while other baseline methods exhibited variability in performance when using models trained on different cell lines for dynamic prediction.This observation further highlights the ability of HDRNet to identify RBP binding patterns across different sources of RBP binding data.Interestingly, we also observed that PrismNet, although slightly inferior to HDRNet, showed improved performance on the new eCLIP data compared to the previous MBNL2 data.We speculate that this improvement is due to the fact that the PrismNet model was specifically designed based on eCLIP data.In particular, the static encoding within PrismNet limits its performance on other platforms.In addition, our observations revealed that the HDRNet model, trained on cell line data, successfully highlighted significant binding regions in the tissue binding data which were functionally relevant; for example, focusing on the DGCR8 protein, it has been reported to bind to extended CGG repeat sequences, leading to partial sequestration of DGCR8 in CGG RNA aggregates, consequently reducing the processing of miRNAs (23).In Supplementary Fig. 11c, we can observe that HDRNet effectively identifies the dynamic CGG repeat binding region, indicating its capability to dynamically identify RBP binding patterns.Additionally, HDRNet accurately identified the G-rich binding region of HNRNPU, as depicted in Fig. 4d.Notably, upon visualizing potential RNA models, we discovered that the identified G-rich binding regions predominantly corresponded to G-quadruplex structures, which are stable secondary structures formed by G-G base pairs.Indeed, as reported by (24), HNRNPU exhibits a preference for recognizing G-quadruplexes in RNAs as binding motifs.These results highlight the robustness of HDRNet on RBP data across different platforms, demonstrating that HDRNet with K562 and HepG2 cell line data is sufficient to accurately predict dynamic interactions in tissues.
Lastly, we have also added the direct regulatory targets of MBNL1
1 .
(a) Scatter plot comparing AUC scores of HDRNet with other machine learning algorithms.(b) Comparative analysis of the overall dynamic prediction performance between HDRNet and other machine learning methods (n=68 in each group; center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range; Wilcox test).Source data are provided as a Source Data file.
2 .
(a) Overall results of HDRNet using different structure information.HDRNet with icSHAPE performs best in both static and dynamic prediction tasks (n=68 in each group; center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range; Dunn's test).(b) The latent embedding of the learned features by HDRNet using different types of input structure features.(c) Left: The high attention binding region identified by HDRNet with different input structure characterization.Right: The structural models predicted by RNAfold with the corresponding structural technology scores as constraints.Source data are provided as a Source Data file.
Supplementary Fig. 6.(a) Performance comparison of HDRNet with different hierarchical structures.The results demonstrate that the proposed hierarchical network achieves superior performance in both static and dynamic prediction tasks.(n=68 in each group; center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range; T test.)(b) Feature correlation analysis of different HDRNet versions.Notably, HDRNet exhibits substantial feature correlation and feature hierarchy.Source data are provided as a Source Data file.
. 7 . 8 .
(a) Performance comparison of HDRNet and PrismNet through different cell lines (n=8, 17, 39, 84 and 112 in each group, with mean ± SD).(b) Performance gap between HDRNet and PrismNet through HEK293, HepG2 and K562 cell lines (n=17, 22 and 21 in each group, with mean ± SD).(c) Identified subgroups of RBP datasets in HEK293 cell line.(d) The integrative motifs identified by HDRNet on the RBPs within different subgroups.The top half presents the high-attention 6-mer fragment identified by HDRNet most frequently, while the lower half displays the structural motifs of the 6-mer sequence, where 'P' stands for paired, and 'U' indicates unpaired.(e) Heatmap representing the relative content of each 3-mer in the RBP binding site.RBPs with similar relative contents were grouped into a cluster by hierarchical clustering.Source data are provided as a Source Data file.HaoRan Zhu et al.(a) Bar chart of 6-mer contents for FMR1 and FXR2 across different cell lines.(b) Statistics of the most significant 6-mer regions identified by HDRNet on the FMR1 and FXR2 datasets.(c) Visualization of the salient map detected by HDRNet, highlighting specific binding regions within the FMR1 and FXR2 datasets.(d) Visualization of attention distribution using dynamic global contextual embedding, with attention concentrated in A/G-rich regions.Source data are provided as a Source Data file.HaoRan Zhu et al. | HDRNet bioRχiv | 13
INDEX. 10 .
Prediction Performance on RBPs with low expression level in K562 cell line Prediction Performance on RBPs with low expression level in K562 cell line Prediction Performance on RBPs with low expression level in HepG2 cell line Prediction Performance on RBPs with low expression level in HepG2 cell line a Prediction Performance on RBPs with high expression level in K562 cell line Prediction Performance on RBPs with high expression level in K562 cell line Prediction Performance on RBPs with high expression level in HepG2 cell line Prediction Performance on RBPs with high expression level in HepG2 cell line Dynamic Prediction Performance on RBPs with high expression level in both K562 and HepG2 cell line b Dynamic Prediction Performance on RBPs with low expression level in both K562 and HepG2 cell line Dynamic Prediction Performance on RBPs with low expression level in both K562 and HepG2 cell line Supplementary Fig. 9. (a) Static prediction performance comparison of HDRNet and baseline models on different expression levels of RBPs (top: n=54 in each group; bottom: n=42 in each group; center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range).(b) The correlation between HDRNet's performance and RBP expression levels.(c) The correlation of RBP expression levels between K562 and HepG2 cell lines.(d) The performance comparison of HDRNet on RBPs with identical expression levels across cell lines.(top: n=25 in each group; bottom: n=31 in each group; center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range) (e) The performance comparison of HDRNet on differentially expressed RBPs across cell lines.(top: n=3 in each group; bottom: n=3 in each group; center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range) (f) The correlation between RBP expression levels and HDRNet's dynamic prediction performance.Source data are provided as a Source Data file.(top: HepG2.expHepG2->K562 p=0.0152,HepG2.expK562->HepG2 p=0.0253; bottom: K562.expHepG2->K562 p=0.0187,K562.expK562->HepG2 p=0.0677;Pearson Correlation).HaoRan Zhu et al.(a) The performance comparison of HDRNet on RBPs with lower target expression levels.(b) The performance comparison of HDRNet on RBPs with higher target expression levels.(c) The comparison of HDRNet's dynamic prediction performance on RBPs with lower target expression levels.(d) The comparison of HDRNet's dynamic prediction performance on RBPs with higher target expression levels.Source data are provided as a Source Data file.
b
Supplementary Fig. 11.(a) The salient map of the high attention binding region captured by HDRNet.HDRNet successfully identifies the disease-related RNA repeats.(b) Performance comparison of DGCR8 and HNRNPU dynamic prediction, using the model trained on cell line data.(c) HDRNet identifies the CGG-rich region of DGCR8 binding patterns.Source data are provided as a Source Data file.INDEX 22).As illustrated in Supplementary Fig. 12b, HDRNet can dynamically distinguish these diseases-related RNA repeats sequences using models trained on different MBNL1 binding data, indicating that HDRNet can effectively extract RBP binding properties from the RBP binding data in different physiological environments.In summary, these results demonstrate the comprehensiveness of HDRNet in the task of dynamic prediction of RBP binding sites in multiple tissues.a Dynamic prediction based on model trained on 129Brain b chr15:80715413-80715514, Using C2C12 trained HDRNet chr15:80715413-80715514, Using 129Brain trained HDRNet chr4:28859047-28859148, Using C2C12 trained HDRNet chr4:28859047-28859148, Using 129Brain trained HDRNet CUG enriched region CCUG enriched region chr15:80715413-80715514 chr4:28859047-28859148 Supplementary Fig. 12.(a) Performance comparison of MBNL2 dynamic prediction performance using model trained on 129Brain date of mouse.(b) Salient binding regions identified by HDRNet under dynamic conditions.Source data are provided as a Source Data file.HaoRan Zhu et al. | HDRNet bioRχiv | 19
Supplementary Fig. 3. Ablation
comparison results of integrating RNA structure information from different methods.It can be observed that combining other features didn't further improve the performance of HDRNet (n=261 in each group, with mean ± SD).Source data are provided as a Source Data file. INDEXSupplementary
Note 4: Hierarchical Structure Improves the Prediction Performance and Fea- ture Correlation of HDRNet
(Muscleblind Like Splicing Regulator 1) in brain, heart, muscle, and myoblasts from mice (Wang et al. Cell 2012, PMID: 22901804) (22) from the CLIP-Seq data to further explore the performance of HDRNet in different contexts.In particular, we collected a total of five datasets (GEO accession: GSE39911), including two from the brain (129Brain, B6Brain), one from muscle (B6Muscle), one from heart (B6Heart), and one from myoblasts (C2C12 cells), where 129 and B6 are individual mouse ID numbers.The experimental results, as illustrated in Supplementary Fig.12aand Fig.4e, demonstrate that HDRNet provided performance improvements compared to other baseline methods, resulting in an increase in AUC from 5% to 10%.Meanwhile, it is important to note that MBNL1 and MBNL2 are both members of the muscleblind-like (MBNL) protein family, and therefore MBNL1 shares similar binding patterns with MBNL2, as discussed previously, showing binding specificity to CUG and CCUG pathological expansions(20,
Table 2 .
List of the candidate drugs for neurological diseases (two-sided Fisher-Irwin exact test).Source data are provided as a Source Data file.C 17 H 12 ClF 3 N 2 O | 5,976.6 | 2023-10-26T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Identification of the Causative Disease of Intermittent Claudication through Walking Motion Analysis: Feature Analysis and Differentiation
Intermittent claudication is a walking symptom. Patients with intermittent claudication experience lower limb pain after walking for a short time. However, rest relieves the pain and allows the patient to walk again. Unfortunately, this symptom predominantly arises from not 1 but 2 different diseases: LSS (lumber spinal canal stenosis) and PAD (peripheral arterial disease). Patients with LSS can be subdivided by the affected vertebra into 2 main groups: L4 and L5. It is clinically very important to determine whether patients with intermittent claudication suffer from PAD, L4, or L5. This paper presents a novel SVM- (support vector machine-) based methodology for such discrimination/differentiation using minimally required data, simple walking motion data in the sagittal plane. We constructed a simple walking measurement system that is easy to set up and calibrate and suitable for use by nonspecialists in small spaces. We analyzed the obtained gait patterns and derived input parameters for SVM that are also visually detectable and medically meaningful/consistent differentiation features. We present a differentiation methodology utilizing an SVM classifier. Leave-one-out cross-validation of differentiation/classification by this method yielded a total accuracy of 83%.
Introduction
Intermittent claudication [1] is a walking symptom. Patients with intermittent claudication suffer from lower limb pain after a walking for a short time. However, rest relieves the pain and allows the patient to walk again. As intermittent claudication involves trouble walking, patients often consult an orthopedic surgeon, and the number of such consultations has recently increased markedly [2]. Unfortunately, intermittent claudication is predominantly caused by not 1 but 2 different diseases: LSS (lumber spinal canal stenosis) and PAD (peripheral arterial disease). Toribatake et al. [1][2][3] noted that PAD and LSS produce similar symptoms and emphasized the significance of their differential diagnosis. Therefore, it is clinically very important (especially for orthopedic surgeons) to identify the causative disease of intermittent claudication.
Among the types of LSS, we focused on the L4 and L5 subtypes, which cause radicular symptoms that are mainly responsible for intermittent claudication in patients with LSS; these subtypes of LSS are difficult to differentiate from PAD [1]. For this disease, L4 and L5 indicate the vertebral level affected by stenosis.
There are 2 categories of examinations for differentiating between these conditions. Some examinations are simple but imprecise and often fail to differentiate the underlying diseases; 2 examples are palpation and observation of standing posture. The other examinations are precise but invasive and expensive; some examples are angiography, myelography, magnetic resonance imaging (MRI), and measurement of the ankle brachial index (ABI) [4]. Furthermore, these examinations require highly skilled professionals and precision instruments. Such complicated and expensive examinations 2 The Scientific World Journal are difficult to conduct at small hospitals and clinics. Furthermore, their high cost is undesirable for the patient. The optimal differentiation method would be an examination that used a minimal number of simple instruments and could be easily performed even by nonspecialists. Notably, the affected parts of the legs differ between the causative diseases and could thus produce kinematical differences in patients' gait patterns.
In this context, this study was performed to develop a new differentiation methodology utilizing minimally required data; simple walking motion data. The key features of this methodology are as follows.
(1) The Differentiation Is Based on Minimally Required Walking Motion Data. Aiming at the usage in small hospitals or at home, the differentiation was designed to use only the 2-dimensional gait pattern in the sagittal plane from the simple walking motion analysis. This presented a challenge. Additional advantages of the system are easy set-up and calibration, a short duration of measurement, and usability in a narrow space and a noncontrolled environment by nonspecialists.
(2) SVM (Support Vector Machine) [5,6] Classifier-Based Methodology with a High Rate of Accuracy. There is no research focusing on differentiating between PAD and LSS using gait feature analysis except for our own previous reports [3,7]. We present an SVM classifier-based methodology which is an extended version of one-versus-the-rest multiclass SVM classifier. Leave-one-out cross-validation showed a high rate of accuracy (83% in total) for differentiating among the normal (normal healthy individuals), PAD, L4, and L5 groups.
(3) Derivation of Medically Meaningful Differentiation Features Available for Visual Examination.
The key for obtaining differentiation with high accuracy is how to construct/select the features, that is, the input to classifiers. Our own previous reports [3,7] show that different diseases produce kinematical differences in gait patterns. Focusing on this fact, we select/produce kinematical features in gait patterns for the differentiation. Additional advantage is that because these features are visually detectable and medically meaningful/consistent, these features are also available for a medical interview or visual examination.
Related Works. We have conducted the only studies thus far to use gait analysis to differentiate between patients with LSS and PAD [3,7]. The present study differs from our past reports [3,7] in its inclusion of the presented classification methodology and the results thereof as well as in the added differentiation features (amplitude of the femur angle, maximally contracted muscle length of the gastrocnemius muscle from the reference model, and maximally relaxed muscle length of the quadriceps muscle from the reference model). If we do not restrict ourselves to differentiation of the causes of intermittent claudication, there have been a number of studies concerning the analysis of intermittent claudication and the classification of gait patterns. Here, we briefly review them.
Gait Analysis of Patients with LSS. Suda et al. [8] evaluated the improvement in gait after surgical treatment of patients with neurogenic intermittent claudication. Papadakis et al. [9] compared the gait patterns of healthy people with those of patients with LSS; they also evaluated the postoperative progression of the gait pattern of patients with LSS [10] and showed that the variability of the gait decreased relatively to the preoperative gait pattern. Yokogawa et al. [11] compared gait patterns between patients with lumbar spinal canal stenosis (L4 radiculopathy) and those with osteoarthritis of the hip and found several differences.
Gait Analysis of Patients with PAD. Scherer et al. [12] compared the gait patterns of healthy people with those of patients with PAD and found several distinctive characteristics of the patients' walking gaits. Myers et al. [13] compared the gait patterns of patients with PAD before and after the onset of pain and found the gait to differ only at the ankle joint. Gardner et al. [14] compared the gait patterns of healthy people with those of patients with PAD and found differences in walking parameters such as the walking speed, stride length, and swing and stance times.
Classification of Gait Patterns. The gait analysis methods used for classification have been summarized previously [15,16]. Wang et al. [17] presented a decision tree-based algorithm for classifying human walking motion/behavior. Kamruzzaman and Begg [18] identified and classified children with a cerebral palsy-related gait via an SVM-based method. Mezghani et al. [19] derived features and constructed a classifier for distinguishing asymptomatic from knee osteoarthritis-affected gait patterns.
Materials and Methods
Here, the methodology for obtaining gait pattern, the extracted features, and the SVM classifier-based methodology for the differentiation are described.
Methodology for Obtaining Gait Pattern
2.1.1. Participants. The participants were 13 normal healthy individuals (5 men and 8 women), 10 patients with PAD (9 men and 1 woman), 13 patients with L5 LSS (4 men and 9 women), and 10 patients with L4 LSS (6 men and 4 women). The group to which each participant belonged was determined by his or her medical diagnosis, which was established by the medical doctors among the authors. The diagnoses were made from comprehensive consideration of the patients' clinical features; radiological findings; surgical findings; MRI, magnetic resonance angiography, ABI, and contrast-enhanced computed tomography results; and the effects of selective nerve root blocks. These experiments were approved by the Medical Ethics Committee of Koseiren Takaoka Hospital. Figure 1 shows the walking motion measurement system. As simplicity was one of our requirements, we constructed the system to obtain 2-dimensional gait patterns and aimed to derive the differentiation features and differentiate the diseases using the minimum amount of information. We constructed the measurement system as simply as possible, with easy set-up and calibration, to allow its use in small hospitals even by a few nonspecialized medical personnel. Other aims were a short duration of measurement, ability to obtain measurements in a narrow space, and ability to obtain measurements in a noncontrolled environment.
Motion Capture.
We measured the participants' gait patterns using lightemitting diode (LED) markers and had them walk on the treadmill in semidarkness so that the LED marker positions could be easily captured. Semidarkness can convert a noncontrolled environment into a controlled environment. We placed handmade LED markers on each participant's acromion, anterior superior iliac, fibular head, lateral malleolus, and fifth metatarsal head. Figure 2(a) shows the definition of the coordinate frame and the marker positions. Note that the right side is forward in Figure 2. The LED markers were attached to the impaired leg. The participants practiced walking on the treadmill before the experiment for safety and to determine the appropriate treadmill speed. The latter was set to allow the participant to walk normally. The measurement was stopped if the participant felt pain. For safety, medical doctors stood by and watched each participant so that they could immediately stop the treadmill and help the participant in the event of an accident. We used a commercially available camera with a frame rate of 30 frames/s. Note that the obtained angle values were only used for analysis and that no calibration (such as measurement of leg length) was required or conducted.
Analysis.
The angles used for analysis are shown in Figure 2(b). We detected the marker positions using our own algorithm [3] based on an LK (Lucas-Kanade) filter [20]. We derived the angles from the marker positions. Note that the angles are not identical to the actual joint angles because they are mapped on the sagittal plane, and we therefore renamed them according to our own system. The mean data for 1 cycle of gait pattern were analyzed. The accuracy of this system depends on the resolution of the camera and the distance between the treadmill and camera and was from 0.007 to 0.04 [rad] for our set-up.
Extracted Features.
Our goal was to identify the causative disease of intermittent claudication. As described previously, the candidate diseases are PAD and 2 varieties of LSS, L5 and L4. We also included a number of normal healthy participants as a control group. In order to get the features, that is, the input variables for classifiers, we extract the features of gait pattern in the 4 groups; normal, PAD, L5, and L4. Table 1 shows the list of the extracted features, including the information about which features are used in the presented SVM classifier-based methodology described in Section 2.3. Focusing on the characteristic of each disease such as the areas where disruption of sensation and ischemia occur, we extracted the features associated with the motions of the angles of single joints. We remark about how to the knee angle at the start of the stance phase. The start of the stance phase was defined as the time at which the component of the marker attached to the fifth metatarsal head was maximal. The time at which the foot reaches its most forward position is not always identical to the start of the true stance phase, which it may precede. Therefore, we define that the knee angle at the start of the stance phase is the mean of the values of the knee angles from 4 frames before the start of the stance phase to the start of the stance phase. Letting 3 ( ) be the knee angle at = and ss be the time at which the stance phase started, we calculated the mean 3 ( ) from = ss − 4 to = ss . This was recorded as the value of the knee angle at the start of the stance phase.
The analysis of the angles of single joints corresponds to analysis of monoarticular muscles. However, there are also muscles that influence the angles of 2 adjacent joints, and it may be possible to derive features associated with these biarticular muscles. Therefore, we also extracted the features associated with biarticular muscles. Individual differences in body size make direct comparison of muscle lengths illogical; therefore, we used a reference model based on the bone lengths of the actual human skeleton model in our laboratory. We based the sites of attachment of the muscles to the bones in the reference model on anatomical data [21,22]. Figure 3 shows the resulting reference model. We focused on the gastrocnemius muscle where it is considered to be the affected area for PAD, and the quadriceps muscle where it is considered to be the affected area for L4. The input for this model was the knee and ankle (upper body, femur, and knee) angles of a given participant and the output was the length of the gastrocnemius (quadriceps) muscle in the reference model. The value thus derived differed from the participant's actual muscle length but could be compared with those of the other participants. We therefore let the muscle length derived from the model represent the participant's muscle length. These values could thus be said to represent normalized muscle lengths.
Methodology Based on the Support Vector Machine (SVM).
We will first introduce SVM [23][24][25]. SVM was originally a binary (2-class) classifier. Consider the given data set (x 1 , 1 ), . . . , (x , ), where x ∈ R and ∈ {−1, 1} is labeled for x . Then, we solve the following quadratic function: min w, , Here, the function maps data x to the higherdimensional feature space : R → F. The following hyperplane in the feature space splits the data into the 2 labeled classes: where w and are the parameters that specify the linear hyperplane. This can provide a nonlinear boundary for the 2 labeled classes in the original data space. Note that the hyperplane containing (w (x) + ) = 1 − is called the support vector. is termed the slack variable, and its solution gives the maximal margin for classification error. > 0 is the parameter controlling the tradeoff between the number of misclassified data points in the training and the separation of the remaining data with the maximal margin. The function constructs a kernel function such as (x , x ) = (x ) (x ).
The Scientific World Journal 5 The following are well-known candidates for the kernel function: where , , and are kernel parameters and RBF is the radial basis function. Given test data x , the classifying decision is made by The goal of our differentiation was to determine to which of the normal, PAD, L5, and L4 groups a given (test) dataset belongs. Therefore, a 4-class classifier was needed. The binary SVM classifier can be extended to a multiclass classifier [23,25,26]. There are 2 main approaches to constructing a -class SVM. The first is to train the binary classifier with regard to all combinations (totaling ( − 1)/2). If test data are given, we apply the ( − 1)/2 classifiers to it and decide the final output by voting, with the most-voted class being the final output. This method is called one-versus-one (OVO). The other approach is to train a -independent binary classifier by training the th classifier to regard the th class as the class with the positive label and the remaining classes as the class with the negative label. The th classifier decision is made by sgn ( (x )) = sgn (w (x ) + ) .
The overall decision is made by This approach is called one-versus-the-rest (OVR).
Presented Methodology. Not all features are always suitable for differentiating a certain group from the other groups (see the data for the features shown in Figures 4∼12). In this case, a voting-based method, which does not take the magnitude of the possibility value into account (but instead provides 0 or 1), does not always work well. With this in mind, we present a differentiation methodology based on OVR. Other merits of OVR are that it is simple and first-running and that it is easy to optimize the parameters for the classifications. The methodology is as follows.
Training. We trained 4 binary classifiers, each of which was for classifying 1 class versus the other classes: (1) normal versus the other groups, (2) PAD versus the other groups, (3) L5 versus the other groups, and (4) L4 versus the other groups. We selected appropriate features for every binary classifier based on the significant differences described in Section 3.
The features selected are shown in Table 1. In the table, "+" denotes the selected features and "−" the nonselected features for each classifier. The first row shows the target class for the binary classifier: for example, "normal" means the classifier for distinguishing the normal group from the other groups.
Evaluation. First, the corresponding features of the given test/sample (walking motion) data were calculated. Using these features, we calculated the decision value (x ) (5) for each classifier. Then, from (6), we obtained the final output.
Features Associated with the Motions of the Angles of Single
Joints. We first investigated the features associated with the motions of the angles of single joints. Figures 4, 5, and 6 show the results. Figure 4 shows that the ankle angle was larger in the PAD group and smaller in the L5 group than in the normal and L4 groups and differed significantly between the PAD and L5 groups according to the Tukey-Kramer method. A multiclass classification such as the SVM (OVR) generally requires classification/differentiation between a given group and all other groups, so the most useful features are those that are largest or smallest for a certain (target) group. Therefore, the mean ankle angle appeared useful for differentiating either PAD or L5 from the other groups.
Mean Ankle Angle.
Patients with PAD are susceptible to ischemia of the triceps surae muscles and therefore move to prevent the collapse and stenosis of the blood vessels inside these muscles. The patient attempts to keep the radii of the blood vessels large in order to minimize the loss of blood flow and thus avoid muscle ischemia. To accomplish this, the patient with PAD tends to keep the angle of the ankle large at all times. In contrast, patients with LSS (L5) have disruption of sensation around the tibialis anterior muscle and the bottom surface of the foot. This increases the risk for collision between the tip of 6 The Scientific World Journal Table 1: Extracted features for differentiating among normal, PAD, L5 and L4, and which features are used in the presented SVM classifierbased methodology. The first row shows the target class for the binary classifier: for example, "normal" means the classifier for distinguishing the normal group from the other groups. "+" denotes the selected features and "−" the nonselected features for each classifier. the foot and the ground. In order to decrease this risk, patients with L5 tend not only to keep the ankle angle small but also not to lift up their legs, which makes their walking resemble shuffling. These trends are considered to produce large mean ankle angle in PAD group while small in L5 group. Figure 5 shows that this feature is large in the L4 group, although the difference was not significant in Tukey-Kramer method; however, when we conducted t tests for each pair, we obtained significant differences between L4 and the other groups, shown in Figure 4 by the dashed lines. The difficulty of differentiating L4 from the other groups is evident; however, we anticipated that the knee angle at the start of the stance phase would be effective for differentiating the L4 group from the other groups despite the lack of a marked significant difference. The L4 group experiences disruption of sensation around the quadriceps muscle, which causes large unexpected bending/flexion of the knee angle just after landing. In order to walk smoothly despite the unexpected bending/flexion, the patient tends to keep the quadriceps muscle contracted, especially during landing. The large knee angle at the start of the stance phase is consistent with this observation. The value was close to 180 ∘ , indicating that the knee is extended as much as possible. This configuration with the extended angle is a singular pose from the viewpoint of robotic manipulators [27] and enables the force along the links to be resisted without any additional joint torque, allowing absorption of the impulsive force in that direction upon landing. This may be one reason why the L4 group maintained a knee angle close to 180 ∘ . Figure 6 shows that the femur angle should be useful for differentiating the normal group from the other groups.
Features Associated with Biarticular Muscles.
We used the models shown in Figure 3 and the angle data to calculate the maximum relaxed and contracted lengths of the muscles and their ranges of motion. Figures 7, 8, and 9 show the maximally relaxed and contracted lengths and the range of motion, respectively, of the gastrocnemius muscle from the reference model, and Figures 10, 11, and 12 show the same parameters for the quadriceps muscle from the reference model. We first considered the values for the length of the gastrocnemius muscle. Both the maximally relaxed and maximally contracted lengths increased from the PAD to the L4 to the L5 group. Only the normal group showed a different tendency, consistent with the results for the range of motion. Statistically significant differences were evident between many of the groups. The small values of every feature in the PAD group were consistent with the hypothesis that the patient keeps the length of the gastrocnemius muscle short. On the other hand, the large maximally relaxed and contracted muscle lengths and small range of motion in the LSS (L5) group support the hypothesis that the patient keeps the angle of the ankle small and avoids lifting up the legs. The maximally relaxed length appeared useful for differentiating PAD or L5 from the other groups, whereas the maximally contracted length appeared useful for differentiating L5 from the other groups. The range of motion was significantly greater in the normal group than in all other groups and would therefore be useful for differentiating the normal group from the other groups. We next considered the values for the length of the quadriceps muscle. The maximally contracted length was smallest in the L4 group, although the differences were slight. This corresponds to the hypothesis that patients with L4 LSS keep the quadriceps muscle contracted, especially during landing. The range of motion was larger in the normal and L4 groups and smaller in the L5 group. This was attributed to the tendency of patients with L5 LSS to avoid lifting up their legs, as described above. This feature appeared potentially useful for differentiating between the L4 and normal groups and the other groups but more useful for differentiating the L5 group from the other groups. The maximally relaxed length was significantly larger only in the normal group and would therefore be useful for differentiating the normal group from the other groups.
Differentiation by Machine-Learning Based Methodology.
Utilizing the extracted features listed in Table 1, we tried to differentiate the causative disease of intermittent claudication. For the comparison, we used not only the presented methodology described in Section 2.3 but also popular classifiers; LDA (linear discriminant analysis), decision tree, conventional one-versus-one SVM (OVO), and one versusthe-rest SVM (OVR). Matlab (Mathworks) statistics toolbox was used for LDA and decision tree, while implantation was conducted based on LIBSVM [24] for SVM. We used leaveone-out cross-validation for the evaluation. Note that we used normalized data for input to every classifier. We remark about the implantation of SVM. The implantation was conducted based on LIBSVM [24]. The RBF was chosen as the kernel function. The values for the 2 parameters and for the classifications were determined by a grid search [24]. The grid search finds good values by evaluating (e.g., from crossvalidation results) exponentially increasing values of and (such as = 2 −5 , 2 −3 , . . . , = 2 −5 , 2 −3 , . . .). Note that in the presented methodology, we applied the grid search to each classifier separately because the features used in each classifier were different. Table 2 shows the classification results about total accuracy in every classifier. Table 3 shows the detail of the classification. It can be seen that the presented method got very high accuracy, compared to the other methods. The total classification accuracy (83%) supported the efficiencies Normal PAD L4 L5 * * * * * * * * * * * P < 0.01 * * P < 0.05 * P < 0.1 Figure 12: Range of motion of the quadriceps muscle from the reference model. of the extracted differentiation features and the presented classification methodology. The reason for low accuracy at conventional classifiers might be that a participant with a certain group does not always have the all extracted features in her/his gait pattern. This requires nonlinear classifier such as SVM. However, conventional SVM still did not show high accuracy. Not all features are always suitable for differentiating a certain group from the other groups. For example, the mean ankle angle shown in Figure 4 can easily differentiate the normal group from the PAD and L5 groups but cannot easily differentiate the normal group from the L4 group. This is the reason why we presented the new methodology.
If we focus the results of the presented methodology, aggregation of the classification results with respect to the group of test data yielded the highest classification accuracy for the PAD group and the lowest for the L4 group. The high classification accuracy for the PAD group is preferable from a clinical perspective because the failure to identify PAD can cause serious problems such as necrosis of the lower limbs. We attribute the high accuracy for this group to the large differences in the corresponding features between the PAD group and the other groups. In contrast, identification of the L4 group was difficult. This is true for the results in all the classifiers. One reason for this might be its similarity to the normal group; this might also be expected from Table 3 that the presented methodology remarkably improved the accuracy for the identification of the L4 group. The identification of the L4 group is considered to be the key for the improvement of accuracy. The identification of other features for differentiating between the normal and L4 groups and further increasing the accuracy of the classification remains for future work.
Conclusion
Intermittent claudication is caused mainly by 2 different diseases, LSS (lumber spinal canal stenosis) and PAD (peripheral arterial disease). LSS can be subdivided into L4 and L5 disease. The medical treatments for these conditions are completely different, making their differentiation very important. At present, the diagnosis is made by analyzing many results from many examinations, including sophisticated methods such as MRI. This methodology is available only at well-equipped hospitals. With this in mind, this paper presents a novel SVM-based methodology for differentiating among normal healthy people and patients with PAD, L4, and L5 utilizing minimally required data, simple walking motion data. The simple walking measurement system was constructed to obtain 2-dimensional gait patterns in the sagittal plane and is intended for use at small hospitals or at home. The system's other key features are easy set-up and calibration, a short duration of measurement, usability in a narrow space by nonspecialists, and ability to obtain measurements in a noncontrolled environment. From the gait patterns, we then extracted several visually detectable and medically meaningful/consistent differentiation features that are also available for a medical interview or visual examination. The extracted features can largely be categorized into 2 groups: those associated with the angles of single joints (monoarticular muscles) and those associated with the angles of 2 adjacent joints (biarticular muscles). We used the derived differentiation features to construct an SVM-(support vector machine-) based differentiation method. The differentiation/classification was developed successfully and yielded a total accuracy of 83% in leave-one-out crossvalidation. The accuracy of the differentiation/classification was lower for identification of patients with L4. Our future work will focus on improving the accuracy of this diagnostic method. | 6,631.8 | 2014-07-07T00:00:00.000 | [
"Biology"
] |
The Influence of E-Trust and E-Satisfaction on Customer E-Loyalty toward Online Shop in E-Marketplace during Pandemic Covid-19
. Pandemic COVID-19 encouraged Indonesia retail migrate their business to e-commerce, especially e-marketplace or online shop. The aim of this study is to examine the relationship of e-trust and e-satisfaction in building e-loyalty. This study also aims to examine the impact of e-coupon, information quality and financial risk on e-trust and e-satisfaction. This study is one of the few studies that examine the relationship of e-coupon, e-satisfaction, e-trust, and e-loyalty simultaneously. The population of this study is Indonesia consumers that already shopping online through e-marketplace. This study employs a structural equation model (SEM) PLS to analyze the research model and data. This study uses non-probability – quota sampling to collect data. The samples were collected through online questionnaires from 423 online shop customers in Indonesia. The result found that e-trust and e-satisfaction are essential factors in developing e-loyalty toward Indonesia's online shop in pandemic COVID-19. This study also found that e-coupon and information quality have an essential role in building e-satisfaction and e-trust. The finding from this study has a theory and practical implications.
Introduction
The first case infection of novel corona virus (SARS COV-2) or COVID-19 was reported in the Wuhan province, China in December 2019 [1]. The infection of novel corona virus or COVID-19 is highly infectious which mutated quickly, this disease that has been spread to many countries around the world and become worldwide pandemic [1,2]. By 2021, COVID-19 infection has reached 180 million globally and has caused more than 3 million casualties globally [3]. The pandemic of COVID-19 beside caused casualties also devastated the social and economic worldwide [4]. Indonesia is one of the countries that suffer from the pandemic COVID-19. Indonesian government tried to control the spread of pandemic COVID-19 by restricting their citizen movement, establishing social distancing, and massive screening tests [5]. Over the fear of the pandemic COVID-19, Indonesian government also encouraged their people to avoid unnecessary outside activities, working from home, and buying from home [4]. Pandemic COVID-19 drive people to use online activities in their daily life. One of the online activities that rapidly increase during pandemic COVID-19 is online shopping activities [6]. Indonesia coordinating ministry of economic affairs stated an increase in Indonesia's online shopping activities during pandemic COVID-19 [7]. Pandemic COVID-19 has encouraged traditional retailer to switch their business to online * Corresponding author<EMAIL_ADDRESS><EMAIL_ADDRESS>retail in e-commerce platform, especially online marketplace [8,9]. This study focusses on the online shop in Indonesia that uses e-marketplace in their transactions.
To survive in the growing competition in emarketplace, online shop needed to focus on their customer's loyalty. Loyal customers are an essential asset for online shop sellers because a loyal customer is willing to pay for the premium prices and refer new customers. Also, loyal customers tend to repeat their purchases [10]. By managing customer loyalty, online shop will gain momentum and advantage in the growth of Indonesian e-marketplace. Customer trust and satisfaction are popular constructs in building loyalty relationships with customers [11,12]. This study proposes that e-trust and e-satisfaction as essential factors in developing customer e-loyalty toward Indonesia's online shop in e-marketplace during pandemic COVID-19.
The previous study found that there is an essential role of customer satisfaction and customer trust in building loyalty ( [10], [13,14]). Despite numerous empirical studies that explore the relationship of trust, satisfaction and loyalty, there are still small number of empirical studies that explore the relationship of customer trust, satisfaction and loyalty in the developing countries [15]. This study tries to fill this gap in the marketing literature by examining the relationship between customer satisfaction, trust and loyalty toward online shop in Indonesia e-marketplace. Therefore, it is imperative to analyze variable-variable that essential in building customer e-satisfaction and customer e-trust.
Online shopping is high-risk activities, because online shopping can lead to unpredictable results and unpleasant outcomes [16]. This study proposes financial risk as one of the antecedents' variables in building customer e-trust and e-satisfaction. From marketing strategy perspective, coupon is a powerful marketing promotion for many products and brands [17]. Digital coupon or e-coupon will likely influence customer satisfaction and their trust toward online shop [18]. However, there still a little research on the effect of coupon proneness on both customer e-satisfaction and customer e-trust, this study addresses this gap by investigating how e-coupon impact on customer trust and satisfaction. The quality of information in the online retailer page will help customers in their online shopping decision [19]. The information quality in the e-market retailer site will influence customer satisfaction and customer loyalty toward e-market retailer.
Objectives
This study consists of three problems. First, this study explores the relationship of e-trust and e-satisfaction on developing customer e-loyalty toward Indonesia's emarket retailer in pandemic COVID-19. The second problem from this study is exploring the impact of information quality, e-coupon, and financial risk on building e-satisfaction toward Indonesia's e-market retailer in pandemic COVID-19. The third problem from this study is to explore the relationship of information quality, e-coupon, and financial risk in developing etrust toward e-market retailer in Indonesia during pandemic COVID-19.
E-Loyalty
Customer loyalty is vital in building profitability for a business, therefore it is essential to build customer loyalty [20]. The early view of customers loyalty is concern about costumers repeat purchase behavior [20]. While according to [21], customer loyalty is the customer's positive attitude toward a brand presented in their repeated buying behavior. According to [20], customer loyalty is presented when online customers have a supportive attitude toward online retailers, and this attitude is manifested in their repeat buying behavior. This research defines e-loyalty as the commitment and positive attitude from customers toward the online shop, embodied in their repeat buying behavior.
E-Satisfaction
Customer satisfaction is a popular construct in the field of customers behavior [22]. The theoretical foundation of customer satisfaction is based on the expectancydisconfirmation theory, which stated that customer satisfaction resulted from customers' subjective comparison of their expectancy of the product or services and their experience with the product or services [23,24]. Customer's judgment on satisfaction was affected by positive or negative emotions and also their cognitive disconfirmation [25,26]. Online shopping employs a different customer experience than traditional shopping. Hence a new adaptive approach is needed for customer satisfaction in the online context. Study by [27] stated that e-satisfaction relates to customer's evaluation of their past online shopping experience whether their online shopping experience meet their expectation. This study defines e-satisfaction as customers ' judgment about their online shopping experience in the online marketplace based on their experience and expectancy.
E-Trust
E-commerce transactions involve high complexity and anonymity; therefore, customers need the role of trust in their online shopping transactions [28]. Trust in the online shopping were consists of two subjects trustor or the trusting party and trustee or the trusted party [28,29]. Trust in the online shopping context involves vulnerability from trustor toward trustee [28,29]. The trustor in this study is the customer of an online shop in Indonesia. While the trustee party is typically an institution, retailer, e-commerce shop owner, in this study, the trustee party in this study is the online shop in Indonesia. Trustworthiness is the most important thing for customers to evaluate when shopping online, and ecommerce will be facing stagnation without customer trust [30,31]. A study by [32] defined trust in ecommerce as the customer general belief toward evendor, which resulted in their behavioral intention. The general belief of e-vendor competence, integrity, and ability will lead to the costumer's trust toward e-vendor [32,33].
Financial risk
Shopping involves risks because the customer's decision in their shopping experience involves unpredictable results and unpleasant outcomes [16]. Risk is defined as the level of consumers perceived the probability of facing unpredicted outcomes of their decision [33,34]. The online shopping experience involves more risk factors than the traditional shopping experience [16]. There perceived risks in the online shopping context are divided into four types of perceived risk: financial risk, product risk, psychological risk, and time risk [36,37]. Financial risk is the perceived risk that involves potential risk for the loss of costumers monetary in their online transactions [36,38] This study adopted financial risk from online shopping perceived risk. This study's definition of financial risk is the potential risk for the loss of customer's money in their online transaction with an online shop in e-marketplace.
Information quality
The quality of the information in online shopping is vital for online customers. The information quality presented in the online shop will help potential customers in their buying behavior [19]. Consumers' assessment on the quality of information for the product or service that the seller provided on their online shop is one of the most critical factors that predict their buying behavior [39]. Studies by [19] defined information quality as consumers' evaluation of the seller's accuracy and comprehensive information of product and transaction. According to [40], information quality is defined as consumers' evaluation due to their perception of product or brand information based on accuracy, relevance, helpfulness, up to datedness, and unbiased measures.
E-Coupon
Coupons have an essential role in marketing, and coupons provide customers with special prices or discounts on some products or services; however, the customers decided to use the coupons they have is a trade-off between coupons value and using the coupons [41]. This study e-coupon was based on the view of costumer's coupon proneness in online shopping. The theoretical foundation of coupon proneness can be traced back to the transaction utility theory by Thaler [42]. The transaction utility theory stated that the utilities that customers derived from their transaction are depended on the perceived value of their deal [41,42]. One of the early constructs of coupon proneness comes from a study by Lichtenstein, which stated that the definition of coupon proneness is customers' increased propensity to respond to a deal purchase offer due to their evaluation of the value from the deal purchase offer [41,43]. In this study, the definition of the e-coupon is customers' increased propensity to respond to a deal purchase offer offered via e-commerce. Customers will respond positively toward e-coupon when they have positive perception toward the value of the deal purchase offered in e-coupon, and it also will affect their purchase evaluation.
The first problem, this study examines the relationship between e-trust, and e-satisfaction with eloyalty toward online shop in e-marketplace. Customer satisfaction has been an essential topic in the marketing literature [26,44]. Prior studies in the online shopping context found that customer satisfaction has positively influence on customer loyalty [44][45][46]. Beside customer satisfaction, prior study also proved that customer trust have an essential impact in building customer loyalty toward online shop [45,47]. Therefore, this study suggests that e-satisfaction and e-trust will lead to better e-loyalty toward online shop in Indonesia emarketplace.
H1a. e-satisfaction positively impact on e-loyalty.
Second problem, this study investigates the impact of information quality, e-coupon, and financial risk on e-satisfaction toward e-market retailer. Good information quality will help customer in their buying process [39]. Information quality also lead to better customer satisfaction in the online shopping context [49]. Coupons are an essential aspect of marketing for online shopping market [50]. Prior study proved coupon as marketing tools lead to better customer satisfaction [51,52]. The risk concept is vital for customers because customers consider risk in their evaluation and decision when choosing a particular brand [53]. A previous study found that customer perceived risk is the main factor that drove customer satisfaction [12,53].
Thus, this study proposes the following hypotheses. H2a. information quality will lead to better esatisfaction.
H2b. e-coupon positively influence e-satisfaction. H2c. customer perceived financial risk have negative impact on e-satisfaction.
Third problem, this study examines the impact of information quality, e-coupon and financial risk on etrust toward e-market retailer. Consumers perceived that online shop owners who maintain their information quality would have better quality and dependability [19]. Customer perceived information quality proved to be an essential factor in explaining customer trust in the e-commerce context [54]. The coupon is a vital marketing utility, and a coupon will increase customers transactions, and encourage customers to repeat purchases [55]. Prior studies found that a marketing strategy using a coupon will build customers ' trust toward a brand [55,56]. Customers consider risk one of the critical elements in their online buying decisions [57,58]. Prior studies by [58] found that customers perceived risk as an essential factor that must be focused on building consumers' trust in online buying behavior. Other previous studies also emphasize the importance of customer perceived risk on building e-trust by [31].
Thus, this study proposes that information quality, ecoupon and financial risk will have significant impact on e-trust toward online shop in Indonesia emarketplace.
H3a. information quality led to better e-trust. H3b. e-coupon positively influence e-trust. H3c. customer perceived financial risk negatively influence e-trust.
Measurement
The emphasis from this research is on the customers' loyalty toward online shops in Indonesia. This research adopted several previous studies in building the research model. Information quality variable was adopted from previous studies of retailer quality by Gopal Das [59,60]. The E-coupon construct was adopted from previous studies of coupon proneness by Xuefeng and Liu [61]. The E-trust construct was adopted from previous studies of the online trust concept by Kim et al, and Fang et al [10,19,62]. The e-satisfaction construct was adopted from Beyari et al, and Kuo et al [63,64]. The financial risk construct was adopted from Hong and Cha, Liu et al, Ko et al,Masoud ([38], [65][66][67]). The questionnaires items in this study were measured using a 5-point Likert type scale. The content validity of the questionnaires was checked with ten millennial respondents.
Data collection 4.1 Sample dan data collection
The population of this study is Indonesia consumers that already shopping online through e-marketplace. This study uses non-probability -quota sampling to collect data. The respondent's data from this research was collected using a questionnaire survey which collected from February 2021 until April 2021. Respondents were asked to remember their past online shopping experience with their favorite e-market retailer in Indonesia. The questionnaires were administered to 423 respondents. The usable questionnaires that were used for this study are 407 out of 423 administered questionnaires. The numbers of usable questionnaires were ideal for analyzing using multivariate structural equation modeling (SEM) statistics [68].
Demographic profile
The respondent data was collected from an online questionnaire between February and April 2021. There was a total of 407 usable questionnaires from 423 questionnaires. 59% of the respondents were female, 63% of the respondents were between ages 18 and 26, 50% of the respondents were college students, and 42% of the respondents used smartphones in their online shopping. Most of the respondents stated that their favorite e-marketplace is Tokopedia (37%) followed with Bukalapak (25%) and Shopee (21%).
Results and discussion
This study used Partial Least Square Structured Equation Model (PLS-SEM) to assess the research model and data. PLS-SEM is structured equation modelling that combine the analysis of a principal component with ordinary least square regression [69]. This study's PLS-SEM analysis processes consist of two stages: the confirmatory factor analysis (CFA) and the structural measurement model.
CFA
The CFA result from this research model is presented in Table 1. The validity of the variables in the research model is measured by analyzing whether the standardized loading factors from each variable in the research model above the recommended value or 0.5 [68]. From Table 2, each of the standardized loading factors is above the recommended level (0.5). Overall, all the latent variables have good validity. The reliability of the research model is measured with the score of construct reliability (CR), and the average variance extracted (AVE) is above the recommended level (CR >= 0.7, AVE >= 0.5). Table 2 shows that overall, the information quality, e-coupon, risk financial, esatisfaction, and e-trust have good reliability because all their CR and AVE were above the recommended level. Thus, overall, the examination of latent variables in this study has passed for good validity and reliability.
The next step is to evaluate the discriminant validity from each of the latent variable. The discriminant validity calculates and explain the empirically different among each latent variable in the research model. This study uses the Fornell and Larcker metric to explain the discriminant validity [68]. The Fornell and Larcker metric from this study is shown in Table 2.
Structural model
The structural model is statistical procedures in PLS-SEM that used to calculate and examine the relationship among the latent variables in the research model [70].
The results of the structural model calculations are presented in Table 3 and Figure 2. Table 3. Structural model results.
Results
H1a: e-loyalty 5 In the assessment of structural model, the relationship between latent variables can be called significant if the t-values of the relationship is above the recommended level or 1.96 [68,70].
Discussion
This study proposes three problems. The first problem is investigating the relationship of e-trust and esatisfaction on developing e-loyalty toward Indonesia's e-market retailer in pandemic COVID-19. The result found that both e-trust and e-satisfaction are the main drivers for e-loyalty. Hypotheses 1 was accepted. This finding is consistent with the prior studies that stated that satisfied customers lead to customer loyalty formation [71,72]. This result also consistent with previous studies that concluded that customer trust would lead to customer loyalty [48,73]. This result suggests that online shop in Indonesia e-marketplace should focus their strategy on building customer e-trust and e-satisfaction if they want to build loyal customers to survive in the online shop competition in Indonesia during pandemic COVID-19.
This study's second problem is to investigate the relationship of information quality, e-coupon, and financial risk toward e-satisfaction. The result found that e-coupon and information quality are significant predictors for e-satisfaction. Hypotheses 2 is partially accepted. The essential role of the coupon on building satisfaction is consistent with prior studies [51,52]. The result was also consistent with prior studies, which stated that information quality is one of the significant factors in explaining customer satisfaction [74]. These findings imply that online shop owners need to use ecoupon in their marketing strategy and online shop owners should provide comprehensive and clear information in their online retailer channel because both e-coupon and good information quality will lead to better customer satisfaction.
The third problem is to examine the relationship of product information quality, e-coupon, and customer perceived financial risk in explaining customer e-trust. The result stated that information quality and e-coupon positively impact e-trust. Hypotheses 3 is partially accepted. The importance of customer perceived information quality on developing e-trust is consistent with prior studies on e-commerce context [54]. The result is consistent with prior studies by [55] which stated that marketing strategy using a coupon will lead to customers' trust toward a brand. The information provided by online shop owners should be comprehensive, easy to understand, and straightforward because customers' perceived information quality will lead to customer trust toward online shops in pandemic COVID-19. The result also implies that digital marketing strategy using digital coupon or e-coupon will lead to better customer trust toward e-market retailer.
Besides the main problem, this study also found several other findings. This study also found that financial risk did not have a significant relationship with e-satisfaction and e-trust. The result contrasts with prior, which revealed that customer perceived risk is an essential aspect in building customer satisfaction [53,75]. This study also contradicts previous studies by [31], which found that perceived risk impacts trust in the mobile payment context. This result occurs because online shop customers in Indonesia are accustomed to shop on e-marketplace channels in Indonesia so that the customer perceived financial risk of shopping in emarketplace is low.
Theoretical implications
This study contributes to the marketing literature in several implications.
First, most of prior studies explain the relationship between customer trust, satisfaction, and loyalty [75,76]. The present study evaluates the relationship between e-coupon, e-trust, e-satisfaction, and e-loyalty simultaneously.
Second, most of prior studies focusses their research on the customer of online shop and customers online marketplace ( [25], [77,78]). However, this study focuses specifically on the customer of online retail that uses online marketplace or online shop.
Third, this study found that e-coupon have positive influence on the development of customer trust and customer satisfaction toward online shop. This result extends the application of e-coupon in the customer behavior literature.
Managerial implications
These study findings generate insight for online shop owner through several implications.
The result shows that customer satisfaction and customer trust are essential predictors for building eloyalty. The online shop owner should understand and develop their customer satisfaction and trust, which later will lead to e-loyalty.
This study also found that customer feels satisfied when they transaction using e-coupon in e-marketplace. The online shop should include e-coupons such as discounts, cashback, promotion prices in their marketing strategies to build customer satisfaction.
The result also implies that online shop owners need to emphasize information quality in their online shop channel. Several strategies can help online shop owners keep their information quality, such as online shop needs to keep up-to-date information about their product; online shop owner needs to present easy-tounderstand information.
Limitations
There are several limitations from this study that can be used for future research references. First, this study concentrated on building e-loyalty from e-trust and esatisfaction; Future studies can explore other variables such as repurchase intention, purchase intention, brand loyalty, etc. Second, this study focuses on the customers of online shop in Indonesia in the pandemic COVID-19, future studies could focus on the specific product categories that retailer sell in the e-marketplace, such as drink, fruits, smartphone, electronic. Third, this study focuses on online shop that uses e-marketplace ; future studies could investigate on online retail that uses other channel such as social media in their transactions. | 5,069.4 | 2023-01-01T00:00:00.000 | [
"Business",
"Computer Science"
] |
Production of bacterial amylases and cellulases using sweet potato ( Ipomoea batatas . ( L . ) Lam . ) peels
Peels of sweet potato (Ipomoea batatas) were buried in the soil for 14 days and the isolates associated with the degradation of the peels were obtained using standard microbiological procedures. The bacterial isolates obtained were screened for amylolytic and cellulolytic activities under different pH and temperatures as parameters and optimized for enzyme production. Sixteen (16) bacterial isolates were obtained and characterized and screened for amylase and cellulase production. Bacillus pumilus has the highest frequency of occurrence (18.75%) followed by B. subtilis (12.50%). After 24 to 48 h of incubation, B. pumilus produced highest concentration of amylase at 55°C, pH 6 (5.4 U/mL) while B. subtilis had the best cellulase production of 0.75 U/mL at 55°C, pH 7. B. pumilus and Bacillus subtilis produced the highest amylase and cellulase concentrations and seem to be the potential sources of these enzymes for industrial application.
INTRODUCTION
Amylases are class of enzymes, which are of important applications in the food, brewing, textile, detergent and pharmaceutical industries.Their most relevant effect is employed during starch liquefaction to reduce their viscosity, production of maltose, oligosaccharide mixtures, high fructose syrup and maltotetraose syrup (Jose and Arnold, 2014).During detergents production, they are applied to improve cleaning effect and are also used for starch de-sizing in textile industry (Aiyer, 2005).α-Amylase is characterized by its random hydrolysis of α-1,4-glucosidic bonds in amylose and amylopectin molecules, while amylopectin α-1,6-bonds are resistant to its cleavage (Parmar and Pandya, 2012).Many micro-organism such as Bacillus subtilis, Bacillus cereus, Bacillus polmyxa, Bacillus amyloliquefaciens, Bacillus coagulans, Lactobacillus, Escherichia, Proteus, Bacillus lincheniformis, Bacillus steriothermophilu, Bacillus megaterium, Strepotmyces sp., Pseudomonas sp.etc. were used in α-and β-amylases production.Although, among bacteria, Bacillus sp. was widely used for thermostable α-amylase production so as to meet industrial needs (Parmar and Pandya, 2012).
Cellulose is the most abundant biomass on Earth (Tomme et al., 1995).It is the primary product of photosynthesis in terrestrial environments and the most abundant renewable bioresource produced in the *Corresponding author.E-mail<EMAIL_ADDRESS>+2348033750651.
Author(s) agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License biosphere (Jarvis, 2003;Zhang and Lynd, 2004).Cellulose is commonly degraded by an enzyme called cellulase.This enzyme is produced by several microorganisms, mainly bacteria and fungi (Bahkali, 1996;Magnelli and Forchiassin, 1999;Shin et al., 2000;Immanuel et al., 2006).Cellulases from bacteria are more effective catalysts and less inhibited by the presence of material that has already been hydrolyzed.The greatest potential importance of using bacteria for cellulase production is the ease with which bacteria can be genetically engineered, high growth rate as compared to fungi, often more complex and in multi-enzyme complexes providing increased function and synergy, inhabit a wide variety of environmental and industrial niches (Ariffin et al., 2006;Sadhu and Maiti, 2013).However, the application of bacteria in producing cellulase is not widely used (Sonia et al., 2013).Some bacterial species used in cellulase production are Cellulomonas species, Pseudomonas species, Bacillus species and Micrococcus species (Nakamura and Kappamura, 1982).Cellulases are used: In the textile industry for cotton softening and denim finishing; in laundry detergents for colour care, cleaning; in the food industry for mashing; in the pulp and paper industries for drainage improvement and fibre modification (Cherry and Fidants, 2003).
Amylase and cellulase yields appear to depend upon a complex relationship involving a variety of factors like inoculums size, pH value, temperature, presence of inducers, medium additives, aeration, growth time, and so forth (Immanuel et al., 2006).
This study was therefore designed to isolate high amylase and cellulase producing bacteria from decaying sweet potato peels and to optimise for enzyme production.
Samples collection
Sweet potatoes (yellow skin) were purchased from Agbowo Market in Ibadan Metropolis, Oyo State, Nigeria.
Sample preparation
The peels of sweet potatoes were carefully scraped off so that the amount of corker removed was kept to a minimum.The scraped peels were buried inside the soil (14 cm deep) in Botanical Garden, University of Ibadan, Oyo State, Nigeria.
Isolation of organism
The buried scrapped peels were exhumed carefully after 14 days and put in a sterile nylon bag and carried to the laboratory.The adhering sand was shaken off and 1 g of the peel was homogenized aseptically using a sterilized mortar and pestle.Serial dilution was carried out and 1 mL of dilution 10 4 and 10 6 were mixed with 20 mL of plate count agar, poured on plate and allowed to set.This was incubated for 24 h at 37°C and observed for bacterial growth.Colonies with different morphology (shape, texture and colour) were isolated and purified by sub-culturing several times till pure cultures were obtained.Isolation was carried out in triplicates.
Identification of isolates
Organisms were identified based on their macroscopic, microscopic, physiological and biochemical characteristics of the isolates with reference to Bergey's Manual of Systematic Bacteriology (Sneath et al., 1986).The biochemical tests carried out are starch hydrolysis, catalase test, Voges Prokauer test, citrate utilization and endospore test.
Growth on carboxymethylcellulose (CMC)
CMC (2%) was prepared with nutrient agar, sterilized and allowed to cool to 45°C.It was poured into Petri dishes.The plates were inoculated with single streak of test organism and incubated at 37°C for 48 h.Presence of clear zones along line of growth indicates that the organism can utilize or break down cellulose and this was used to screen for cellulase production ability of the isolates.
Growth on starch
Starch agar was prepared by adding 1 g of soluble starch to 100 mL of nutrient agar.The mixture was homogenized and sterilized at 121°C for 15 min.This was then dispensed into sterile plates and allowed to set.A single streak of culture was made on the plate and incubated at 37°C for 48 h.After incubation, the plates were flooded with Gram's iodine.A positive result was indicated by retention of the iodine colour as a clear zone around the growth region indicating starch hydrolysis while unhydrolyzed starch formed a blue and black colouration with iodine.This was used to screen the bacterial isolates for amylase production.
Extraction of enzymes
The medium used was nutrient broth in which soluble starch and CMC (1%) was added respectively.It was sterilized at 121°C for 15 min, allowed to cool and the test organisms inoculated into it.It was then incubated at 30°C for 48 h after which the culture was centrifuged at 10,000 rpm for 15 min using a refrigerated centrifuge (IEC centra, MP4R model).The cell free culture supernatant was then assayed for amylase and cellulase production and activity.One unit (U) of enzyme activity is expressed as the quantity of enzyme, which is required to release 1 mol of glucose per minute under standard assay conditions (Muhammad et al., 2012).
Amylase assay
Amylase assay was determined using DNSA reagent method of Bernfeld (1955) as modified by Giraud et al. (1991).To 1 mL of culture supernatant was added 1 mL of the substrate containing 1.2% w/v soluble starch in 0.1 N phosphate buffer, pH 6.0.The enzyme substrate reaction was incubated at 45°C for 1 h.The reaction was brought to halt by adding a drop of 5 M NaOH.The amount of reducing sugar produced was determined with 3,5dinitrosalicylic acid (DNS). 1 mL of DNS reagent was added to filtrate-substrate reaction mixture and was heated in a boiling water bath at 100°C for 10 min.It was cooled with distilled water.The absorbance was measured at 540 nm using spectrophotometer (Unipec 23 D, Uniscope England).One millilitre of uninoculated blank similarly treated was used to set spectrophotometer at zero.Standard maltose concentrations were prepared within the range of 0.2 -3.0mg/mL maltose into the requisite medium.The results were then used to construct a standard curve.The spectrophotometer values were then extrapolated as maltose equivalent from the standard curve plotted (Bernfield, 1955).
Cellulase assay
Cellulase assay was determined using the method of Mandel et al., (1976). 1 mL of culture supernatant was added to 9 mL of the substrate containing 0.55% w/v of CMC (carboxymethylcellulose) in 0.55 M acetate buffer, pH 5.5.It was incubated at 45°C for 1 h.The reaction was brought to halt by adding a drop of 5 M NaOH. 1 mL of DNS was added to 1 mL of the filtrate in order to estimate the reducing sugar that was released.The mixture was boiled at 100°C for 10 min in water bath.After cooling, the absorbance was determined at 540 nm using Unispec 23D spectrophotometer.
Effect of different temperatures on amylase and cellulase productions
Nutrient broth was prepared and 10 ml each dispensed into screw capped bottles and sterilized at 121°C for 15 min and allowed to cool.Bacillus isolates were inoculated into each bottle and incubated at different temperatures (25, 37, 45, 55 and 65°C) for 24 h.Amylase and cellulase activities were then determined as described earlier.
Effect of different pH on amylase and cellulase production
Buffer was used to adjust the pH of nutrient broth to 3.0, 4.0, 5.0, 6.0 and 7.0 accordingly.10 mL of the adjusted nutrient broth was dispensed into screw capped bottles and sterilized at 121°C for 15 min.After cooling, test isolates were inoculated into each bottle and incubated at 37°C for 24 h.Enzymes activities were determined as earlier described.
RESULTS
Bacillus species had highest occurrence of bacterial isolate from buried potatoes peels after 14 days ( The effect of different temperatures on amylase and cellulase production of the Bacillus species are presented in Table 4, for all the isolates, there was a gradual increase in enzymes activities as the temperature increases with maximum concentration produced at 55°C before a general decline at 65°C.B. pumilus SPB8 produced highest concentration of amylase at 55°C (5.4 U/mL) while B. subtilus SPB7 produced cellulase best also at 55°C with 0.75 U/mL concentration.Least production of enzymes was noticed at 27°C for all isolates.Figure 1 shows the effect of pH, at pH 6, B. pumilus SPA7 produced the highest concentration of amylase (5.2U/mL) followed by B. megaterium SPB3 (4.2 U/mL) and the lowest producer at that pH is B. subtilis SPB7 (3.1 U/mL).The highest concentration of cellulase was produced by B. pumilus SPB8 (1.8 U/mL) at pH 5 followed by B. pumilus SPA1 (1.5 U/mL), while B. licheniformis SPB2 produced the least concentration at this pH (Figure 2).All the organisms recorded their highest cellulase production at either pH 5 or 6 with the exception of B. licheniformis SPB2 that recorded its highest cellulase production at pH 3.
DISCUSSION
The most predominant bacterial isolates obtained from decaying sweet potatoes peels were identified as In this study, B. pumilus produced the highest concentration of amylase (5.4 U/mL) at 55°C and pH 6 which was also reported by Andrea et al. (2007) which states that B. pumilus produced amylase between the pH of 5.8 and 7.5 and at a temperature of 55°C.Effect of temperature on amylase production was observed by varying growth temperature of isolates and optimum temperature was found to be 55°C.This findings agrees with the behaviour of amylases from Bacillus spp.isolated from soils as reported by Cordeiro et al. (2003) and Vipal et al. (2011) who reported 50°C as optimum temperature.The effect of temperature on cellulase production was also observed when temperature of the production medium was varied.Cellulase production was highest in the temperature range of 45 -55°C, with an optimum temperature of 55°C.Similarly, Shaikh et al. (2013) observed that Bacillus sp.produced cellulase optimally at 50°C and affirm that the thermostable property of cellulase has been shown to be of interest for industrial applications.Optimum pH for the production of cellulase by all the organisms used in this study ranged from 5 -7 with pH 5 been the most predominant.This result was in agreement with the findings of others like Goya and Soni (2011), Azzeddine et al. (2013) and Trinh et al. (2013) who reported pH 5, 6 and 7 respectively as the optimum pH for production of cellulase from Bacillus spp.
Conclusion
This study inferred that decaying sweet potato peels harbour amylolytic and cellulolytic Bacillus species and the enzymes produced by these bacteria can be harnessed for industrial application.Optimum temperature for amylase and cellulase production was 55°C, whereas optimum medium pH for amylase and cellulase was 6 and 5, respectively.B. subtilis and B. pumilus produced the highest concentration of amylase (5.4 U/mL) and cellulase (0.75 U/mL), respectively.
Table 1 .
Frequency of occurrence of bacterial isolate from decaying sweet potato peels.
Table 2 .
Colonial morphology of Bacillus sp.isolated from decaying sweet potato peels.
Table 1 )
. Bacillus sp.recorded 43.75% of occurrence; followed by Pseudomonas with 18.75%.Other bacteria isolated were Flavobacterium rigense, Proteus sp., Derxia gummosa, Azotobacter vinelandii and Micrococcus luteus.The colonial morphologies of Bacillus species isolated were represented on Table2, they all have raised elevations and cream colour while their texture are either smooth, dull or shiny.Also, they exhibit different colony shapes on the plate, B. pumilus is circular, B. licheniformis is rhizoid, B. megaterium is oval and B. subtilis is round.Table3shows the biochemical tests for bacillus isolates.The Bacillus spp.are Gram positive, rod shaped and endospore positive.All the bacillus isolates have the ability to hydrolyse starch and utilize citrate except B.
Table 3 .
Biochemical characteristics of Bacillus sp.isolated from decaying sweet potato peels.
Table 4 .
Amylase and cellulase concentration (U/mL) of Bacillus sp. at different temperature.Mean value of triplicate readings.Bold values indicate highest concentration of amylase and cellulase production, respectively at 55°C. | 3,088.4 | 2015-10-30T00:00:00.000 | [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
] |
Photonic-circuited resonance fluorescence of single molecules with an ultrastable lifetime-limited transition
Resonance fluorescence as the emission of a resonantly-excited two-level quantum system promises indistinguishable single photons and coherent high-fidelity quantum-state manipulation of the matter qubit, which underpin many quantum information processing protocols. Real applications of the protocols demand high degrees of scalability and stability of the experimental platform, and thus favor quantum systems integrated on one chip. However, the on-chip solution confronts several formidable challenges compromising the scalability prospect, such as the randomness, spectral wandering and scattering background of the integrated quantum systems near heterogeneous and nanofabricated material interfaces. Here we report an organic-inorganic hybrid integrated quantum photonic platform that circuits background-free resonance fluorescence of single molecules with an ultrastable lifetime-limited transition. Our platform allows a collective alignment of the dipole orientations of many isolated molecules with the photonic waveguide. We demonstrate on-chip generation, beam splitting and routing of resonance-fluorescence single photons with a signal-to-background ratio over 3000 in the waveguide at the weak excitation limit. Crucially, we show the photonic-circuited single molecules possess a lifetime-limited-linewidth transition and exhibit inhomogeneous spectral broadenings of only about 5% over hours’ measurements. These findings and the versatility of our platform pave the way for scalable quantum photonic networks.
R esonantly driven single two-level quantum systems provide a powerful route to generate indistinguishable single photons on demand and to coherently control the internal state of the individual quantum systems, which are crucial for the implementations of various quantum information processing schemes [1][2][3] . On-chip integration of single quantum systems offers the appealing prospects of having high degrees of scalability and stability of the systems [4][5][6][7] , which are increasingly important for the developments towards real applications. In practice, hybrid integrated quantum photonic systems provide an attractive viable solution to combine high-performance solid-state quantum systems and advanced nanophotonic elements in large scale on one chip. Among various types of solid-state quantum systems 7,8 , quantum dots and color centers in diamond naturally have benefitted from the rapid development of nanofabrication technologies for inorganic semiconductor materials and become relatively more sophisticated in building on-chip quantum devices 5,[9][10][11][12] . High-efficiency couplings of these quantum emitters with waveguides have been demonstrated by employing photonic nanostructures based on different operation principles 11,[13][14][15] . Very recently, exciting progresses, including large-scale integration of color centers 16 and low-background resonance fluorescence (RF) in waveguides from single quantum dot 14,17,18 , have been reported. However, despite impressive advances, the field of hybrid integrated quantum photonics still confronts several difficult challenges. For instance, the nanofabricated semiconductor structures readily result in charge fluctuations around the quantum systems and hence spectral wandering or diffusion of the emission energy from one photon to the other 6,11,19 , making the emitted photons indistinguishable only for a short-time delay. For quantum dots coupled to nanophotonic waveguides, the time delays for achieving indistinguishability are typically limited to tens of nanoseconds 14,17,18,20 . While lifetime-limited linewidths have been obtained with delicate control of the charge noise and fast scan 21,22 , spectral wandering persists and becomes a roadblock for scaling up the system size where longtime stable lifetimelimited transitions are required. Moreover, the randomness of the quantum emitters in spatial position, transition frequency and dipole orientation 6,20 complicates the collective control of their interaction with the circuits.
Single molecules embedded in organic crystalline matrices are a class of bit less popular solid-state quantum systems 23 but have been extensively studied in the context of biology, physical chemistry and spectroscopy 24 . Despite the usual impression that molecules suffer from photobleaching and broad spectra, it turns out that polycyclic aromatic hydrocarbons, such as dibenzoterrylene (DBT) embedded in anthracene (AC) or naphthalene crystal, can actually have narrow lines and be definitely photostable 25,26 . They have been demonstrated as stable single-photon emitters 26 , nearly ideal two-level quantum systems 27 , nanometer-sized acoustic detectors 28 and flexible interfaces with alkali atoms 29 . Recently, single molecules also began to be actively explored for integrated quantum photonics 30 due to their unique advantages of possessing stable lifetime-limited zero-phonon lines (ZPLs) at liquid helium temperature 24,25 and of being small (~1 nm) suitable for doping at high densities 24 , which are both important for the scalability. Towards integrated molecular quantum devices, several groups have reported encouraging results on integrating single molecules onto a variety of nanophotonic circuit structures 26,[31][32][33][34][35][36] . However, on-chip generation and routing of RF from single molecules with the suppression of excitation laser background have not been achieved. Moreover, the molecules near nanofabricated surfaces also tend to have inhomogeneous spectral fluctuations upon light excitation 33 . In addition, the existing integration schemes have difficulties to control the molecules' dipole orientation, which is crucial for coupling with the nanophotonic elements. Therefore, a grand challenge in this line of research is to seamlessly integrate organic molecules to inorganic semiconductor nanostructures in a controlled and cryogenic-temperature compatible manner without introducing appreciable spectral wandering and scattering background 31,33 .
In this work we report an organic-inorganic hybrid quantum photonic platform that does meet the aforementioned key challenges and demonstrate photonic circuited background-free resonance fluorescence from a lifetime-limited-linewidth transition of single molecules with ultrahigh spectral stability. Here single DBT molecules embedded in an ultrathin AC crystal is hybridly integrated to silicon nitride (Si 3 N 4 ) based waveguide circuits via a pickand-place approach. The dipole orientations of the DBT molecules can be identified and collectively aligned with the waveguide structure. By applying spatial filtering in both real and Fourier spaces as well as polarization filtering, we could greatly suppress the same-frequency excitation laser background and demonstrate single-molecule resonance fluorescence in the waveguide with record-high signal to background ratios (SBRs) of over 3000. In addition, we show that the photonic-circuited molecules exhibit lifetime-limited ZPLs and possess such linewidth for hours over many excitation cycles without any feedback control. Our platform decouples the nanofabrication of the semiconductor photonic structures and the organic host materials, and thus allows the inclusion of various advanced nano-optical elements and microelectrodes to control the photonic coupling and the molecules, ensuring a high degree of scalability.
Results
Photonic-circuited single photons from single molecules. Figure 1a sketches the architecture and operation principle of the hybrid integrated quantum photonic system, which is comprised of a nanofabricated inorganic photonic structure, i.e., Si 3 N 4 based waveguide circuits 37 , and an organic crystalline flake, i.e., an AC nanosheet with DBT molecules embedded 25,38 . The nanosheet is obtained through a co-sublimation process 25 and has a hexagonal shape with an area of 7200 μm 2 and a thickness of 150 nm (Supplementary Note 1) 39 . The thin thickness facilitates an evanescent coupling of DBT molecules to the fundamental transverse electric (TE 0 ) mode of the waveguide (see the mode profile in the lower inset of Fig. 1a; Supplementary Note 2). Through a pickand-place process illustrated in Fig. 1b, the nanosheet is integrated and bonded to the S 3 N 4 waveguides via van der Waals forces. A crucial advantage of our system is that the dipole orientations of the embedded DBT molecules can be collectively aligned to the target waveguide during the assembly (Supplementary Note 1) since molecules' dipole orientations are aligned to the "b" axis of the AC crystal 38 , recognizable from the hexagonal shape 40 . The AC nanosheet and the bonding remain robust in the helium bath cryostat with the temperature cooled down to 1.4 K. Under a narrow-band laser excitation from free space, individual molecules can be selectively excited and emit single photons into the waveguide, which are guided towards a 2 × 2 multi-mode interference (MMI) coupler for on-chip beam splitting 41 . The split streams of single photons are further routed by 560 μm before outcoupling to free space via two grating couplers (GC1 and GC2). According to the molecule levels indicated by the simplified Jabłoński diagram as in the top inset of Fig. 1a, a long-pass filter (LPF) is applied to collect the Stokes-shifted fluorescence while a narrow band-pass filter (BPF) is used for collecting RF, i.e., 00ZPL emission under resonant 0-0 excitation. As illustrated in Fig. 1c, the photon streams from GC1 and GC2 after passing the filter are separated by a D-shaped mirror and sent for further independent processing and detection (Supplementary Note 3). Figure 2a presents two fluorescence-excitation spectra obtained from recording the Stokes-shifted fluorescence from GC1 and GC2 as the laser frequency is scanned through molecules' 00ZPL inhomogeneous band of 1.25 THz. Each spectrum include 710 narrow lines which correspond to this number of molecules situated in slightly different nanoscopic environment 24 . With a consideration of the laser spot size, we estimate every 20 nm length there is one molecule coupled to the waveguide. Such a high density of well-aligned quantum emitters in combination with microelectrodes for the Stark effect promises scalability of our platform 6 . Figure 2b displays a high-resolution excitation spectrum from one molecule. Second-order cross-correlation function g (2) (τ) of the Stokes-shifted fluorescence collected through GC1 and GC2 depicts an anti-bunching dip of g (2) (0) = 0.013(1) in Fig. 2c. The dip residue is mainly contributed by APD dark counts. Figure 2 has featured on-chip generation, beam splitting and routing of single photons from single DBT molecules. Photonic-circuited resonance fluorescence from single molecules. The above demonstration lays the groundwork for photonic-circuited RF with ultrahigh SBRs. The key to achieve this is to enhance the signal and suppress the same-frequency laser background. As shown in Fig. 3a, the alignment of molecule dipoles with the TE 0 mode field asks for an aligned linearly polarized excitation to maximize the RF signal. This is faithfully confirmed by the polar plot of the emission intensity with the excitation polarization in the right panel of Fig. 3a. To suppress laser background, we take two innovative measures. Firstly, we exploit the advantage that the polarization state of the RF on the chip can be altered at will by bending the waveguide. Therefore we bend the photonic circuit in such a way that the output polarization is orthogonal to the excitation laser, enabling a cross-polarization detection to eliminate the excitation without blocking any signal. Secondly, we utilize the property that the output from the grating is directional and has a narrow distribution in the Fourier plane (Supplementary Note 2). Figure 3b shows a recorded Fourier-plane image (Supplementary Note 4) for the filtered 00ZPL fluorescence from GC1 for a molecule under 0-1 excitation. A small bright spot is observed. On the contrary, the laser background shows a broad speckle-like distribution in Fig. 3c. Therefore we place a pinhole at the Fourier plane, as indicated by the dashed rectangles in Fig. 3b, c, to block a large portion of background and allow the signal to pass. Besides applying the two above measures, a regular pinhole in the real-image plane is also used. These measures allow us to selectively suppress the laser background by over 7 orders of magnitude (Supplementary Note 5). Figure 3d displays a clean narrow RF-excitation spectrum recorded from GC1 under an excitation intensity level of S = 0.17, where S denotes the saturation parameter, i.e., the excitation intensity normalized by the saturation level (Supplementary Note 8). The excitation-intensity dependent laser background and SBR are presented in Fig. 3e, where the symbols are the measured data and the solid curves are the fittings. The SBR reaches 216 ± 10 at the weak excitation limit and decreases with the increase of the excitation intensity as expected, for instance, SBR = 108 ± 3 at S = 1.0. Note that here the SBRs result from a particular off-chip detection scheme, where the sites of laser excitation and RF collection are quite close due to the constraint by the field of view of our optics. The scheme inevitably incorporates scattered laser background that is not waveguide-coupled. As a matter of fact, the directly scattered laser background dominates in the total background and leads to slightly different measured SBRs for the two grating couplers and for molecules from different positions (Supplementary Note 5). However, for applications in future developments, laser excitation and RF detection could be completely separated by using different approaches, for instance, routing RF to locations centimeters away, photonic wire bonding to optical fibers 42 and on-chip integration of photon detectors 12 . Therefore, it is the waveguide-coupled laser background field E b,wg that really matters and should be quantified. It turns out to be a highly nontrivial task to separate E b,wg from the rest specklelike scattered background field E b,sp since they coherently interfere to form the final recorded intensity pattern Here we study the background intensity pattern as a function of the laser frequency and present the recorded images in Fig. 3f. The pattern at the central region exhibits a pronounced periodic evolution with the laser frequency. The color-coded traces in Fig. 3g plot the intensities at the marked positions in Fig. 3f as a function of the laser frequency change. All traces exhibit sinusoidal periodic evolutions with a common period of 173.4 GHz, which simply corresponds to the additional 804 ± 7 μm propagation length that the waveguide-coupled background experiences from the excitation site to GC1 (orange-dashed trace in Fig. 3a). These observations lead us to conclude that the background pattern variation is due to a simple two-part interference of waveguide-coupled and speckle-like scattered background fields. This finding and the measurements in Fig. 3g enable us to quantitatively determine the fraction of the waveguide-coupled background to be 6.5(3)% and thus the SBRs of 3320 ± 220 and 1660 ± 90 in the waveguide at the weak excitation limit and S = 1.0, respectively (Supplementary Note 6). Similar corresponding SBRs of 3610 ± 490 and 1810 ± 240 in the waveguide are obtained by studying the background properties from GC2 (Supplementary Note 6). We attribute such high SBRs to the following unique advantages of our platform. Both the AC nanosheet and nanofabricated Si 3 N 4 waveguides have smooth surfaces with measured roughness of 0.2 nm in root-mean-square deviation (Supplementary Note 1). The naturally formed crystalline AC nanosheet possesses excellent mechanical properties and is free of cracks in the device even when the temperature is cooled to superfluid helium temperature. The controlled collective alignment of the molecular dipoles with the waveguide minimizes the excitation laser intensity. All these properties greatly suppress laser scattering into the waveguide mode.
Ultrastable lifetime-limited molecular transition. Next we examine if the linewidth of the molecular transition is lifetimelimited, i.e., 2τ 1 ffi τ 2 , where τ 1 and τ 2 are the excited-state lifetime and decoherence time, respectively. Firstly we determine τ 2 by studying the RF-excitation spectrum of a single molecule at varied laser excitation intensities as shown in Fig. 4a. According to the optical Bloch equation 43 , the linewidth Δυ (in Hz) is related to the saturation parameter and decoherence time via Linewidths Δυ of the RF-excitation spectra are extracted and plotted in Fig. 4b as a function of S. By fitting the Δυ curve, we obtain the decoherence time of τ 2 = 7.30 ± 0.09 ns and the linewidth at the weak excitation limit (i.e., S→0) of 43.61 ± 0.56 MHz. Excited-state lifetime τ 1 can be extracted from the second-order photon correlation function g (2) (τ). Figure 4c presents g (2) (τ) of the RF photons split on the chip for the molecule under S = 0.64. A nearly perfect anti-bunching dip is again observed. By fitting g (2) (τ) with the consideration of APD dark count rate (Supplementary Note 7), we determine τ 1 = 3.89 ± 0.38 ns. Therefore we achieve τ 2 /2τ 1 = 0.94 ± 0.05, i.e., nearly lifetime-limited transition from a waveguide-coupled single molecule. This is a remarkable achievement for waveguidecoupled solid-state quantum emitters and is more stringent than the demonstration of subsequently emitted indistinguishable photons 21,22 .
In the following, we demonstrate that the waveguide-coupled molecules can be almost free of spectral wandering for hours. Figure 5a presents the recorded fluorescence-excitation spectra under repeated laser scanning for two hours with a scan speed of 200 MHz/s at S = 0.12 for another molecule. These spectra possess a nearly lifetime-limited linewidth of 43.5 MHz with a standard deviation of 2.7 MHz. In Fig. 5b, the red symbols depict a typical single-scan spectrum while the blue symbols display the spectrum obtained from a direct superposition of all recorded excitation spectra. The latter represents an inhomogeneously broadened spectrum due to spectral diffusion over two hours and exhibits a linewidth of 45.8 MHz, which amounts to a broadening by only 2.3 MHz (~5%). The inset of Fig. 5b presents a spectral autocorrelation analysis, which indicates the correlation kept Fig. 5c, d for the excitation intensity level of S = 1.2, the inhomogeneous broadening accumulated for two hours is as low as 8.5% and the spectral autocorrelation is kept above 0.98. Different molecules with similar spectral stability are shown in Supplementary Note 9. We remark that such level longterm spectral stability is not shared by any other waveguidecoupled solid-state quantum systems 16,18 . For quantum dots, the lifetime-limited linewidths are typically obtained from single-scan spectrums with a scanning frequency over 10 kHz (<0.1 ms per spectrum) to exclude effect of slow spectral diffusion 21 . Moreover, the spectral stability of our molecules is obtained without any feedback control for charge stabilization, which will become important when the system size scales up. The demonstration that the photonic-circuited molecules possess a lifetime-limited transition with such stability implies the generation of singlephoton streams indistinguishable for hours and the prospect of overcoming the formidable challenge to have multiple emitters on one chip with a matched transition frequency. We attribute the spectral stability to a series of advantages of our hybrid-integration platform. Spectral diffusion is often caused by nanoscopic charge fluctuations around the emitter from trapped or wandering charges around nanofabricated semiconductor surfaces 11,19,44 . The charge fluctuations could also be optically activated 33 in particular for low bandgap materials. In our platform, the nanofabricated photonic circuits are based on wide-bandgap material Si 3 N 4 and possess surfaces with a roughness of only 0.2 nm in root-mean-square deviation (Supplementary Note 1). The DBT molecules are embedded in the van der Waals bonded AC crystal 25,38 which provides a stable host environment. Our crystalline AC nanosheets are free of nanofabrication and have smooth surfaces with relatively large areas (~10000 μm 2 ) (Supplementary Note 1). A combination of these elements ensures a stable charge environment for the DBT molecules.
Discussion
In summary, we have presented an organic-inorganic hybrid quantum photonic platform that enables on-chip generation, beam splitting and routing of background-free resonance fluorescence from single molecules with an ultrastable lifetimelimited transition. The organic part is nanofabrication-free and allows a collective alignment of the dipoles of the denselydoped molecules with the nanophotonic elements. The sample format also offers an almost independent design and fabrication of the inorganic part of the circuits. These advantages will readily allow the extension of the current photonic structures to include more complex architectures, for instance, in-plane microcavities to improve emitter-waveguide coupling efficiency from the current value of 8% (Supplementary Note 8) to near unity and to enhance the 00ZPL emission 14,15,27,45 , microelectrodes to electrically tune molecules' transition via Stark effect for on-chip generation of indistinguishable single photons from independent molecules 46,47 , and superconducting nanowire single-photon detectors to benefit from the ultrahigh on-chip SBR 12 . The relatively large area (~10000 μm 2 ) of the AC nanosheets in principle could simultaneously cover several tens of waveguides in parallel, which allow thousands of molecules simultaneously coupled to these waveguides in the circuit, promising a large scale integration. We remark that apart from the Si 3 N 4 platform other platforms such as lithium niobate and aluminium nitride 7 , also could be employed to make properties of the nanophotonic elements electrically reconfigurable. Our molecular chip architecture together with the pick-and-place integration technique enables flexible interfacing with other on-chip systems such as color centers 10,16 , quantum dots 20 and rare-earth ions 48 to combine the merits of different systems. Thus we believe this work makes an important step to scalable molecular quantum photonics.
Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request. Fig. 4 Lifetime-limited-linewidth transition and anti-bunching of RF photons that are beam split on the chip. a RF-excitation spectra at varied S (color-coded). The RF signals from GC1 and GC2 are combined and the background and dark count rates are subtracted. The laser frequency is scanned with a speed of 200 MHz/s. b Linewidth extracted from the measured RF-excitation spectra as a function of S. The solid orange line is the fit 4υ ¼ ffiffiffiffiffiffiffiffiffiffi 1 þ S p = πτ 2 À Á . The error bars represent the fitting errors of the linewidths. c Second-order cross-correlation function (g (2) (τ)) of the RF photons from GC1 and GC2 when S = 0.64 (excitation power is 9.57 nW). The orange line is the theoretical curve for the RF (g (2) (0) = 0.020(1)) with the APD's dark count considered. Inset: sketch of HBT experiment with onchip beam splitting. | 4,949 | 2022-07-09T00:00:00.000 | [
"Physics"
] |
Measurement of W ±-boson and Z-boson production cross-sections in pp collisions at √ s = 2 . 76 TeV with the ATLAS detector The ATLAS
The production cross-sections for W± and Z bosons are measured using ATLAS data corresponding to an integrated luminosity of 4.0 pb−1 collected at a centre-of-mass energy √ s = 2.76 TeV. The decay channels W → `ν and Z → `` are used, where ` can be an electron or a muon. The cross-sections are presented for a fiducial region defined by the detector acceptance and are also extrapolated to the full phase space for the total inclusive production cross-section. The combined (average) total inclusive cross-sections for the electron and muon channels are:
Introduction
The processes that produce W and Z bosons1 in pp collisions via Drell-Yan annihilation are two of the simplest at hadron colliders to describe theoretically. At lowest order in quantum chromodynamics (QCD), W-boson production proceeds via qq → W and Z-boson production via qq → Z. Therefore, precision measurements of these production cross-sections yield important information about the parton distribution functions (PDFs) for quarks inside the proton. Factorisation theory allows PDFs to be treated separately from the perturbative QCD high-scale collision calculation as functions of the event energy scale, Q, and the momentum fraction of the parton, x, for each parton flavour. Usually PDFs are defined for a particular starting scale Q 0 and can be evolved to other scales via the DGLAP equations [1][2][3][4]. Measurements of on-shell W/Z-boson production probe the PDFs in a range of Q 2 that lies close to m 2 W /Z . The range of x that is probed depends on the centre-of-mass energy, √ s, of the protons and the rapidity coverage of the detector. Each measurement of these production cross-sections at a new value of √ s thus provides information complementary to previous measurements. The combinations of initial partons participating in the production processes of W + ,W − , and Z bosons are different, so each process provides complementary information about the products of different quark PDFs. This paper presents the first measurements of the production cross-sections for W + , W − and Z bosons in pp collisions at √ s = 2.76 TeV. The data were collected by the ATLAS detector at the Large Hadron Collider (LHC) [5] in 2013 and correspond to an integrated luminosity of 4.0 pb −1 . To provide further sensitivity to PDFs, and to reduce the systematic uncertainty in the predictions, ratios of these cross-sections and the charge asymmetry for W-boson production are also presented. The measurements are performed for leptonic (electron or muon) decays of the W and Z bosons, in a defined fiducial region, and also extrapolated to the total cross-section.
Previous measurements of the W-boson and Z-boson production cross-sections in pp collisons at the LHC were performed by the ATLAS and CMS Collaborations at √ s = 5.02 TeV [6], 7 TeV [7, 8], 8 TeV [9,10] and 13 TeV [11,12], and by the PHENIX and STAR Collaborations at the RHIC at √ s = 500 GeV [13,14] and 510 GeV [15]. This is the first measurement at 2.76 TeV. Other measurements of these processes were performed in pp collisons at
ATLAS detector
The ATLAS detector [24] at the LHC covers nearly the entire solid angle around the collision point. It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic (EM) and hadronic calorimeters, and a muon spectrometer (MS) incorporating three large superconducting toroid magnets. The inner-detector system (ID) is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the pseudorapidity range |η| < 2.5.2 The high-granularity silicon pixel detector covers the vertex region and typically provides three measurements per track. It is followed by the silicon microstrip tracker, which usually provides eight measurements from eight strip layers. These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to |η| = 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold associated with the presence of transition radiation.
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, EM calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) sampling calorimeters, with an additional thin LAr presampler covering |η| < 1.8 that is used to correct for energy loss in material upstream of the calorimeters. Hadronic calorimetry in this region is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures with |η| < 1.7, and two copper/LAr hadronic endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for EM and hadronic measurements, respectively.
The muon spectrometer comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by superconducting air-core toroids. The precision chamber system covers the region |η| < 2.7 with three layers of monitored drift tubes, complemented by cathode strip chambers in the forward region, where the backgrounds are highest. The muon trigger system covers the range |η| < 2.4 with resistive plate chambers in the barrel and thin gap chambers in the endcap regions.
The ATLAS detector selected events using a three-level trigger system [25]. The first-level trigger is implemented in hardware and used a subset of detector information to reduce the event rate to a design 2 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = − ln tan(θ/2). Angular distance is measured in units of ∆R ≡ (∆η) 2 + (∆φ) 2 .
value of at most 75 kHz. This was followed by two software-based triggers that together reduced the event rate to about 200 Hz.
Data and simulation samples
The data used in this measurement were collected in February 2013 during a period when proton beams at the LHC were collided at a centre-of-mass energy of 2.76 TeV. During this running period a typical value of the instantaneous luminosity was 1 × 10 32 cm −2 s −1 , significantly lower than in 7, 8 and 13 TeV data-taking conditions. The typical value of the mean number of collisions per proton bunch crossing (pile-up) µ was 0.3. Only data from stable collisions when the ATLAS detector was fully operational are used, yielding a data sample corresponding to an integrated luminosity of 4.0 pb −1 .
Samples of Monte Carlo (MC) simulated events are used to estimate the signals from W-boson and Z-boson production, and the backgrounds containing prompt leptons: electroweak-diboson production and top-quark pair (tt) production. Background contributions arising from multijet events that do not contain prompt leptons are estimated directly from data, with simulated events used to cross-check these estimations in the muon channel.
Production of single W and Z bosons was simulated using P -B v1 r1556 [26][27][28][29]. The parton showering was performed using P 8.17 [30]. The PDF set used for the simulation was CT10 [31], and the parton shower parameter values were those of the AU2 tune [32]. Additional quantum electrodynamics (QED) emissions from electroweak (EW) vertices and charged leptons were simulated using P ++ v3.52 [33]. Additional samples of simulated W-boson events generated with S 2.1 [34] are used to estimate uncertainties arising from the choice of event generator model. In these S samples, simulation of W-boson production in association with up to two additional partons was performed at next-to-leading order (NLO) in QCD while production of W bosons in association with three or four additional partons was performed at leading order (LO) in QCD. The sample cross-sections were normalised to next-to-next-to-leading-order (NNLO) QCD predictions for the total cross-sections described in Section 8.
P
-B v1 r2330 was used to generate tt samples [35]. These samples had parton showering performed using P 6.428 [36] with parameters corresponding to the Perugia2011C tune [37]. The CT10 PDF set was used. Additional QED final-state radiative corrections were applied using P ++ v3.52 and τ-lepton decays were performed using T v25feb06 [38]. Single production of top quarks is a negligible contribution to this analysis, compared with tt production, so no such samples were generated.
Multijet production containing heavy-flavour final states, arising from the production of bb or cc pairs, were simulated using P 8.186. The CTEQ6L1 PDF set and AU2 tune were used. Events were required to contain an electron or muon with transverse momentum p T > 10 GeV and |η| < 2.8.
The detector response to generated events was simulated by passing the events through a model of the ATLAS detector [43] based on G 4 [44]. Additional minimum-bias events generated using P 8.17 and the A2 set of tuned parameters, were overlaid in such a way that the distribution of µ for
Process
Generator simulated events reproduced that in the real data. The resulting events were then passed through the same reconstruction software as the real data.
The simulated samples used for the baseline analysis are summarised in Table 1, which shows the generator used for each process together with the order in QCD at which they were generated.
Event selection
This section describes the selection of events consistent with the production of W bosons or Z bosons. The W-boson selection requires events to contain a single charged lepton and large missing transverse momentum. The Z-boson selection requires events to contain two charged leptons with opposite charge and the same flavour.
Events were selected by triggers that required at least one charged electron (muon) with p T > 15 GeV (10 GeV). These thresholds yield an event sample with a uniform efficiency as a function of the E T and p T requirements used subsequently to select the final event sample. The hard-scatter vertex, defined as the vertex with highest sum of squared track transverse momenta (for tracks with p T > 400 MeV), is required to have at least three associated tracks.
Electrons are reconstructed from clusters of energy in the EM calorimeter that are matched to a track reconstructed in the ID. The electron is required to have p T > 20 GeV and |η| < 2.4 (excluding the transition region between barrel and endcap calorimeters of 1.37 < |η| < 1.52). Each electron must satisfy a set of identification criteria designed to suppress misidentified photons or jets. Electrons are required to satisfy the medium selection, following the definition provided in Ref. [45]. This includes requirements on the shower shape in the EM calorimeter, the leakage of the shower into the hadronic calorimeter, the number of hits measured along the track in the ID, and the quality of the cluster-track matching. A Gaussian sum filter [46] algorithm is used to re-fit the tracks and improve the estimated electron track parameters. To suppress background from misidentified objects such as jets, the electron is required to be isolated using calorimeter-based criteria. The sum of the transverse energies of clusters lying within a cone of size ∆R = 0.2 around the centroid of the electron cluster and excluding the core3 must be less than 10% of the electron p T .
Muon candidates are reconstructed by combining tracks reconstructed in the ID with tracks reconstructed in the MS [47]. They are required to have p T > 20 GeV and |η| < 2.4. The muon candidates are also required to be isolated, by requiring that the scalar sum of the p T of additional tracks within a cone of size ∆R = 0.4 around the muon is less than 80% of the muon p T .
The missing transverse momentum vector [48] (E miss T ) is calculated as the negative vector sum of the transverse momenta of electrons and muons, and of the transverse momentum of the recoil. The magnitude of this vector is denoted by E miss T . The recoil vector is obtained by summing the transverse momenta of all clusters of energy measured in the calorimeter, excluding those within ∆R = 0.2 of the lepton candidate. The momentum vector of each cluster is determined by the magnitude and coordinates of the energy deposits; the cluster is assumed to be massless. Cluster energies are initially measured assuming that the energy deposition occurs only through EM interactions, and are then corrected for the different calorimeter responses to hadrons and electromagnetically interacting particles, for losses due to dead material, and for energy that is not captured by the clustering process [49]. The definition of the recoil does not make use of reconstructed jets, to avoid threshold effects. The procedure used to calibrate the recoil closely follows that used in the recent ATLAS measurement of the W-boson mass [50], first correcting the modelling of the overall recoil in simulation and then applying corrections for residual differences in the recoil response and resolution that are derived from Z-boson data and transferred to the W-boson sample.
The W-boson selection requires events to contain exactly one lepton (electron or muon) candidate and have E miss T > 25 GeV. The lepton must match a lepton candidate that met the trigger criteria. The transverse mass, m T , of the W-boson candidate in the event is calculated using the lepton candidate and ). The transverse mass in W-boson production events is expected to exhibit a Jacobian peak around the W-boson mass. Thus, requiring that m T > 40 GeV suppresses background processes. After these requirements there are 3914 events in the W → e + ν channel, 2209 events in the W → e −ν channel, 4365 events in the W → µ + ν channel, and 2460 events in the W → µ −ν channel.
The Z-boson selection requires events to contain exactly two lepton candidates with the same flavour and opposite charge. At least one lepton must match a lepton candidate that met the trigger criteria. Background processes are suppressed by requiring that the invariant mass of the lepton pair satisfies 66 < m < 116 GeV. After these requirements there are 430 events in the Z → e + e − channel, and 646 events in the Z → µ + µ − channel.
Background estimation
The background processes that contribute to the sample of events passing the W-boson and Z-boson selections can be separated into two categories: those estimated from MC simulation and theoretical calculations, and those estimated directly from data. The main backgrounds that contribute to the event sample passing the W-boson selection are processes with a τ-lepton decaying into an electron or muon plus neutrinos, leptonic Z-boson decays where only one lepton is reconstructed, and multijet processes.
The main background contribution to the event sample passing the Z-boson selection is production of two massive electroweak bosons.
The backgrounds arising from W → τν, Z → + − , diboson production, and tt production are estimated from the simulated samples described in Section 3. Predictions of the backgrounds to the W-boson and Z-boson production measurements arising from multijet production suffer from large theoretical uncertainties, and therefore the contribution to this background in the W-boson measurement is estimated from data. This is achieved by constructing a shape template for the background using a discriminating variable in a control region and then performing a template fit to the same distribution in the signal region to extract the background contribution. The choice of template variable is motivated by the difference between signal and background and by the available number of events. Previous ATLAS measurements at 7 TeV [7] and 13 TeV [12] found that multijet production makes a background contribution of less than 0.1% for Z-boson measurements; this is therefore neglected.
Electron candidates in multijet background events are typically misidentified candidates produced when jets mimic the signature of an electron, for example when a neutral pion and a charged pion overlap in the detector. Additional candidates can arise from 'non-prompt' electrons produced when a photon converts, and in decays of heavy-flavour hadrons. To construct a control region for the multijet template, a selection is used that differs from the W-boson selection described in Section 4 in only two respects: the medium electron identification criteria are inverted (while keeping the looser identification criteria) and the E miss T requirement is removed. By construction, this control region is statistically independent of the W-boson signal region. A template for the shape of the multijet background in the E miss T distribution is then obtained from that distribution in the control region after subtraction of expected contributions from the signal and other backgrounds determined using MC samples. The normalisation of the multijet background template in the signal region is extracted by performing a χ 2 fit of the E miss T distribution (applying all signal criteria except the requirement on E miss T ) to a sum of the templates for the multijet background, the signal, and all other backgrounds. The normalisation of the signal is allowed to vary freely in the fit as is the multijet background; however, the other backgrounds are only allowed to vary from their expected values by up to 5%, corresponding to the largest level of variation in predicted electroweak-boson production cross-sections obtained from varying the choice of PDF. The normalisation from this fit can then be used together with the inverted selection to construct multijet background distributions in any other variable that is not correlated with the electron identification criteria.
Muon candidates in multijet background events are typically 'non-prompt' muons produced in the decays of hadrons. The multijet background contribution to the W → µν selection is estimated by using the same method as described for the W → eν selection. In this case the control region is defined by inverting the isolation requirement and removing the requirement on m T . The distribution used for the fits is m T .
The overall number of multijet background events is estimated from a fit to the total W-boson sample. Fits to the separate W + -boson and W − -boson samples are used in the evaluation of the systematic uncertainties, as described in Section 7. The final estimated multijet contributions are 30 ± 11 events for W → e + ν and W → e − ν and 2.5 ± 1.9 events for W + → µ + ν and W − → µ − ν. The relative contribution of the multijet events (1%) is lower than in 13 TeV (4%) and 7 TeV (3%) data. This is in agreement with expectations for this lower pile-up running, where the resolution in E miss T is improved compared to the higher pile-up running.
Correction for detector effects
The measurements in this paper are performed within specific fiducial regions and extrapolated to the total W-boson or Z-boson phase space. The fiducial regions are defined by the kinematic and geometric selection criteria given in Table 2; in simulations these are applied at the generator level before the emission of QED final-state radiation from the decay lepton(s) (QED Born level).
The fiducial W-boson/Z-boson production cross-section is obtained from the number of observed events meeting the selection criteria after background contributions are subtracted, N sig W, Z , using the following formula: where L int is the total integrated luminosity of the data samples used for the analysis. The factor C W, Z is the ratio of the number of generated events that satisfy the final selection criteria after event reconstruction to the number of generated events within the fiducial region. It includes the efficiency for triggering, reconstruction and identification of W, Z → ν, + − events falling within the acceptance. The different components of the efficiency are calculated using a mixture of MC simulation and measurements from data.
The total W-boson and Z−boson production cross-sections are obtained using the following formula: The factor B(W, Z → ν, ) is the per-lepton branching fraction of the vector boson. The factor A W, Z is the acceptance for W/Z-boson events being studied. It is defined as the fraction of generated events that satisfy the fiducial requirements. This acceptance is determined using MC signal samples, corrected to the generator QED Born level, and is used to extrapolate the measured cross-section in the fiducial region to the full phase space. The central values of A W, Z are around 0.6 for these measurements, compared with 0.5 at √ s = 7 TeV and 0.4 at √ s = 13 TeV, so the fiducial region is closer to the full phase space in this measurement than for those at higher centre-of-mass energies. This is due to a combination of higher p T thresholds for leptons in other measurements, and more-central production of vector bosons at lower
Systematic uncertainties
The systematic uncertainty in the electron reconstruction and identification efficiency is estimated using the tag-and-probe method in 8 TeV data [45,51] and extrapolated to the 2.76 TeV dataset. The extrapolation procedure results in absolute increases of ±2%, due to uncertainties in the effect of the differing pileup conditions in 2.76 TeV data relative to the 8 TeV data. Transverse-momentum-dependent isolation corrections, calculated with the tag-and-probe method in 2.76 TeV data, are very close to 1, so the systematic W-boson fiducial region Z-boson fiducial region p T > 20 GeV p uncertainty in the electron isolation requirement is set to the size of the correction itself, that is ±1% for low p T and ±0.3% for higher p T . The electron energy scale has associated statistical uncertainties and systematic uncertainties arising from a possible bias in the calibration method, the choice of generator, the presampler energy scale, and imperfect knowledge of the material in front of the EM calorimeter [52]. The total energy-scale uncertainty is calculated as the sum in quadrature of these components.
Systematic uncertainties associated with the muon momentum can be divided into three major independent categories: momentum resolution of the MS track, momentum resolution of the ID track, and an overall scale uncertainty. The total momentum scale/resolution uncertainty is the sum in quadrature of these components. An η-independent uncertainty of approximately ±1.1% in the muon trigger efficiency, determined using the tag-and-probe method [47] in 2.76 TeV data, is taken into account. Furthermore, a p Tand ηdependent uncertainty in the identification and reconstruction efficiencies of approximately ±0.3 %, derived using the tag-and-probe method on 8 TeV data is applied. The uncertainty in the p T -dependent isolation correction in the muon channel, calculated with the tag-and-probe method in 2.76 TeV data, is about ±0.6% for low p T and ±0.5% for higher p T .
The luminosity uncertainty for the 2.76 TeV data is ±3.1%. This is determined, following the same methodology as was used for the 7 TeV data recorded in 2011 [53], from a calibration of the luminosity scale derived from beam-separation scans performed during the 2.76 TeV operation of the LHC in 2013.
Systematic uncertainties in the E miss T arising from the smearing and bias corrections applied to obtain satisfactory modelling of the recoil [48] affect the C W factors in the W → ν measurement, and are taken into account.
Uncertainties arising from the choice of PDF set are evaluated using the error sets of the initial CT10 PDF set (at 90% confidence level (CL)) and from comparison with the results obtained using the central PDF sets from ABKM09 [54], NNPDF23 [55], and ATLAS-epWZ12 [56]. The effect of this uncertainty on A W + (A W − ) is estimated to be ±1.0% (1.2%), and the effect on A Z is estimated to be ±1.4%. The effect on C W, Z is between ±0.05% and ±0.4% depending on the channel.
A summary of the systematic uncertainties in the C W, Z factors is shown in Table 3. The muon trigger, and electron reconstruction and identification uncertainties are dominant.
Uncertainties arising from the choice of event generator and parton shower models are estimated by comparing results obtained when using S 2.1 signal samples instead of the (nominal) P -B +P 8. The effect of this uncertainty on A W, Z is estimated to be ±0.9%.
The systematic uncertainty in the multijet background estimation can be divided into several components: the normalisation uncertainty from the χ 2 fit, the uncertainty in the modelling of electroweak processses by simulated samples in the fitted region, uncertainty from fit bias due to binning choice, and uncertainty from template shape. The scale normalisation uncertainty from the χ 2 fit is approximately ±13% for the W → eν channel. This uncertainty is neglected in the W → µν channel where the template bias is dominant. The mismodelling uncertainty is estimated by comparison of the fit results for + and − , and for the combined ± candidates. The central value used is 0.5N ± with the uncertainties N + − 0.5N ± and N − − 0.5N ± , where N + is the fitted number of + background events, N − is the fitted number of − and N ± is the fitted total number of ± background events. In the W → eν channel this leads to an uncertainty of ±28% in the multijet background. In the W → µν channel the multijet template normalisation is derived from the fit in the small-m T region, where electroweak contributions are negligible and there are many data events, and this source of systematic error is found to be negligible. The fit-bias uncertainty arising from the choice of bin width is estimated by repeating the fit with different binnings. This component is negligible in the W → µν case and ±15% in the W → eν case. The uncertainty due to a potential bias from template choice is estimated by employing different template selections. For the W → eν channel, different inverted-isolation criteria were investigated. The overall differences are considered negligible. For the W → µν channel, template variations were estimated from fits that use bb + cc MC samples as the multijet templates, leading to an uncertainty of ±75%; this is the largest uncertainty in the multijet background in the W → µν channel.
Combining results and building ratios or asymmetries of results require a model for the correlations of particular systematic uncertainties between different measurements. Correlations arise mostly due to the fact that electrons, muons, and the recoil are reconstructed identically in the different measurements. Further correlations occur due to similarities in the analysis methodology such as the methods of signal and background estimation.
The systematic uncertainties from the electroweak background estimations are treated as uncorrelated between the W-boson and Z-boson measurements, and fully correlated among different flavour decay channels of the W and Z boson. The top-quark background is treated as fully correlated across all W-boson and Z-boson decay channels. The multijet background and recoil-related systematic uncertainties are also treated as fully correlated between all four W-boson decay channels despite there being an expected uncorrelated component, since the statistical uncertainty is dominant in this case.
The systematic uncertainties due to the choice of PDF are treated as fully correlated between all W-boson and Z-boson channels. The uncertainties in electron and muon selection, reconstruction and efficiency are treated as fully correlated between all W-boson and Z-boson channels.
A simplified form of the correlation model with the grouped list of the sources of systematic errors is presented in Table 4.
Source
Muon channel Electron channel
Results
The numbers of events passing the event selections described in Section 4 are presented in Table 5, together with the estimated background contributions described in Section 5. The distribution of m T for W → ν candidate events is shown in Figure 1, compared with the expected distribution for signal plus backgrounds, where the signal is normalised to the NNLO QCD prediction. Similarly, Figure 2 shows the distribution of m for Z → + − candidate events compared with the expectations for signal. In this case, the background contributions are not shown, because they would not be visible in the figure if included.
The measured fiducial (σ fid ) and total (σ tot ) cross-sections in the electron and muon channels are presented separately in Table 6. For these measurements, the dominant contribution to the systematic uncertainty arises from the luminosity determination.
The results obtained from the electron and muon final states are consistent. The fiducial measurements from electron and muon final states are combined following the procedure described in Ref. [57] and the result is extrapolated to the full phase space to obtain the total cross-section. The total W-boson cross-section is calculated by summing the separate W + and W − cross-sections. The results are shown in Table 7.
Theoretical predictions of the fiducial and total cross-sections are computed for comparison with the measured cross-sections using D 1.5 [58] and F 3.1 [59][60][61][62], which provide calculations at NNLO in the strong-coupling constant, O(α 2 s ), including the boson decays into leptons ( + ν, −ν or + − ) with full spin correlations, finite width and interference effects. These calculations allow kinematic Table 6: Results of the fiducial and total cross-sections measurements of the W + -boson, W − -boson, and Z-boson production cross-sections in the electron and muon channels. The cross-sections are shown with their statistical, systematic and luminosity uncertainties (and extrapolation uncertainty for total cross-section).
Value ± stat. ± syst. ± lumi. (± extr.) Value ± stat. ± syst. ± lumi. (± extr.) [65]. The following input parameters are taken from the Particle Data Group's Review of Particle Properties 2014 edition [66]: the Fermi constant, the masses and widths of W and Z bosons as well as the elements of the CKM matrix. The cross-sections for vector bosons decaying into these leptonic final states are calculated such that they match the definition of the measured cross-sections in the data. Thus, from complete NLO EW corrections, the following components are included: virtual QED and weak corrections, real initial-state radiation (ISR), and interference between ISR and real final-state radiation (FSR) [67]. The calculated effect of these corrections on the cross-sections is (−0.26 ± 0.02)% for σ fid W + , (−0.21 ± 0.03)% for σ fid W − , and (−0.25 ± 0.12)% for σ fid Z . D is used for the central values of the predictions while F is used for the PDF variations and all other systematic variations such as QCD scale and α s . The predictions are calculated Table 7: Combined fiducial and total cross-section measurements for W + -boson, W − -boson and Z-boson production. The cross-sections are shown with their statistical, systematic and luminosity uncertainties (and extrapolation uncertainty for total cross-section).
Theoretical uncertainties in the predictions are also derived from the following sources: PDF: these uncertainties are evaluated from the variations of the NNLO PDFs according to the recommended procedure for each PDF set. A table with all PDF uncertainties and their central values is shown in Appendix A; the PDF uncertainty from CT14nnlo was rescaled from 90% CL to 68% CL.
Scales: the scale uncertainties are defined by the envelope of the variations in which the scales are changed by factors of two subject to the constraint 0.5 ≤ µ R /µ F ≤ 2.
α s : the uncertainty due to α s was estimated by varying the value of α s used in the CT14nnlo PDF set by ±0.001, corresponding to a 68% CL variation.
The statistical uncertainties in these theoretical predictions are negligible. The numerical values of the predictions for the CT14nnlo PDF set are presented in Table 8. The predictions for the acceptance factor A W, Z can differ by a few percent from those derived from simulated signal samples, this may be due to a poorer description of production of low p T W-bosons by the fixed-order calculations. The predictions are shown in comparison with the combined W-boson and Z-boson production measurements, and with results from pp and pp collisions at other centre-of-mass energies in Figure 3. A comparison of the measurements with predictions from various different PDF sets is presented in Figures 4 and 5. Overall there is good agreement.
Taking ratios of measurements leads to results that have significantly reduced systematic uncertainties due to full or partial cancellation of correlated systematic uncertainties, as discussed in Section 7. The ratios of the fiducial cross-sections for W-boson and Z-boson production are presented, together with the ratio for W + -boson and W − -boson production, in Figure 6. It can be seen that the predictions from the different PDF sets are mostly in good agreement with the measurements. There is a slight (less than two standard deviations) tension between the data and the prediction using the ABMP16 PDF set. The measured values of the ratios are: The measurement of the ratio R W + /W − is sensitive to the u v and d v valence quark distributions, while the ratio R W /Z can place constraints on the strange quark distributions. A common alternative way of presenting this information is in terms of the charge asymmetry, A , in W-boson production: This observable also benefits from the cancellation of systematic uncertainties in the same way as the cross-section ratios. The measured value is: The ratio of measured cross-sections in the electron and muon decay channels provides a test of lepton universality in W-boson decays. The measured ratios are: Table 7. The inner shaded band represents the statistical uncertainty only, the outer band corresponds to the experimental uncertainty (including the luminosity uncertainty). The theory predictions are given with the corresponding PDF (total) uncertainty shown by inner (outer) error bar. Table 7. The inner shaded band represents the statistical uncertainty only, the outer band corresponds to the experimental uncertainty (including the luminosity uncertainty). The theory predictions are given with the corresponding PDF (total) uncertainty shown by inner (outer) error bar. Figure 6: The measured ratio of fiducial cross-sections for (a) W-boson production to Z-boson production, (b) W + -boson production to W − -boson production. The measurements are compared with theoretical predictions at NNLO in QCD based on a selection of different PDF sets. The inner shaded band corresponds to statistical uncertainty while the outer band shows statistical and systematic uncertainties added in quadrature. The theory predictions are given with the corresponding PDF (total) uncertainty shown by inner (outer) error bar.
These results lie within one standard deviation of the Standard Model prediction and previous measurements by ATLAS.
The results obtained, and the ratios and charge asymmetries constructed from them, are in agreement with theoretical calculations based on NNLO QCD.
A Theoretical predictions
This appendix presents the theoretical predictions used for comparison with the measurements in the main body of the paper. Table 9 shows the predictions using the MMHT14nnlo68cl, NNPDF31_nnlo_as_0118, ATLASepWZ12, HERAPDF2.0, and ABMP16 PDF sets with associated PDF uncertainties. | 8,184.4 | 2019-07-08T00:00:00.000 | [
"Art",
"Physics",
"Computer Science",
"Chemistry"
] |
The X-ray Sensitivity of an Amorphous Lead Oxide Photoconductor
The photoconductor layer is an important component of direct conversion flat panel X-ray imagers (FPXI); thus, it should be carefully selected to meet the requirements for the X-ray imaging detector, and its properties should be clearly understood to develop the most optimal detector design. Currently, amorphous selenium (a-Se) is the only photoconductor utilized in commercial direct conversion FPXIs for low-energy mammographic imaging, but it is not practically feasible for higher-energy diagnostic imaging. Amorphous lead oxide (a-PbO) photoconductor is considered as a replacement to a-Se in radiography, fluoroscopy, and tomosynthesis applications. In this work, we investigated the X-ray sensitivity of a-PbO, one of the most important parameters for X-ray photoconductors, and examined the underlying mechanisms responsible for charge generation and recombination. The X-ray sensitivity in terms of electron–hole pair creation energy, W±, was measured in a range of electric fields, X-ray energies, and exposure levels. W± decreases with the electric field and X-ray energy, saturating at 18–31 eV/ehp, depending on the energy of X-rays, but increases with the exposure rate. The peculiar dependencies of W± on these parameters lead to a conclusion that, at electric fields relevant to detector operation (~10 V/μm), the columnar recombination and the bulk recombination mechanisms interplay in the a-PbO photoconductor.
Introduction
The ever-growing demand for advanced radiation medical imaging techniques sustains continued research interest in novel materials and technologies for imaging detectors, based on the direct conversion of diagnostic X-rays. In direct conversion flat panel Xray imagers (FPXIs), a uniform layer of the photoconductor is deposited over large area readout electronics based on either thin-film transistor (TFT) arrays or complementary metal-oxide-semiconductor (CMOS) active-matrix arrays. The photoconductor acts as a direct X-ray-to-charge transducer; i.e., it absorbs X-rays and directly creates electron-hole pairs (ehps), which are subsequently separated by a bias field to generate a signal.
Stabilized amorphous selenium (a-Se) is the most successful, commercially viable, large-area-compatible X-ray photoconductor used in direct conversion FPXIs for medical imaging due to its several distinct advantages over other potentially competing photoconductors [1,2]. Both X-ray-generated electrons and holes can drift in a-Se under appropriate conditions [3,4]. The dark current can be appropriately controlled by the use of blocking structures [5,6]. The X-ray attenuation coefficient, while not outstanding, is acceptable for the relatively soft X-rays in mammographic energy range (~20 keV) [1,3]. The fabrication technology of the practical photoconductive layers is mature enough, and thus cost-effective. Therefore, the most successful application of stabilized a-Se technology is in mammography where a-Se-based FPXIs became a dominant technology [1,7]. However,
Background
It was suggested that the intrinsic ehp creation energy, W 0 ± , of a semiconductor depends on its bandgap E g according to the relationship W 0 ± ≈ 2.8E g + ε ph (Klein rule for crystalline semiconductors [30]) or W 0 ± ≈ 2.2E g + ε ph (Que-Rowlands rule for amorphous solids [32]), where the term ε ph ≈ 0.5 − 1 eV is responsible for losses due to optical phonons. In practice, many low-mobility amorphous and polycrystalline semiconductors demonstrate effective W ± that is higher than the intrinsic value. For example, W ± is 45 eV/ehp for a-Se [3],~17 eV/ehp for poly-PbO [18], and~22 eV/ehp for a-PbO [20] at a practical electric field of F = 10 V/µm, whereas theoretical values are within 5-7 eV/ehp. The fact that experimental W ± exceeds the theoretical value indicates that a certain portion of the initially X-ray-generated charge undergoes deep trapping or recombination and thus does not contribute to the photo-signal, reducing the detector's sensitivity, and ultimately degrading the SNR of the image.
Generally speaking, the carriers can be trapped at localized states within the mobility gap of a-PbO, in either shallow or deep traps. However, a previous investigation of the ghosting effect [21] suggested that no deep trapping occurs in PI/a-PbO photoconductive structures, at least at the relatively low exposures used in this study. Ghosting is caused by deep bulk trapping of photogenerated carriers, which subsequently recombine with the drifting carriers of the opposite sign, resulting in sensitivity degradation. Since no detectable ghosting effect was observed at relevant exposure rates, deep trapping can be ruled out as a primary cause for W ± degradation. Additionally, the quasi-rectangular shape of the X-ray response indicates the unrestricted flow of the photogenerated carriers from the a-PbO photoconductive layer through the PI blocking layer into the ITO electrode [21], meaning that no accumulation of carriers in shallow states at the PI/a-PbO interface are expected as well. Therefore, a trapping mechanism can be excluded from the reasons for the carrier loss in a-PbO and will not be discussed further.
As for the recombination, there are three main theories that could explain the loss of the X-ray-generated carriers in the photoconductors: the bulk (Langevin), geminate (Onsager), and columnar (track) models [32][33][34][35][36][37][38][39][40][41]. Bulk Langevin recombination is a bimolecular process in which electrons and holes drift through the bulk of the photoconductor, due to the internal electric field within the layer, meet each other in space and time, and recombine. The two other intra-track mechanisms, i.e., geminate and columnar recombination, occur within the ionization column formed along the track of the energetic primary photoelectron. In the geminate model described by Onsager theory [42], the twin generated electron and hole pair recombine with each other while diffusing and drifting in the presence of their mutual Coulomb attraction and the applied electric field. Columnar recombination, first proposed by Jaffe [43] and expanded by Hirsch and Jahankhani [44], assumes that the photogenerated charge density inside the column is high enough so that the concept of independent geminate ehps is inapplicable. In this case, bimolecular recombination occurs between two non-geminate charges (i.e., electron and hole from two different twin pairs), just like in the bulk Langevin model, but within the ionization column.
The applicability of the recombination models depends on the properties of the material under consideration, and also on the source of excitation. For example, it was shown that the recombination of drifting holes with either drifting or trapped electrons in a-Se follows the bulk Langevin recombination mechanism [45,46]; initial recombination of optically excited carriers is controlled by the geminate mechanism [47], but columnar recombination prevails in the case of X-ray photogeneration [2,33,34,36]. On the other hand, geminate recombination controls the effective W ± in X-ray irradiated anthracene, PVK, and in electron-bombarded SiO 2 ([32,39,40] and references therein). Although these materials have some common properties (i.e., low mobility), they have different recombination mechanisms. Therefore, one cannot rule out any of these theories a priori, but must first assess their fitness based on the experimental results. Conveniently, the recombination rate of each mechanism depends uniquely on experimental parameters such as electric field, exposure, X-ray photon energy, and temperature, which can be used to identify the dominant process.
Exposure Dependency
Bulk bimolecular recombination in amorphous solids is usually described using the Langevin formalism, which states that the recombination rate is proportional to the concentration of both types of carriers. Therefore, if bulk recombination is a dominant process, the collected charge Q should change with exposure X according to Q ∼ X 1/2 [40].
The situation is different for the intra-track mechanisms. With increasing radiation intensity, the number of primary photoelectron tracks proportionally increases, but the recombination within each track remains unaffected. This means that for geminate and columnar mechanisms, the collected charge increases linearly with the exposure, following Q ∼ X [39]. Additionally, geminate recombination is a monomolecular process; therefore, the recombination probability does not depend on the concentration of the surrounding charges (since the separation between the geminate electron and hole is the smallest distance between any two oppositely charged carriers), and thus the relationship Q ∼ X is adhered to again [41].
Field Dependency
The X-ray sensitivity in many X-ray photoconductors (i.e., a-Se, poly-PbO, a-PbO, perovskites) shows a very pronounced electric field dependency [3,18,20,48]. It is usually described as W ± (F) = W 0 ± + B/F, where W 0 ± is the intrinsic ehp creation energy at an "infinite" field and B is a material-specific constant that depends on the energy of X-ray photons ( [1,33] and references therein).
Regardless of the mechanism, the recombination rate is determined by the probability of carriers meeting in space. It ultimately depends on the interplay between three main driving forces: charge carrier thermal diffusion, charge carrier drift in the applied electric field, and mutual attraction between the oppositely charged carriers. The applied electric field acts to overcome mutual Coulombic attraction between photogenerated electrons and holes, increasing the recombination escape probability. This results in a higher number of freed ehps and lower W ± [36].
Such field-dependent sensitivity is typical for both columnar and geminate recombination ( [33,34] and references therein), although each of them has its own peculiarities. In the columnar model, at the very low electric fields ( 1 V/µm, when diffusion dominates over drift), W ± is field-independent [41]. In the geminate model, the low-field portion of the photogeneration efficiency η(F) = W 0 ± /W ± (F) is a straight line with a slope-to-intercept ratio R SI = e 3 / 8πε r ε 0 k 2 T 2 , where e-elementary charge, ε r -relative permittivity of the photoconductor, ε 0 -vacuum permittivity, and k-Boltzmann's constant ( [33] and references therein).
The fraction of carriers lost to bulk recombination is proportional to F −2 , and thus the collected charge is given by Q ∼ 1/ 1 + F −2 [41,49].
X-ray Energy Dependency
To the best of our knowledge, the only photoconductor whose X-ray energy dependency on W ± has been examined (both experimentally [31,41,[50][51][52] and theoretically [34,35,37,[53][54][55]) is a-Se. As was discussed in [34] (and references therein), within the framework of the geminate recombination model, the initial separation between an electron and a hole controls the probability of their escape from recombination. Therefore, if the initial separation is independent of the incident photon energy, W ± should be too, if the geminate recombination is the dominant process.
On the other hand, through the example of a-Se, it has been shown that the columnar recombination rate drops with increasing X-ray photon energy [33,34,53]. This is due to a rise in the mean separation of the electrons and holes within the ionization column. As the charge density decreases, the recombination rate between non-geminate electrons and holes within the column also declines. This increases the number of free electrons and holes which, in turn, leads to a reduction in W ± .
Detector Preparation
A single-pixel direct conversion digital detector based on an amorphous lead oxide (a-PbO) photoconductor with a single blocking layer of polyimide (PI) was used in this work. Commercially supplied pre-washed and vacuum-packed ITO-coated glass (bottom biasing electrode) was rinsed with acetone, methanol, and isopropanol; dried with dry nitrogen; and placed on a hot plate at 90 • C for 10 min, to ensure cleanliness. A 1 µm thick PI layer was then spin-coated onto the ITO-coated glass. 19 µm of a-PbO was deposited on the prepared substrate by ion-assisted thermal evaporation. Finally, a top Au contact (readout electrode) 1.1 mm in diameter was sputtered atop of the a-PbO, which provided an effective detector area of 0.95 mm 2 . Detailed descriptions of the PI application and a-PbO deposition can be found in [21,56].
The density of the a-PbO photoconductor ρ was calculated from the mass m and volume of the film, which can be treated as a cylinder with a height d and radius r; thus, ρ = m/ πr 2 d . The a-PbO deposition was performed using a shadow mask with a window of radius r = 6.25 mm. The photoconductor thickness d = 19 µm was measured with a stylus profilometer (KLA Tenchor Alpha-Step D-100, Milpitas, CA, USA). The glass substrate with the applied PI layer was weighed on a microbalance (Sartorius CP2P, Göttingen, Germany) before and after deposition of a-PbO film to calculate the mass of the photoconductor layer: m = 20.5 mg. The density was found to be ρ = 8.8 g/cm 3 , which was 92% of the crystalline PbO density (9.53 g/cm 3 ), owing to high packing density and the absence of voids in the a-PbO layer [56].
Experimental Apparatus
X-ray characterization of the PI/a-PbO detector was performed using the X-rayinduced photocurrent method (XPM). The experimental setup is shown in Figure 1. A detector was placed in a shielded aluminum box. Prior to measurement, the detector was short-circuited in the dark to allow for the complete detrapping of charge carriers. A positive dc bias was applied to the ITO by a high voltage power supply (Stanford Research Systems PS350, Sunnyvale, CA, USA) to create a strong electric field in the photoconductor. The photocurrent due to the drifting carriers was read out from the Au electrode by an oscilloscope (Tektronix TDS 2024C, Beaverton, OR, USA) with a native input resistance of 1 MΩ. In this work, the electric field is referred to as an applied field to the detector F = V bias /(d PbO + d PI ), where V bias is an applied bias, and d PbO and d PI are the thicknesses of the a-PbO and PI layers, respectively. After such a bias is applied, the dark current in the PI/a-PbO detector exponentially decreases with time due to the accumulation of trapped charge within the PI blocking layer [21]. Therefore, the bias was applied to the detector for 15 min prior to irradiation to allow the dark current to stabilize and to drop to a level below 5 pA/mm 2 . An X-ray unit (tube Dunlee PX1412CS, insert DU-304, generator CPI Indico 100, Georgetown, ON, Canada) with a tungsten target was used to generate X-ray pulses. The tube voltage could be varied in the range of 40-100 kVp and the tube current could be set in between 25 mA and 400 mA. 2-mm Lead collimators were used to form a narrow-beam geometry and to minimize scattering. An added filtration of Aluminum (type 1100, min 99.0% purity) was placed in the cassette in front of the X-ray tube to harden the X-ray beam. The exposure was monitored by dosimeter Keithley 35040 (Cleveland, OH, USA) with ionization chamber Keithley 96035 (Cleveland, OH, USA). The ion chamber was positioned midway between the detector and the tube to avoid any contribution of backscattered X-rays to the exposure reading. chamber was positioned midway between the detector and the tube to avoid any contribution of backscattered X-rays to the exposure reading. ± is derived as a ratio of the total energy absorbed in the photoconductor upon Xray irradiation to the number of collected ehps : A detailed description of calculation of the absorbed energy, the number of collected charges, and X-ray sensitivity is provided in Appendix A.
Monte Carlo Simulations
Monte Carlo simulations of the electron trajectories in PbO were performed using the Stopping and Range of Electrons in Matter (SREM)-type Monte Carlo software CASINO (monte CArlo SImulation of electroNs in sOlids) [57]. A PbO sample was irradiated with an electron beam and transport of electrons was simulated, taking into account the physical interaction with the matter. The electron beam energies were selected to represent the kinetic energy of the ejected primary photoelectrons = ℎ − , where ℎ is the mean energy of the incident X-ray photons in the beam and is the binding energy of that photoelectron. Since the mean energies of the X-ray beams used in this work (see Figure A1a) were lower than the K-edge energy of PbO ( = 88 keV), the photoelectrons were considered to be ejected from the L3-subshell with binding energy = 13 keV [58]. The Monte Carlo simulation method and the physical models used were described in [59,60]. For each beam energy, the trajectory information (such as collision event coordinates and energy) from the 500 primary electrons, which included ~10 5 -10 6 events (depending on the beam energy), was recorded and further analyzed. The energy difference between two consecutive events was calculated and used as the dissipated energy per collision event, and the coordinates were used to calculate a distance between these consecutive collision sites and the average total path length. Finally, the average ratio of the dissipated energy to the distance between collision sites was calculated for each electron beam energy, which can be treated as the rate of energy deposited in the photoconductor. Figure 2 shows a typical X-ray response of the PI/a-PbO detector to irradiation by a 100-ms X-ray pulse at different applied electric fields and a tube voltage of 60 kVp. Without irradiation, the detector produces only dark current in the order of several picoamps. Upon X-ray irradiation, the detector exhibits a quasi-rectangular response with a uniform amplitude. The photocurrent increases with the electric field and begins to saturate after 10 V/μm. After the irradiation is terminated, the photocurrent rapidly drops to a dark W ± is derived as a ratio of the total energy absorbed in the photoconductor upon X-ray irradiation E abs to the number of collected ehps N ehp :
Results
A detailed description of calculation of the absorbed energy, the number of collected charges, and X-ray sensitivity is provided in Appendix A.
Monte Carlo Simulations
Monte Carlo simulations of the electron trajectories in PbO were performed using the Stopping and Range of Electrons in Matter (SREM)-type Monte Carlo software CASINO (monte CArlo SImulation of electroNs in sOlids) [57]. A PbO sample was irradiated with an electron beam and transport of electrons was simulated, taking into account the physical interaction with the matter. The electron beam energies were selected to represent the kinetic energy of the ejected primary photoelectrons KE = hν − BE, where hν is the mean energy of the incident X-ray photons in the beam and BE is the binding energy of that photoelectron. Since the mean energies of the X-ray beams used in this work (see Figure A1a) were lower than the K-edge energy of PbO (BE K = 88 keV), the photoelectrons were considered to be ejected from the L 3 -subshell with binding energy BE L 3 = 13 keV [58].
The Monte Carlo simulation method and the physical models used were described in [59,60]. For each beam energy, the trajectory information (such as collision event coordinates and energy) from the 500 primary electrons, which included~10 5 -10 6 events (depending on the beam energy), was recorded and further analyzed. The energy difference between two consecutive events was calculated and used as the dissipated energy per collision event, and the coordinates were used to calculate a distance between these consecutive collision sites and the average total path length. Finally, the average ratio of the dissipated energy to the distance between collision sites was calculated for each electron beam energy, which can be treated as the rate of energy deposited in the photoconductor. Figure 2 shows a typical X-ray response of the PI/a-PbO detector to irradiation by a 100-ms X-ray pulse at different applied electric fields and a tube voltage of 60 kVp. Without irradiation, the detector produces only dark current in the order of several picoamps. Upon X-ray irradiation, the detector exhibits a quasi-rectangular response with a uniform amplitude. The photocurrent increases with the electric field and begins to saturate after 10 V/µm. After the irradiation is terminated, the photocurrent rapidly drops to a dark current level, demonstrating almost negligible signal lag. A detailed analysis of the temporal performance (evaluated in terms of signal lag and ghosting) of the a-PbO-based detectors can be found in [20,21]. current level, demonstrating almost negligible signal lag. A detailed analysis of the temporal performance (evaluated in terms of signal lag and ghosting) of the a-PbO-based detectors can be found in [20,21].
Figure 2.
A typical X-ray response to 60 kVp irradiation at different electric fields.
± values were calculated using Equation (1) and plotted as a function of the applied electric field or the reciprocal electric field for different tube voltages in Figure 3a,b, respectively. For the reasons discussed later in the text, the tube current and source-to-detector distance (SDD) were adjusted for each tube voltage to keep a constant exposure level in the photoconductor's plane of 100 mR. As is evident from Figure 3a,b, the sensitivity improves ( ± decreases) as the field increases, rapidly saturating after 10 V/μm. The saturated values of ± depend on the tube voltage: 31, 22, 20, and 18 eV/ehp for 40, 60, 80, and 100 kVp, respectively. As it can be seen, ± decreases with increasing tube voltage, and thus with the mean energy of X-ray photons in the beam (see inset in Figure A1a). W ± values were calculated using Equation (1) and plotted as a function of the applied electric field or the reciprocal electric field for different tube voltages in Figure 3a,b, respectively. For the reasons discussed later in the text, the tube current and source-to-detector distance (SDD) were adjusted for each tube voltage to keep a constant exposure level in the photoconductor's plane of 100 mR. As is evident from Figure 3a,b, the sensitivity improves (W ± decreases) as the field increases, rapidly saturating after 10 V/µm. The saturated values of W ± depend on the tube voltage: 31, 22, 20, and 18 eV/ehp for 40, 60, 80, and 100 kVp, respectively. As it can be seen, W ± decreases with increasing tube voltage, and thus with the mean energy of X-ray photons in the beam (see inset in Figure A1a).
Figure 2.
A typical X-ray response to 60 kVp irradiation at different electric fields. ± values were calculated using Equation (1) and plotted as a function of the appli electric field or the reciprocal electric field for different tube voltages in Figure 3a,b, spectively. For the reasons discussed later in the text, the tube current and source-to-d tector distance (SDD) were adjusted for each tube voltage to keep a constant exposu level in the photoconductor's plane of 100 mR. As is evident from Figure 3a,b, the sen tivity improves ( ± decreases) as the field increases, rapidly saturating after 10 V/μ The saturated values of ± depend on the tube voltage: 31, 22, 20, and 18 eV/ehp for 60, 80, and 100 kVp, respectively. As it can be seen, ± decreases with increasing tu voltage, and thus with the mean energy of X-ray photons in the beam (see inset in Figu A1a). Figure 4. It was found that W ± changes with the exposure rate, but not with the exposure itself (i.e., W ± is identical for two X-ray pulses with the same amplitude but different duration). Therefore, the exposure X dependency of W ± was measured at a fixed pulse duration t pulse = 0.1 s and plotted as a function of exposure rate X t = X/t pulse . Figure 4 shows these results in different electric fields and at different tube voltages: W ± increases with the exposure rate. At lower fields, W ± changes more drastically: almost 200% growth when the exposure changes by a factor of 50. The rate of change is similar for different tube voltages. It should be noted that the exposure rates used were much larger than typical radiation levels used in the clinical practices (~10 −4 R/s for fluoroscopy and~10 −1 R/s for 3D mammography [61,62]). However, it was not feasible to use exposures in the micro-roentgens range due to the limited sensitivity of the oscilloscope.
The effect of exposure (X-ray flux) is examined in Figure 4. It was found changes with the exposure rate, but not with the exposure itself (i.e., ± is iden two X-ray pulses with the same amplitude but different duration). Therefore, the e dependency of ± was measured at a fixed pulse duration = 0.1 s and as a function of exposure rate = / . Figure 4 shows these results in differ tric fields and at different tube voltages: ± increases with the exposure rate. A fields, ± changes more drastically: almost 200% growth when the exposure cha a factor of 50. The rate of change is similar for different tube voltages. It should b that the exposure rates used were much larger than typical radiation levels use clinical practices (~10 −4 R/s for fluoroscopy and ~10 −1 R/s for 3D mammography However, it was not feasible to use exposures in the micro-roentgens range du limited sensitivity of the oscilloscope. Since the exposure rate significantly affects the detector's sensitivity, all expe were performed with the exposure fixed at the lower end of the available range per 0.1 s), unless otherwise is specified. This was achieved by adjusting the tube and SDD.
To investigate the effect of the X-ray photon energy on the ± , one has to use ure of energy that would take the complex shape of a polyenergetic X-ray spectr account. The most common parameters are the tube voltage kVp (or, equivale maximum energy of X-ray photons in a beam) and the mean energy (calcu the energy-weighted average). However, it should be noted that neither of these ters characterizes a polyenergetic spectrum unambiguously and thus they sh treated as an approximate measure of the beam energy only [63].
± for different X-ray tube voltages and corresponding mean X-ray ener shown in Figure 5. The detector's sensitivity improves ( ± decreases) as the ener rays increases. Since the exposure rate significantly affects the detector's sensitivity, all experiments were performed with the exposure fixed at the lower end of the available range (100 mR per 0.1 s), unless otherwise is specified. This was achieved by adjusting the tube current and SDD.
To investigate the effect of the X-ray photon energy on the W ± , one has to use a measure of energy that would take the complex shape of a polyenergetic X-ray spectrum into account. The most common parameters are the tube voltage kVp (or, equivalently, the maximum energy of X-ray photons in a beam) and the mean energy E mean (calculated as the energy-weighted average). However, it should be noted that neither of these parameters characterizes a polyenergetic spectrum unambiguously and thus they should be treated as an approximate measure of the beam energy only [63].
W ± for different X-ray tube voltages and corresponding mean X-ray energies are shown in Figure 5. The detector's sensitivity improves (W ± decreases) as the energy of X-rays increases.
An alternative way to vary photon energy is by hardening the X-ray beam with added Al filtration. At a fixed tube voltage, a thicker Al filter attenuates the low-energy end of the spectrum and effectively shifts the mean energy towards a higher value. Figure 6 shows W ± values as a function of mean X-ray energy for different electric fields and tube voltages.
Within each tube voltage group, W ± decreases as the mean energy increases. A discrepancy between W ± values at the same F and E mean , but different tube voltage, is not surprising, since, as it was mentioned earlier, E mean alone is not a sufficient parameter to characterize the incident polyenergetic X-ray beam. Nevertheless, the trends of the dependencies in Figures 5 and 6 closely resemble each other. An alternative way to vary photon energy is by hardening the X-ray beam with added Al filtration. At a fixed tube voltage, a thicker Al filter attenuates the low-energy end of the spectrum and effectively shifts the mean energy towards a higher value. Figure 6 shows ± values as a function of mean X-ray energy for different electric fields and tube voltages. Within each tube voltage group, ± decreases as the mean energy increases. A discrepancy between ± values at the same F and , but different tube voltage, is not surprising, since, as it was mentioned earlier, alone is not a sufficient parameter to characterize the incident polyenergetic X-ray beam. Nevertheless, the trends of the dependencies in Figures 5 and 6 closely resemble each other. An alternative way to vary photon energy is by hardening the X-ray beam with added Al filtration. At a fixed tube voltage, a thicker Al filter attenuates the low-energy end of the spectrum and effectively shifts the mean energy towards a higher value. Figure 6 shows ± values as a function of mean X-ray energy for different electric fields and tube voltages. Within each tube voltage group, ± decreases as the mean energy increases. A discrepancy between ± values at the same F and , but different tube voltage, is not surprising, since, as it was mentioned earlier, alone is not a sufficient parameter to characterize the incident polyenergetic X-ray beam. Nevertheless, the trends of the dependencies in Figures 5 and 6 closely resemble each other. Figure 6. W ± as a function of the mean energy of X-ray photons at different tube voltages and electric field strengths. W ± decreases as the energy of X-rays increases.
The electron transport was simulated using the Monte Carlo software CASINO [57]. Table 1 summarizes the results of the simulations and Figure 7 shows an example of typical electron trajectories for 37.7-keV incident electrons in the PbO sample. The electron beam energy of 37.7 keV represents the kinetic energy of a primary photoelectron ejected from the L 3 -subshell with the binding energy of 13 keV by the 100 kVp X-ray beam with a mean energy of 50.7 keV. The sample was irradiated by an electron beam from the top side; the electron trajectories are coloured according to their kinetic energy. As the primary electron traveled through the photoconductor, it collided with the atoms and gradually lost its energy. The average energy dissipated in a collision event did not appreciably vary with the initial energy of the primary photoelectron (Table 1). However, the average distance between the collisions (i.e., mean free path) and total path length (i.e., electron range) increased with the primary photoelectron energy, resulting in a declining energy deposition rate (Table 1). The electron transport was simulated using the Monte Carlo software CASI Table 1 summarizes the results of the simulations and Figure 7 shows an exampl ical electron trajectories for 37.7-keV incident electrons in the PbO sample. The beam energy of 37.7 keV represents the kinetic energy of a primary photoelectron from the L3-subshell with the binding energy of 13 keV by the 100 kVp X-ray beam mean energy of 50.7 keV. The sample was irradiated by an electron beam from side; the electron trajectories are coloured according to their kinetic energy. As mary electron traveled through the photoconductor, it collided with the atoms a ually lost its energy. The average energy dissipated in a collision event did not app vary with the initial energy of the primary photoelectron (Table 1). However, the distance between the collisions (i.e., mean free path) and total path length (i.e., range) increased with the primary photoelectron energy, resulting in a declining deposition rate (Table 1).
Discussion
The obtained experimental results show well-pronounced dependencies o the electric field, X-ray energy, and exposure rate. Now we will try to take into the presented dependencies in the recombination models, as was previously do Se [33,34,36].
Firstly, let us consider a field dependency in a-PbO demonstrated in Figure 3 firstly rapidly decreased according to 1/ (in the range of fields 1-10 V/μm), bu to saturate at higher fields with no further improvement observed, as is seen in F Replotting results from Figure 3
Discussion
The obtained experimental results show well-pronounced dependencies of W ± on the electric field, X-ray energy, and exposure rate. Now we will try to take into account the presented dependencies in the recombination models, as was previously done for a-Se [33,34,36].
Firstly, let us consider a field dependency in a-PbO demonstrated in Figure 3. W ± (F) firstly rapidly decreased according to 1/F (in the range of fields 1-10 V/µm), but started to saturate at higher fields with no further improvement observed, as is seen in Figure 3b. Replotting results from Figure 3 as η(F) yields R SI = 0.6 − 2 µm/V, depending on the X-ray energy; however, for a-PbO with ε r = 26, Onsager theory requires a value of R SI = 0.041 µm/V, which plays against the geminate recombination model as a plausible mechanism for photogenerated charge carrier loss in a-PbO.
Furthermore, analysis of the energy deposition during ehps generation and the X-ray energy dependency of the recombination rates rule out the geminate model completely. Indeed, Table 1 shows that the average portion of energy dissipated in a scattering event almost did not change with X-ray energy. Since photoelectric absorption is the main photon interaction mechanism in PbO for the diagnostic X-ray energy range [64], the deposited energy primarily causes ionization and excitation of atoms, i.e., ehps generation. Therefore, the same amount of ehps, on average, is generated in each collision event, and the separation of the geminate pairs remains the same. If geminate recombination is a dominant process, W ± will be independent of X-ray energy. However, this is not the case: Figures 5 and 6 clearly illustrate that W ± monotonically decreases with the energy of X-rays. This behaviour disagrees with Onsager formalism but adheres to the columnar model. The decrease of W ± with gradually increasing energy of X-rays is due to the reduction in the columnar recombination rate caused by the lowering of the photogenerated charge carrier density along the track of primary photoelectron (since the average distance between ionizing events increases, as was demonstrated by our Monte Carlo simulations (Table 1)). Therefore, geminate recombination can be excluded from the reasons for carrier loss in a-PbO, leaving columnar recombination as the dominant process.
Let us now examine the exposure dependency of the collected charge and W ± . For this, the collected photogenerated charge was measured at a constant exposure rate and plotted in Figure 8a as a function of exposure in a log-log scale. The collected charge increased strictly linearly with the exposure, as demonstrated by the unity slope values in the inset to Figure 8a. In this case, both the number of collected ehps and the total energy absorbed were proportional to the exposure, and therefore, W ± remained unchanged (see Equations (1), (A2) and (A4)). However, if the collected charge is measured at a variable exposure rate and a fixed pulse duration t pulse , its dependency on the exposure is different. This is shown in Figure 8b: the slope values deviate from unity, and thus W ± changes, as was demonstrated in Figure 4. At the lower field of 5 V/µm, the collected charge increased as Q ∼ X α with an intermediate exponent α = 0.785; and at the higher field of 20 V/µm, it changed almost linearly: α = 0.957 (see slope values in the inset to Figure 8b). In addition, the slope value decreased with increasing X-ray energy (Figure 8c). Since the exponents take an intermediate value between that for the bulk recombination (α = 0.5) and columnar recombination (α = 1), this analysis suggests the interplay between bimolecular Langevin recombination in the bulk and the column. Indeed, the carriers first experience the initial columnar recombination, and afterward, the escaped carriers drift through the bulk of the photoconductor and recombine with the carriers from the different columns, giving rise to the bulk Langevin recombination.
Although the above considerations allow for a qualitative model of X-ray generation and recombination in a-PbO, the saturation of W ± at energy-dependent values well above the intrinsic W 0 ± remains unclear. The lowest experimentally achievable W ± ranges from 18 eV/ehp at 100 kVp to 31 eV/ehp at 40 kVp (to be compared with energy-independent W 0 ± around 5-6 eV/ehp, as suggested by Klein and Que-Rowlands rules for lead oxide with E g = 1.9 eV [1]).
The saturation of W ± has been previously observed in the High-gain Avalanche Rushing Photoconductor (HARP) detector with a-Se photoconductor at high electric fields [36]. Indeed, W ± in a-Se initially decreases with the field as 1/F. However, in the fields stronger than 80 V/µm, W ± saturated at a level of~9 eV/ehp. This saturated value is larger than that theoretically predicted by Klein rule, 5-7 eV/ehp. Such behaviour is explained by the modified columnar recombination model which takes into account that the recombination rate is limited by the smaller of two parameters: time needed for carriers to meet in space and duration of the recombination event itself. In a high electric field, the time for an electron and hole to meet in space becomes smaller than the time needed for the recombination of the electron-hole pair that is on a scale of~10 −12 s. As the result, charge drift no longer influences the probability of recombination, which becomes independent of the electric field, and W ± saturates.
Although saturation of W ± in a-PbO occurs in an electric field weaker than that for a-Se, it confirms the findings in [36] that the Langevin recombination mechanism should not be expected at strong electric fields. As is shown in Table 1, the energy-dependent mean free pass r MFP between the ionizing collisions in a-PbO is at a scale of several nanometers. This distance can be approximated as a maximum separation between the oppositely charged non-geminate carriers r 0 (although, realistically, r MFP significantly overestimates a mean separation r 0 , taking into account that at this distance not a single ehp is created, but rather multiple ehps that form a dense electron cloud-a spur (see Appendix B)). Considering the intrinsic W 0 ± ≈ 5 eV/ehp for a-PbO, the number of pairs generated in each spur could be estimated from the dissipated energy per collision (~35 eV, see Table 1) as~7 ehps per spur, providing r 0 ≈ 10 −7 cm. Now, assuming that the mobility of holes (faster carriers in PbO) µ ≈ 1 cm 2 /(V·s) at F =10 V/µm where W ± (F) saturation begins (which seems a reasonable assumption for hole mobility in PbO at 10 V/µm [65]), the hole drift velocity v d = µF ≈ 10 5 cm/s. Therefore, the time τ = r 0 /v d that defines the probability for recombining carriers to meet in space is in the order of 10 −12 s-shorter than the characteristic time of the recombination event for carriers of opposite sign placed at the same spatial point [36]. Similarly to a-Se, in strong electric fields the recombination in a-PbO becomes limited by the duration of the recombination event: the recombination rate no longer depends on the electric field. Therefore, W ± saturates, as was demonstrated in Figure 3. This also explains the saturation of W ± at different values depending on the X-ray energy. As the mean X-ray energy in a beam increases from 28.8 to 50.7 keV, the mean free path of the primary photoelectron r MFP increases by a factor of 1.7 (Table 1). This results in a reduced initial recombination rate and a saturated W ± that is lower by the same factor ( Figure 3). Although the above considerations allow for a qualitative model of X-ray generation and recombination in a-PbO, the saturation of ± at energy-dependent values well above the intrinsic ± remains unclear. The lowest experimentally achievable ± ranges from 18 eV/ehp at 100 kVp to 31 eV/ehp at 40 kVp (to be compared with energy-independent ± around 5-6 eV/ehp, as suggested by Klein and Que-Rowlands rules for lead oxide with = 1.9 eV [1]).
The saturation of ± has been previously observed in the High-gain Avalanche Rushing Photoconductor (HARP) detector with a-Se photoconductor at high electric fields [36]. Indeed, ± in a-Se initially decreases with the field as 1/F. However, in the fields stronger than 80 V/μm, ± saturated at a level of ~9 eV/ehp. This saturated value is larger
Conclusions
The X-ray sensitivity in terms of the electron-hole pair creation energy W ± of a single-pixel PI/a-PbO direct conversion X-ray detector prototype was characterized in a wide range of electric fields, X-ray photon energies (in diagnostic energy range), and exposures using polyenergetic irradiation. W ± decreased with electric field strength, and Sensors 2021, 21, 7321 13 of 19 above 10 V/µm saturated at 18-31 eV/ehp, depending on the energy of the X-rays-higher photon energy resulted in a lower W ± . In addition, W ± increased with radiation exposure rate, especially in weaker electric fields. This demonstrates that the PI/a-PbO detector performs best in strong, practical electric fields (10-20 V/µm) in the diagnostic energy range and under low exposures, offering improved sensitivity as compared to a-Se.
The analysis of the field, X-ray energy, and exposure dependencies of the W ± indicated an interplay between Langevin recombination within the ionization column (i.e., columnar recombination) and bulk Langevin recombination, which together are responsible for the carrier loss and suboptimal W ± in a-PbO in electric fields weaker than 10 V/µm. In stronger fields, the columnar Langevin recombination cannot account for the observed field dependency of W ± , as the recombination process is no longer determined by the probability of X-ray-generated carriers meeting in space.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Acknowledgments:
The authors are grateful to Andrey Lomako for valuable discussions on the prospects for the application of a-PbO technology in practical detectors and to Giovanni DeCrescenzo for technical support and useful deliberations.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. X-ray Sensitivity Calculations
In this work, the detector was irradiated with a polyenergetic X-ray beam. Therefore, to accurately calculate the absorbed energy, the shape of the X-ray spectrum must be considered. The X-ray spectrum for the tungsten target at a given tube voltage, tube current, Al filtration, and source-to-detector distance (SDD) was simulated using the standard Tucker-Barnes-Chakraborty (TBC) model [66].
The validity of the model was verified by a half-value layer (HVL) of Al. Inherent filtration (glass, oil, and Al) can be adjusted in the model until a close match between the experimental and simulated HVL values is achieved, indicating that the modeled spectrum closely represents the one generated by the X-ray unit. To derive an experimental HVL of the beam, the exposure was measured for a naked tube and with added Al filtration of a different thickness up to 3 mm. The data were then interpolated to determine the Al thickness required to reduce the exposure to half of its original value.
The simulated spectrum represents the X-ray photon fluence incident on the detector (the number of photons for each energy E i , per unit area per unit exposure) N(E i ). The fraction of photons absorbed in the photoconductor is given by the energy-dependent Beer-Lambert law [63]: where µ(E i ) ρ is the mass attenuation coefficient; ρ-density; d-thickness of a-PbO layer.
The absorbed energy is then calculated by summing the absorbed fraction of the energy fluence (= N(E i ) × E i ) over the entire energy range. For the detector with an area A and an incident exposure X, the total absorbed energy is: where is the mass-energy absorption coefficient. The mass attenuation and massenergy absorption coefficients for PbO were derived from the elemental coefficients for Pb and O (obtained from the NIST database [58]) as averages weighted by atomic mass.
The effective area of a detector is determined by the area of a smaller electrode, which in our case was a top gold contact. The exposure X was measured with an ionization chamber located at a distance d 1 from the tube's anode, away from the detector at a distance d 2 . The exposure in the detector's plane was then calculated by the inverse square law. Since the photoconductor was irradiated through the 0.7 mm thick glass substrate, its attenuation was also considered. The transmittance of the substrate T s was separately measured for each spectrum as the ratio of exposures with and without a "blank" substrate (i.e., the glass substrate without the photoconductor film deposited) in front of the Xray tube window with all other parameters fixed. Finally, the incident exposure on the photoconductor is therefore given as: The number of collected photogenerated ehps was obtained from the current transients. The dark current was subtracted from the current transient, and the resulting photocurrent was integrated over the pulse duration. The amount of charge was divided by an elementary charge e to obtain the number of carriers collected: The simulated X-ray spectra for the X-ray tube with a tungsten target and 2 mm added Al filtration at different tube voltages are shown in Figure A1a. The low-energy end (up tõ 15 keV) was attenuated by the inherent filtration (glass housing) and added Al filtration. The peaks at 58-59 keV and 67-69 keV are due to the emissions of the characteristic K α and K β X-ray photons of tungsten, respectively. The spectra were normalized to 1 R of incident exposure for better representation. The inset shows the mean energy, HVL, and exposure at a given tube voltage and typical parameters (2 mm added Al filtration, tube current 200 mA, pulse duration 0.1 s, SDD 80 cm) for unnormalized X-ray beams. Figure A1b shows measured exposure as a function of the added Al filtration thickness to the naked tube for selected tube voltages. The calculated and measured HVL values for the naked tube are listed in the inset: the difference is <2% for all tube voltages. The error of 2% in HVL value translates into 1% uncertainty in the calculated value of absorbed energy E abs , and therefore, of W ± , which is smaller than a symbol size in Figures 3-6. Figure A1b shows measured exposure as a function of the added Al filtration thickness to the naked tube for selected tube voltages. The calculated and measured HVL values for the naked tube are listed in the inset: the difference is <2% for all tube voltages. The error of 2% in HVL value translates into 1% uncertainty in the calculated value of absorbed energy , and therefore, of ± , which is smaller than a symbol size in
Appendix B. Model for the Charge Generation and Recombination Processes in a-PbO
A qualitative model of the charge generation and recombination processes in a-PbO is based on our experimental results, and previous experimental observations and theoretical simulations in a-Se.
Upon impinging on the photoconductor, the energy of the X-ray photon (minus the binding energy of the electron) is mostly transferred to the kinetic energy of a primary electron, since the photoelectric absorption is the main photon interaction mechanism in PbO for the diagnostic X-ray energy range [64]. The kinetic energy of the primary photoelectron is deposited into the material during the inelastic collisions with the outer-shell atomic electrons, which leads to the ionization of these atoms (creation of the ehps) or emission of a phonon (energy loss). A single ionization event can result in the creation of multiple ehps in the vicinity of the interaction site, composing a spur core ( Figure A2a). This event could be interpreted as the excitation of plasma waves, which very quickly decay into multiple ehps [53]. After the ehps are created, they diffuse away from the excitation location in a thermalization process and gradually lose their initial kinetic energy. At the end of the thermalization process, ehps are separated by a finite thermalization distance (which can be estimated by the Knights-Davis equation [67], but usually is interpreted as a free fitting parameter), constituting an electron cloud-namely, a spur ( Figure A2b). As the primary electron makes its way through the photoconductor, it collides with the atoms and creates many localized spurs along its track. If the ionization density along the track is large enough, individual spurs overlap and form a column of Xray-generated secondary ehps surrounding the electron's track [41,53] (Figure A2c).
If at the end of the thermalization process, the distance between any oppositely charged carriers is smaller than the Coulombic capture radius (such that their mutual attraction is stronger than the thermal diffusion and the drift in the applied electric field), Figure A1. (a) The simulated X-ray spectra at different tube voltages, normalized to 1 R of exposure. The inset shows typical (see text) parameters of the beams. (b) Exposure measured as a function of added Al filtration thickness for different tube voltages. The dashed lines correspond to 50% of the original exposure and HVL of Al. The inset shows calculated and measured HVL values for a naked tube.
Appendix B. Model for the Charge Generation and Recombination Processes in a-PbO
A qualitative model of the charge generation and recombination processes in a-PbO is based on our experimental results, and previous experimental observations and theoretical simulations in a-Se.
Upon impinging on the photoconductor, the energy of the X-ray photon (minus the binding energy of the electron) is mostly transferred to the kinetic energy of a primary electron, since the photoelectric absorption is the main photon interaction mechanism in PbO for the diagnostic X-ray energy range [64]. The kinetic energy of the primary photoelectron is deposited into the material during the inelastic collisions with the outershell atomic electrons, which leads to the ionization of these atoms (creation of the ehps) or emission of a phonon (energy loss). A single ionization event can result in the creation of multiple ehps in the vicinity of the interaction site, composing a spur core ( Figure A2a). This event could be interpreted as the excitation of plasma waves, which very quickly decay into multiple ehps [53]. After the ehps are created, they diffuse away from the excitation location in a thermalization process and gradually lose their initial kinetic energy. At the end of the thermalization process, ehps are separated by a finite thermalization distance (which can be estimated by the Knights-Davis equation [67], but usually is interpreted as a free fitting parameter), constituting an electron cloud-namely, a spur ( Figure A2b). As the primary electron makes its way through the photoconductor, it collides with the atoms and creates many localized spurs along its track. If the ionization density along the track is large enough, individual spurs overlap and form a column of X-ray-generated secondary ehps surrounding the electron's track [41,53] (Figure A2c).
If at the end of the thermalization process, the distance between any oppositely charged carriers is smaller than the Coulombic capture radius (such that their mutual attraction is stronger than the thermal diffusion and the drift in the applied electric field), then the carriers will recombine ( Figure A2c). Due to a high density of the X-ray-generated carriers inside the column [32,34,41,53], the mean separation of the twin ehp is larger than the separation between any adjacent non-geminate electrons and holes, meaning that nongeminate ehps are more likely to recombine than the geminate pairs, leading to a columnar recombination mechanism. The carriers with separation larger than the Coulombic capture radius are likely to escape the recombination ( Figure A2d) and contribute to the X-ray signal, although the probability of escape depends on the combined effects of the diffusion and extraction fields [34,44,67]. A fraction of the electrons and holes that escaped recombination will drift in the applied electric field towards the opposing electrodes where they are collected ( Figure A2e). If columns are generated closely in space, the carriers from different columns and spurs can meet during their drift and recombine in the bulk ( Figure A2f), contributing to the bulk Langevin recombination.
then the carriers will recombine ( Figure A2c). Due to a high density of the X-ray-generated carriers inside the column [32,34,41,53], the mean separation of the twin ehp is larger than the separation between any adjacent non-geminate electrons and holes, meaning that nongeminate ehps are more likely to recombine than the geminate pairs, leading to a columnar recombination mechanism. The carriers with separation larger than the Coulombic capture radius are likely to escape the recombination ( Figure A2d) and contribute to the X-ray signal, although the probability of escape depends on the combined effects of the diffusion and extraction fields [34,44,67]. A fraction of the electrons and holes that escaped recombination will drift in the applied electric field towards the opposing electrodes where they are collected ( Figure A2e). If columns are generated closely in space, the carriers from different columns and spurs can meet during their drift and recombine in the bulk ( Figure A2f), contributing to the bulk Langevin recombination. Photogenerated charge density is an important parameter in the columnar recombination model since it directly affects the recombination rate. It can be described in terms of the energy deposition rate. The rate of energy deposition by a primary electron (i.e., stopping power) decreases with its kinetics energy and so does with photon energy (see Table 1 and [68]). Thus, the density of ehps in the column decreases with increasing photon energy and ehps have a greater probability of escape [34,35,40,41,54]. Therefore, with increasing energy of the incident X-ray photon, the columnar recombination rate within the photoconductor decreases, increasing the fraction of charge collected. | 12,362 | 2021-11-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Projective measurement onto arbitrary superposition of weak coherent state bases
One of the peculiar features in quantum mechanics is that a superposition of macroscopically distinct states can exist. In optical system, this is highlighted by a superposition of coherent states (SCS), i.e. a superposition of classical states. Recently this highly nontrivial quantum state and its variant have been demonstrated experimentally. Here we demonstrate the superposition of coherent states in quantum measurement which is also a key concept in quantum mechanics. More precisely, we propose and implement a projection measurement onto an arbitrary superposition of two weak coherent states in optical system. The measurement operators are reconstructed experimentally by a novel quantum detector tomography protocol. Our device is realized by combining the displacement operation and photon counting, well established technologies, and thus has implications in various optical quantum information processing applications.
Quantum measurement plays an essential role in Quantum Information Processing (QIP). In quantum optical system, the standard measurements are homodyne detector and photon detector that measure the physical quantities of light: quadrature amplitudes and photon numbers, respectively.
However, one can consider more general quantum measurement that has no correspondence to these standard physical quantities, more precisely, any measurement satisfying the mathematical condition of the positive operator valued measure (POVM) formalism. The example of such non-standard measurement considered here is the projection measurement onto the superposition of coherent states (SCS), a 0 |α〉 + a 1 |−α〉, where |±α〉 are the coherent state, i.e. classical state, with amplitude ±α. More precisely, we consider the arbitrary projection measurement in the space spanned by the SCS bases α α | 〉 = | 〉 ± |− 〉 ± ± C ( ) / ( ± are the normalization factors): and φ denotes the relative phase between |C ± 〉. Each vector of the measurement in Eq. (1) is equivalent to the SCS state, which is a typical example of macroscopic quantum superposition (and thus sometimes regarded as "Schrödinger cat state") showing highly nonclassical properties. Generation of such optical states have been experimentally accomplished by several groups [1][2][3][4][5][6][7][8][9] for relatively small α. Especially in ref. 6 , generation of the approximate SCS with arbitrarily controlled {c 0 , c 1 , φ} is demonstrated. On the other hand, a few attempts have been made for the exploration of the measurement described by the SCS bases. It is well known that a specific projection measurement |C ± 〉 (i.e. c 0 = 1, c 1 = 0, φ = 0) is realized by the parity measurement of photon numbers. However, the implementation of the measurement for general {c 0 , c 1 , φ} remains as a challenge.
In this paper, we propose and experimentally demonstrate physical implementation of the SCS measurement with arbitrary {c 0 , c 1 , φ} in the regime of small α. The structure of the implemented measurement (i.e. its positive operator valued measure (POVM)) is reconstructed by the quantum detector tomography (QDT) [10][11][12][13][14][15][16] and we evaluate the fidelity between the experimentally reconstructed POVM of our measurement device and the ideal SCS measurement in Eq. (1). We experimentally demonstrate the fidelities that cannot be achieved by conventional measurements such as homodyne detector or photon number resolving detector (PNRD).
It is worth to mention the related work in ref. 16 where they performed a full detection tomography of a hybrid measurement of homodyne and PNRD in continuous variable Hilbert space and was able to reveal the wave-particle duality in measurement process. In contrast, the purpose of our work is to implement specific but nontrivial POVMs in the Hilbert space spanned by the SCS bases in Eq. (1). Note that though it is two-dimensional, the SCS bases consist of the so-called continuous variable state vectors, and thus its implementation and tomographic verification are nontrivial. To do so, we develop a modified QDT technique that is of independent interest. Our technique has direct implications in applications using SCS states and their measurements such as quantum computation with optical coherent states 37,38 or optimal detection of coherent states in optical communication [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33] , where the homodyne measurement and the photon counting are non-optimal.
Results
Physical implementation of SCS measurement. Figure 1(a) is a schematic of our measurement which approximately realizes the projection in Eq. (1). It consists of a PNRD preceded by a displacement operation, which we call the displaced-photon counting hereafter. The measurement operators of our schematic in Fig. 1(a) are given by, mented by combining the signal state with a local oscillator (LO) at a beam splitter with nearly unit transmittance. A measurement operator of the PNRD is given by a set of photon number bases Π = | 〉〈 | n n { } n . The intuition explaining how Eq. (2) approximates Eq. (1) is as follows. If the coherent amplitude is small, the SCS bases defined in the Eq. (1) can also be simply tailored in a superposition of a vacuum and single photon bases, The coefficient of the single photon basis in the above equation can be freely controlled by adjusting the amplitude and phase of the displacement operation. Thus the combination of the displacement operation and the photon counting provides high fidelity with the SCS measurement in small coherent amplitude region. The amplitude and phase of the displacement operation are numerically optimized so as to maximize the fidelity between the SCS measurement and the displaced-photon counting measurement that is defined as, To evaluate the performance of our measurement strategy compared with the conventional measurements, we calculate the fidelities of the SCS measurement with the homodyne measurement and the PNRD without displacement. We define the POVM of the homodyne measurement with binary outcomes as where {|x φ 〉} is the quadrature basis and φ is adjustable by changing the optical phase of the local oscillator. We determine the threshold value x th such that the fidelity is maximized. The PNRD is given by setting β = 0 in Eq. (2). The fidelity to the SCS is then given by a simple form, We compare the fidelities for the three-type of measurements in Fig. 1(b). The relative phase and the coherent amplitude of the target SCS measurement are set to φ = 0 and α = 0.50 respectively. The displaced-photon counting shows high fidelity for a whole range of c 0 . Figure 1(c) depicts the optimal displacement amplitude as a function of the superposition coefficient c 0 2 . The optimal amplitude of displacement is decreased with the increase of the target coefficient and reaches to zero at = c 1 0 2 where the SCS measurement can be achieved by the parity measurement using the PNRD. Also, as will be shown later, our scheme can approximate the SCS measurement with a complex phase factor (φ ≠ 0, π) by optimizing both amplitude and phase of the displacement. In Fig. 1(d) we evaluate the fidelities as a function of the superposition coefficient c 0 2 ( . ≤ ≤ . ) and the square of the coherent amplitude α 2 (0.1 ≤ α 2 ≤ 2.3). The displaced-photon counting offers a clear advantage over the conventional measurements up to α 2 ≈ 1.5. A possible approach to achieve high fidelity for arbitrary {c 0 , c 1 , φ, α} will be addressed in Discussion. Fig. 2. We prepare a sequence of optical pulses at a telecom wavelength 1549 nm with repetition rate 900 kHz and pulse width 100 ns by modulating continuous wave laser using an acousto-optic modulator (AOM). The optical pulse is first divided into two parts where one is the local oscillator for the displacement and the other is the probe pulse for the measurement characterization. For each state, we adjust the optical amplitude independently by means of a set of a half wave plate and a polarizer. The probe state is interfered with the LO light on an asymmetric fiber coupler with transmittance τ = 0.99, which leads to the physical implementation of the displacement operation, and detected by the photon counter. We achieve the visibility = 0.998 for the displacement operation. In the experiment, instead of the PNRD, we use a superconducting nanowire single photon detector (SNSPD) which is capable of discriminating if the photon exists (on) or not (off) 39,40 .
Experimental setup. Our experimental setup is depicted in
The degradation of the fidelity due to the lack of the photon number resolution is negligible when the coherent amplitude α is small enough such that the probability of having more than one photon is negligible. Detection efficiency and dark count noise of the SNSPD are experimentally measured to be 68.9% and 5.32 × 10 −5 counts per pulse respectively. The optical relative phase between the probe state and the local oscillator for the displacement operation, which determines the phase φ of the SCS measurement, is controlled by a piezo transducer (PZT). We acquire 2 × 10 5 experimental data for each probe state.
The implemented POVM is experimentally characterized by quantum detector tomography. Characterization over the entire phase space is in principle possible by sweeping the quadrature amplitudes of coherent state 10,16 . In the continuous variable QDT, usually one has to prepare a set of different quadrature amplitudes that entirely cover the phase space. However, here we can drastically reduce the number of quadrature amplitudes since our measurement device is on the space spanned by |C ± 〉 which is intrinsically two-dimensional. Nevertheless, it is still not an easy task to tomograph it since each basis is a highly nonclassical continuous variable state. two-dimensional space, four different probes are enough for the tomography, in our case |±α〉 and α α | 〉 ± |− 〉 i ( ) / 2. The coherent states |±α〉 are easy to prepare. In contrast, to prepare well-calibrated SCSs as the probe is still challenging with the current technology. Thus we develop a method which replaces the SCS probes by 2k-set of coherent states {|±iγ k 〉} with various amplitudes. Details of the method is discussed in Method. By numerically simulating the proposed QDT method, we find that the four probe states {|±iγ 1 〉, |±iγ 2 〉} and the coherent probes |±α〉 suffice to characterize our measurement. The set of the probe states {|±α〉, |±iγ 1 〉, |±iγ 2 〉} and their measurement outcomes enable us to reconstruct the POVM and we adopt the maximally likelihood procedure for the reconstruction 41 .
Fidelity between experimentally realized displaced-photon counting and the ideal SCS meas-
urement. An example of the experimentally reconstructed POVM is depicted in Fig. 3. The amplitude and the phase of the target SCS bases are α = 0.499 and φ = π/2 respectively. As a corresponding measurement, we prepare the local oscillator for the displacement operation with the amplitude β = 0.894 and the relative phase π/2 with respect to |α〉. Figure 4(a,b) plot the fidelities between the target SCS measurement and the experimentally reconstructed displaced-photon counting for various c 0 2 (red circles). The blue circles are the same plots after compensating the loss. These plots are compared with their theoretical curves (red and blue dashed lines) and the theoretical curves for the ideal homodyne (black dashed line) and PNRD (black long-dashed line) measurements.
In Fig. 4(a,b), the target SCS amplitude and the relative phase are α = 0.499 and, (a) φ = 0 and (b) φ = π/2, respectively. Experimental results indicate that we can realize the SCS measurements with the fidelity better than both the ideal homodyne measurement and the ideal PNRD in the specific c 0 2 range. Furthermore, by compensating the loss due to non-unit detection efficiency, our experimental results outperform the ideal homodyne and the ideal PNRD in a whole range of c 0 2 except ∼ .
where the photon number resolving capability is required. As shown in Fig. 1(c), the optimal amplitude of the displacement operation varies depending on the coefficient of the target SCS measurement. We use 9 different displacement amplitudes shown in Fig. 4(c) to acquire the experimental data for Fig. 4(a) and the displacement amplitudes are chosen so as to maximize the fidelity under the experimental condition with finite loss. Therefore, the displacement amplitudes in Fig. 4(c) are slightly larger than those in Fig. 1(c). The effect of this stepwise displacement modulation appears in the discontinuity of the fidelity plots in Fig. 4(a). While the optimal fidelity is not obtainable with the stepwise displacement, the discrepancy of the fidelities between the optimal displacement and the experimental displacement condition is less than 0.2% except for = . c 1 0 0 2 , where the optimal displacement amplitude is β = 0, and the degradation of the fidelity due to non-optimal displacement is negligibly small.
A discrepancy between the theoretical prediction and the experimentally obtained fidelity can be explained as follows. Red and blue dashed lines in Fig. 4(a,b) represent the theoretical fidelity between the displaced-photon counting and the SCS measurement with the coherent amplitude α = 0.499, where α = 0.499 is determined by averaging the probe amplitude used to characterize each displacement condition. The probe amplitude cannot be calibrated to the exactly same value due to the technical reasons and the systematic error of the probe amplitude is estimated to α = 0.499 ± 0.011. The error bar in Figs 3 and 4 is evaluated base on the systematic error of the probe amplitude. In addition, the phase of the probe states with respect to the LO cannot be perfectly set to the desired value. Both the finite precision of the amplitude and the phase make the experimental results higher or lower than the theoretical values. Figure 5 depicts a quadrant of a sphere with radius 1 in which experimentally obtained fidelities for various φ and c 0 are plotted. The distance from the sphere origin to the plotted point corresponds to the fidelity between the target SCS and experimentally realized POVM. Rotations in horizontal and vertical plane are equivalent to variation of φ and c 0 respectively. We examine 5 different phase conditions φ = 0, 0.393, 0.787, 1.18, π/2 with the coherent amplitude α = 0.499 and the experimental results show that the arbitrary SCS measurement with weak coherent amplitude is approximately implementable by controlling both amplitude and phase of the displacement operation.
Discussion
In this paper, we propose and experimentally demonstrated the physical implementation of the projection measurement onto the SCS bases. Our theoretical analysis showed that the measurement process consisting of the displacement operation with the photon counter enables us to perform the SCS measurement with arbitrary {c 0 , c 1 , φ} in weak coherent amplitude case. We demonstrated the proof-of-principle experiment for the SCS projection measurement and characterized our measurement by the QDT approach. Although the fidelity between the ideal SCS measurement and the experimentally realized measurement was highly degraded because of the detector's imperfections, our experimental result showed higher fidelity than the ideal homodyne measurement and the ideal PNRD for specific range of c 0 . Furthermore, by optimizing the amplitude and the phase of the displacement operation, we experimentally realized the approximate SCS measurement with arbitrary {c 0 , c 1 , φ}.
An interesting future direction is the physical realization of the projection measurement onto the SCS bases with higher α. In fact it was shown that arbitrary two-dimensional projection measurement is achievable by introducing feedback operation to the displaced-photon counting measurement 21,22 . The measurement strategy, which is often referred to as Dolinar receiver, was first proposed for the BPSK discrimination 19 and generalized for arbitrary two orthogonal optical states discrimination problem 21 . Thus the displaced-photon counting with the feedback operation allows us to perform perfect SCS measurement with large coherent amplitude. Another possible future work is the implementation of the SCS measurement for general input states. Our analysis is concentrated on the two-dimensional space spanned by the SCS bases. However, in principle, it is also possible to realize the SCS measurement in higher dimensional space. Such measurement procedure has not been explored but could be important tool for the optical QIP and communication scenarios.
Methods
Developed method for tomographic reconstruction of the displaced-photon counting in the SCS bases. The POVM of the displaced-photon counting is reconstructed by probing with coherent states and applying the QDT method. In general, the QDT requires a large amount of probe states to cover a whole Hilbert space of interest. In our case, though |C ± 〉 is a continuous variable optical state, the signal space we are interested in is restricted to the two-dimensional space spanned by {|C + 〉, |C − 〉}. Generally, the POVM tomography in a two-dimensional space requires four linearly independent probe states in the space 42 . In our case, while the real part of the POVM is easily probed via two coherent states |±α〉, it is necessary to use a superposition of |±α〉 with imaginary phase to probe the imaginary part of the POVM. An example of such a state is If the probe φ | 〉 + Im is available, its expectation value is given as The first two terms can be obtained by using the probe states |±α〉. The last two terms are expressed as and Φ 1 = Θ 01 , Φ 3 = Θ 03 − Θ 12 , Φ 5 = Θ 05 − Θ 14 + Θ 23 , etc. Note that Φ l is always real. The following restriction on Φ (l) is applied by assuming that experimentally obtained POVMs are always physical, i.e., Π j is positive definite and ∑ Π =Î Therefore, we can obtain the third and fourth terms in Eq. (10) by first characterizing Φ { } i j ( ) from the experimental results with |±iγ k 〉 and then substituting them into Eq. (11).
The experimentally measured count rates corresponding to Eq. (13) is described as, | 4,177.6 | 2017-02-21T00:00:00.000 | [
"Physics"
] |
An open-source framework for neuroscience metadata management applied to digital reconstructions of neuronal morphology
Research advancements in neuroscience entail the production of a substantial amount of data requiring interpretation, analysis, and integration. The complexity and diversity of neuroscience data necessitate the development of specialized databases and associated standards and protocols. NeuroMorpho.Org is an online repository of over one hundred thousand digitally reconstructed neurons and glia shared by hundreds of laboratories worldwide. Every entry of this public resource is associated with essential metadata describing animal species, anatomical region, cell type, experimental condition, and additional information relevant to contextualize the morphological content. Until recently, the lack of a user-friendly, structured metadata annotation system relying on standardized terminologies constituted a major hindrance in this effort, limiting the data release pace. Over the past 2 years, we have transitioned the original spreadsheet-based metadata annotation system of NeuroMorpho.Org to a custom-developed, robust, web-based framework for extracting, structuring, and managing neuroscience information. Here we release the metadata portal publicly and explain its functionality to enable usage by data contributors. This framework facilitates metadata annotation, improves terminology management, and accelerates data sharing. Moreover, its open-source development provides the opportunity of adapting and extending the code base to other related research projects with similar requirements. This metadata portal is a beneficial web companion to NeuroMorpho.Org which saves time, reduces errors, and aims to minimize the barrier for direct knowledge sharing by domain experts. The underlying framework can be progressively augmented with the integration of increasingly autonomous machine intelligence components.
Introduction
Neuroscience is continuously producing an immense amount of complex and highly heterogeneous data typically associated with peer-reviewed publications. When building data-driven models of brain function, computational neuroscientists must engage in the laborious task of reviewing, annotating, and deriving many parameters required for numerical simulations. More generally, the process of curation consists of extracting, maintaining, and adding value to digital information from the literature and underlying datasets [6]. Mature reference management tools exist to aid general-purpose bibliography organization and content annotation, including Zotero [35], Mendeley [40], and EndNote [1]. Moreover, community-sourced terminologies [11,14,21,38] and domain-specific markup languages [16,24,18] provide human-interpretable controlled vocabularies and machine-readable file formats, respectively. Efforts are also underway to generate standardized data models [15,39,36] and to formalize related concepts into robust ontologies [20,23,25]. As a result, full-text information retrieval systems are becoming indispensable research aids [13,22,28,29].
Despite promising progress, neuroscience and related fields lacked until recently a user-friendly tool to annotate a dataset or journal article across a customizable variety of fields with a set of controlled vocabularies. At the same time, a systematic and well-documented extraction process is essential to keep the curated metadata updated over time and portable between different projects [32]. Perhaps the sole example of an open-source, web-based framework for the acquisition, storage, search, and reuse of scientific metadata is the CEDAR workbench [17]. On the one hand, the entirety of neuroscience is too broad and diverse to fully benefit from an all-encompassing metadata annotation tool. On the other, the most useful motivating applications are typically task specific and, consequently, difficult to compare with other developed tools. Meanwhile, several fundamental metadata dimensions, including details about the animal subject, the location within the nervous system, and the experimental condition, are largely common to even considerably distinct subfields of neuroscience. One possible approach is therefore to design a practical solution to a specific problem of interest while adhering to a strictly open-source implementation that may foster broad adoption and custom adaptation throughout the neuroscience community.
Here, we introduce a resource developed to promote and facilitate data sharing and metadata annotation for NeuroMorpho.Org, a repository providing unrestricted access to digital reconstructions of neuronal and glial morphology [2,3]. The acquisition and release of morphological tracings begin with the continuous identification of newly published scientific reports describing data of interest [19,26]. To annotate the reconstructions with proper metadata, the repository administrators have also been inviting data contributors to provide suitable information through a semi-structured Excel spreadsheet [33]. While the ecosystem of neuronal reconstructions has coalesced around a simple data standard for over two decades [30], selection and interpretation of metadata concepts remain highly variable and inconsistent. Thus, for every new dataset, a team of trained curators must validate or reconcile the author-provided information, complemented as needed by the associated publication, with the metadata schema and preferred nomenclature of the database. Many data releases also introduce new metadata concepts, which need to be integrated into the existing ontology and require updating relevant database hierarchies with appropriate terms. Although the described process is time-consuming, labor-intensive, and error-prone, metadata annotation is instrumental to enable NeuroMorpho.Org semantic queries [34] and machine accessibility through Application Programming Interfaces [4].
This article presents the NeuroMorpho.Org metadata portal, a novel, open-source, web-based tool for the efficient annotation and collaborative management of data descriptors for digital reconstructions of neuronal and glial morphology. The main goal of this effort is the gradual automation of the metadata extraction process to reduce the burden on database curators, thus streamlining the data release workflow for the benefit of the entire research community. A related motivation is to bring domain expertise closer to the crucial task of metadata curation by empowering data contributors with direct dataset annotation through a graphical user interface. The longer-term vision is to lay the training data foundation for augmenting neuro-curation with semi-autonomous machine learning components such as recommendation systems or natural language processing tools [8,9,12]. With this report, we freely release the documented code base to date and welcome modifications or improvements by other developers to tailor the metadata management platform for different neuroscience initiatives.
Methods
The metadata portal is designed to match the NeuroMorpho.Org metadata structure. Here first we summarize the organization of reconstruction metadata in this resource and then explain how the architectural design of the portal optimally serves the needs of the project.
Organization of NeuroMorpho.Org metadata
NeuroMorpho.Org stores over 120,000 digital reconstructions of neuronal and glial morphology from nearly 650 independent laboratories and more than 1000 peerreviewed articles. Each reconstruction is associated with detailed metadata across 25 dimensions thematically grouped into five different categories, namely animal, anatomy, completeness, experiment, and source [33].
The animal category specifies the subject of the study: species, strain, sex, weight, development stage, and age.
The anatomy category designates the brain region and cell type. Each of these two dimensions is hierarchically divided into three levels, from generic to specific: for instance, hippocampus/CA1/pyramidal layer and interneuron/basket cell/parvalbumin-expressing. Three considerations are especially important in this regard: first, additional information can be added in multiple entries at the third level. In the above example, the brain region could be further annotated as left and dorsal; and the cell type as fast-spiking and radially oriented. Second, the anatomical hierarchies are loosely rather than strictly organized since the specific details reported in (and relevant for) different studies vary considerably. If another paper describes the brain region of its dataset simply as dorsal hippocampus (without mentioning sub-area and layer), the concept "dorsal" would shift up to the second level. Third, both brain regions and cell types depend dramatically on the animal species, and most substantially diverge at the vertebrate vs. invertebrate taxa. Whenever possible, NeuroMorpho.Org follows the BrainInfo classification and NeuroNames terminology for vertebrates [10], and Virtual Fly Brain for invertebrates [31].
The completeness category provides details on the relative physical integrity of the reconstruction (accounting for tissue sectioning, partial staining, limited field of view, etc.), the structural domains included in the tracing (soma, axons, dendrites, undifferentiated neurites or glial processes), and the morphological attributes included or excluded from the measurement (most importantly, diameter and the depth coordinate).
The experiment category consists of methodological information describing the preparation protocol (e.g. in vivo, slice or culture), condition (control vs. lesioned, treated or transgenic), visualization label or stain, thickness and orientation of slicing or optical sections, objective type and magnification, tissue shrinkage and eventual corrections, and the tracing software.
The fifth category, source, provides details on the contributing laboratory, the reference publication, the original digital file formats, and the dates of receipt and release.
If any metadata dimension is not returned by the author or mentioned in the publication, the corresponding entry is marked as "Not reported" in the repository.
Here we refer to 'dataset' as a collection of reconstructions associated with a single peer-reviewed publication. Many datasets are naturally divided into distinct metadata groups, either as a focus of the study (e.g. control vs. experimental condition) or because of cell-level specification of a particular variable (often animal sex or age). Typically, almost all metadata features are identical across the entire dataset except for specific details varying between groups. NeuroMorpho.Org preserves the same annotation organization at the levels of dataset, groups, and individual cells (Fig. 1). This intuitive yet compact structure conveniently allows both comparative statistical analyses and machine-readable accessibility via APIs.
Design and implementation of the metadata portal
To ensure flexibility, scalability, portability, and efficiency, the metadata portal is designed based on the modelview-controller (MVC) software architecture [7]. This modular approach separates the application into three essentially independent components. The model represents the metadata structure and reflects the constraints, relations, and formats stored in the database through an object-relational mapper (ORM). The view defines the display presented to the operator through the graphical user interface (GUI). The controller mediates the requests of the user, interacts with the model, and generates an appropriate response for the view (Fig. 2). While anchoring the architectural foundation of the metadata portal onto a safe and trusted design pattern, the novelty of this development mostly lies in its goal and features that assist users in the metadata curation process. The entire implementation abides by open-source principles and relies solely on open-source resources.
The relational models of the portal in addition to the data are maintained in PostgreSQL, a fast, secure, and extensible relational database management system. The user interface is formulated by HTML, JavaScript, and Bootstrap, a Cascading Style Sheet (CSS) framework directed at responsive front-end web development. The control back-end is programmed in Django, a Python-based framework emphasizing pluggable and reusable elements, to regulate the interactions between database and users. Such modular yet integrated web-based framework offers rapid, cost-effective, and customizable application Fig. 2 Overview of the system's architecture. The code base of the metadata portal is running on Nginx and Gunicorn webservers. The Django controller handles all requests submitted by the users or received through the application programming interface (API), translates them into machine-readable commands and database queries, and returns the proper results development. The resulting application is effortlessly accessible anytime across different platforms, enhancing interoperability and enabling different classes of users (authors, admins, and curators) to use the system independently while maintaining their work in the database.
The metadata portal encompasses most of the essential components to fulfill the curation needs of Neuro-Morpho.Org. At the same time, it is also continuously evolving as new operational capabilities are prioritized. Recently developed features include: (i) the API (http:// cng-nmo-meta.orc.gmu.edu/api/) enabling data interaction between the metadata portal and NeuroMorpho. Org; (ii) keyword search (http://cng-nmo-meta.orc.gmu. edu/searc h/), a user-friendly search engine allowing users to look for available terms in the database and their hierarchy; and (iii) bulk-modification feature, providing the ability to modify a large portion of terms within datasets.
The user interface of metadata portal offers seamless access to different parts and features of the system. The main page (http://cng-nmo-meta.orc.gmu.edu/) lists all active datasets. Each dataset is annotated with the name of the data contributor, publication identifiers (PMID and URL), and information regarding grant support. Metadata groups and their corresponding labels can be entered manually or are automatically created upon uploading grouped reconstruction files. Next, users select the actual entries for every metadata dimension, and the entire information remains accessible and editable through the web form. A detailed step-by-step metadata annotation protocol follows at the end of the Results.
Results
We deployed the metadata portal for internal usage in the NeuroMorpho.Org curation team in spring 2018 after release v.7.4 of the database, which contained 86,893 reconstructions. The most recent release at the time of this writing (fall 2019), v.7.9, contains 121,578 reconstructions. Thus, we completed five full releases and annotated nearly 35,000 new reconstructions using the novel system described in this article. Moreover, we analyzed the records regarding metadata entry over four releases prior to deployment of the current system, namely, from right after release v7.0 (fall 2016), which contained 50,356 reconstructions. In the next section, we describe the positive impact on the project of switching from offline spreadsheet annotation to the web-based metadata portal.
Metadata complexity, time saving, and error reduction
The metadata form in NeuroMorpho.Org employs more than 40 fields to encompass the details of the experiment, as several dimensions (e.g. animal weight and age) require more than one field (e.g. a numerical value and a unit scale). If treated as free text entry, many terms can be written in multiple equivalent variants, as in 'mouse' , 'Mouse' , 'mice' , 'mus musculus' as well as being prone to semantically deviant typos ('moose'). When considering the combination of all metadata fields, even in the absence of errors, the exact same information can be annotated in more than 10,000 distinct ways. Such an extreme case of combinatorial synonymy raises serious database management issues, in addition to slowing down search queries and requiring substantially inflated curation efforts. While the 'mouse' example may appear innocuous, even professional annotators can rapidly slide outside their zone of comfort when trying to distinguish between terminological equivalence and subtle but important differences in a genetic manipulation, staining process or electrophysiological firing pattern. The metadata portal offers a solution based on a corpus of controlled vocabularies consisting of public NeuroMorpho.Org content practically organized in user-friendly dropdown menus with autocomplete functionality and 'similar hits' suggestions. Moreover, the web form is endowed with hierarchical logic so that, for example, rat strains are not presented if mouse is selected as species.
Another major aspect of metadata annotation is the ongoing necessity to add new terms to describe previously unencountered entries. While certain dimensions, such as developmental stage, sex, objective type, and physical integrity, remain essentially unaltered over time, others, including brain regions, cell types, and experimental conditions, grow continuously at rates of approximately 5% (amounting to hundreds of new entries) per database release ( Table 1). The web-based system facilitates the management of new concepts by enabling submission of free-text entries when needed; these are logged in real time into the database, allowing secondary review and provenance tracking.
Note that the growth of the data has maintained an approximately constant pace throughout the analyzed period, with similar amounts of metadata annotations considered before and after the introduction of the portal. Based on our lab records and analytics reports, the initial manual annotation of datasets in the last four releases (v.7.1-4) prior to deploying the metadata portal took an average of 1 h and 40 min per article (100 ± 10 min, mean ± standard deviation; N = 308 articles). The mean time required for the same operation in the five subsequent releases following the introduction of the portal (v. 7.5-9) dropped to 55 ± 5 min per article (N = 166), corresponding to a net saving of 45 min in the first step of metadata curation for each dataset. Moreover, all new terms need to be identified both to ensure appropriate database updating and synchronization, and to inform users upon release. This operation used to be carried out manually by visually inspecting each form, which normally required 14 ± 1 h of labor per release. The web-based portal automatically logs and reports all new terms, thus completely eliminating the need for this effort.
After the first annotation phase, metadata curation requires a second step of quality check after the preview release on the password-protected server and corresponding review by data contributors and database curators prior to public release. In most cases, this second phase entails at least some corrections and adjustments. When metadata was entered manually through a regular spreadsheet form (through v.7.4), most errors requiring corrections consisted of spelling mistakes ('neocrotex' instead of 'neocortex') or use of non-preferred terms ('isocortex' or 'ctx'). A less common type of corrections involved conventional order of entries, as in "neocortex > medial prefrontal > right" vs. "neocortex > right > medial prefrontal". Altogether, these issues required 100 ± 15 corrections per release in the old system. Use of controlled vocabularies, dropdown menus, smart filters, and autocomplete functionality dramatically reduced these instances to as few as 15 ± 5 per release. Corrections are especially taxing on data curators and database administrators, because mistaken 'new' entries need to be removed post-ingestion to avoid inconsistencies, indices and caches cleared, and synonyms properly linked for searches to work as intended. The drastic reduction in the number of required corrections saved about 18 h of labor per release, from 22 ± 3 prior to portal adoption to 4 ± 1 afterwards. When considering all sources of time saving (annotation, new term extraction, and corrections), the introduction of the web portal reduced the metadata annotation effort from 115.6 ± 35.4 to 48.3 ± 19.5 personhour/release, a 58% effort reduction (Fig. 3).
Usage protocol
In addition to the many advantages of the metadata portal described above, the web-based implementation naturally enables its direct usage by the authors of the articles described the original datasets, namely the data contributors. Considering the greatly improved performance of metadata annotation, with this article we invite all researchers depositing their neuronal and glial tracings into NeuroMorpho.Org to utilize the portal for annotating their submission. In this section, we overview the functionality, features and usage of the system http:// cng-nmo-meta.orc.gmu.edu/.
In order to limit the server susceptibility to automated malicious activities, users must log in via username (nmo-author) and password (neuromorpho) or using a Google account. Using the latter approach, the user's entry remains private (only visible to the contributor and the administrators, but not to other users) until approved for public release by the NeuroMorpho.Org curators. Upon entering the portal (Fig. 4), users can create a dataset by clicking on the 'New!' button in the main view.
The newly opened window prompts the insertion of information related to the reference publication such as PMID, authorship, and grant support. Next, clicking 'Submit & create the dataset' transitions to the next phase, namely uploading reconstruction files and defining the experimental groups (Fig. 5).
To upload reconstruction files, users should click the 'Browse' button to locate the zip folder containing the data. Separate groups with distinct experimental conditions (control vs. treatment, but also different anatomical locations, animal sex/age, etc.) must be organized as corresponding folder in the compressed archive. The 'New' Fig. 6 Metadata form to annotate the details of the reconstruction within each experimental group button in the Neuron group section adds an experimental group and calls a new form window requesting the corresponding metadata details (Fig. 6).
After filling out the entries as completely as possible, the user can click on 'submit the group' . In case of multiple groups, the auxiliary buttons facilitate duplication, propagation, and modification of metadata details (Fig. 7).
Shortly after final submission, the internal NeuroMorpho.Org secondary curation begins, which includes validating the newly added terms. The reconstruction files along with the descriptive metadata are then ready for ingestion and release on a password-protected preview site that mirrors the look-and-feel of NeuroMorpho.Org while allowing extensive review of content, annotations, and functionality by data contributors and curators prior to public release.
Conclusion
Continuous growth of neuroscience knowledge requires a parallel maturation of informatics resources to annotate data for future re-use and interpretation. This report introduced a newly developed metadata portal that leverages web-based technologies to facilitate effective curation of digital reconstructions of neuronal and glial morphologies. All components of this framework are open-source and can thus be adopted for or adapted to the needs of other related projects. Moreover, the metadata portal is ready to be integrated with artificial intelligence modules such as natural language processing or smart recommendation systems to further expedite and improve the critical bottleneck of database curation. Recently, machine learning algorithms have been successfully deployed for metadata extraction [27]. In particular, text mining tools, such as named entity recognition, can learn, identify, and label crucial elements of neuroscience documents like neuron names, brain regions, and experimental conditions [5,37] . Hence, our future aim will be, first, to train and validate a model on the growing set of curated articles in the NeuroMorpho. Org literature database, as well as on the named entities therein; and then to deploy it on the metadata portal in order to facilitate assisted keyword extraction. To be clear, we consider it unrealistic to expect full automation of all metadata extraction tasks in the near future, as too many decisions involve domain-specific expertise and often ad-hoc conventions. Nevertheless, the prospect of a hybrid human-computer interface ergonomically optimized to maximize the breadth, depth, and accuracy of annotation while minimizing time and labor is in our view well within reach. As a first step in that direction, the systematic coding of the prior entirely manual spreadsheet annotation process of NeuroMorpho.Org metadata within a web-form interfaced to a back-end database has already substantially reduced the ongoing curation effort. We are now releasing this system publicly to allow willing data contributors to enter the details of their datasets directly at the time of data submission. While the design of the portal still allows and encourages an iterative process of collaborative review to reduce the risk of ambiguity and inconsistencies, we hope that enabling metadata annotation by the "ultimate experts" who produced the data will bring us closer to a robust, distributed, and dynamic community-based resource. | 5,049.8 | 2020-03-26T00:00:00.000 | [
"Computer Science"
] |
Potential of wind power projects under the Clean Development Mechanism in India
Background So far, the cumulative installed capacity of wind power projects in India is far below their gross potential (≤ 15%) despite very high level of policy support, tax benefits, long term financing schemes etc., for more than 10 years etc. One of the major barriers is the high costs of investments in these systems. The Clean Development Mechanism (CDM) of the Kyoto Protocol provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at lowest cost that also promotes sustainable development in the host country. Wind power projects could be of interest under the CDM because they directly displace greenhouse gas emissions while contributing to sustainable rural development, if developed correctly. Results Our estimates indicate that there is a vast theoretical potential of CO2 mitigation by the use of wind energy in India. The annual potential Certified Emissions Reductions (CERs) of wind power projects in India could theoretically reach 86 million. Under more realistic assumptions about diffusion of wind power projects based on past experiences with the government-run programmes, annual CER volumes by 2012 could reach 41 to 67 million and 78 to 83 million by 2020. Conclusion The projections based on the past diffusion trend indicate that in India, even with highly favorable assumptions, the dissemination of wind power projects is not likely to reach its maximum estimated potential in another 15 years. CDM could help to achieve the maximum utilization potential more rapidly as compared to the current diffusion trend if supportive policies are introduced.
to the IEA, the global power sector will need to build some 4,800 GW of new capacity between now and 2030.
In the 11 th Five Year Plan, the Government of India aims to achieve a GDP growth rate of 10% and maintain an average growth of about 8% in the next 15 years [2]. According to Indian government officials, the growth of Indian economy is highly dependent on the growth on its energy consumption [3]. The 2006 capacity of power plants in India was 124 GW, of which 66% thermal, 25% hydro, 3% nuclear and 5% new renewables [4]. At the same time, Chinese power capacity reached over 600 GW [5], showing India's backlog. Wind energy is an alternative clean energy source and has been the world's fastest growing renewable energy source growing at a rate of 28% in the last decade [6]. Wind power has the advantage of being harnessed on a local basis for application in rural and remote areas [7]. Global wind power capacity reached 74 GW at the end of 2006 [8], 13 countries had more than 1 GW installed. Figure 1 presents the regional distribution of the global installed wind power capacity [8].
The impetus behind wind power expansion has come increasingly from the urgent need to combat global climate change. Most countries now accept that greenhouse gas (GHG) emissions must be drastically slashed in order to avoid environmental catastrophe. Wind energy offers both a power source that completely avoids the emission of carbon dioxide, the main GHG, but also produces none of the other pollutants associated with either fossil fuel or nuclear generation [9]. Wind power can deliver industrial scale on-grid capacity. Starting from the 1997 Kyoto Protocol, a series of GHG reduction targets has cascaded down to a regional and national level. These in turn have been translated into targets for increasing the proportion of renewable energy, including wind. In order to achieve these targets, countries in both Europe and elsewhere have adopted a variety of market support mechanisms [10][11][12][13][14][15]. These range from premium payments per unit of output to more complex mechanisms based on an obligation on power suppliers to source a rising percentage of their supply from renewables. As the market has grown, wind power has shown a dramatic fall in cost [16]. The production cost of a kilowatt-hour of wind power is one fifth of what it was 20 years ago. In the best locations, wind is already competitive with new coal-fired plants. Individual wind turbines have also increased in capacity, with the standard commercial machines reaching 2.5 MW and prototypes for offshore plants even 5 MW.
The successful wind energy business has attracted the serious attention of the banking and investment market, with new players such as oil companies entering the market. Hays and Attwood [17] concludes that Asia is playing an increasingly important role in the global wind industry as the region prepares to invest over $12 billion in wind power generation capacity in the second half of this decade. In India, wind power already occupies a prominent position with regard to installed capacity -reaching 6.2 GW by the end of 2006. In 2006 alone, an aggregate capacity of 1.8 GW has been added [8]. Thus, India is the fourth largest wind market in the world [18]. However, the total installed capacity of wind power projects still remains far below from their respective potential (i. e. <15%). One of the barriers to the large-scale dissemination of wind power projects in India is the high upfront cost of these systems [19]. Other barriers to wind power projects are low plant load factors, unstable policies of the state governments and poor institutional framework.
Wind has considerable amount of kinetic energy when blowing at high speeds [20]. This kinetic energy when passing through the blades of the wind turbines is converted into mechanical energy and rotates the wind blades [21] and the connected generator, thereby producing electricity. A wind turbine primarily consists of a main tower, blades, nacelle, hub, main shaft, gearbox, bearing and housing, brake, and generator [22]. The main tower is 50-100 m high. Generally, three blades made up of Fiber Reinforced Polyester are mounted on the hub, while in the nacelle the major parts are housed. Under normal operating conditions the nacelle would be facing the upstream wind direction [20]. The hub connects the gearbox and the blades. Solid high carbon steel bars or cylinders are used as main shaft. The gearbox is used to increase the speed ratio so that the rotor speed is increased to the rated generator speed [21]; it is the most critical component and needs regular maintenance. Oil cooling is employed to control the heating of the gearbox. Gearboxes are mounted over dampers to minimize vibration. Failure of gearbox may put the plant out of operation for an entire season as spares are often not available. Thus, new gearless configurations have become attractive for Regional distribution of the global installed wind power capacity (Source: [8]) Figure 1 Regional distribution of the global installed wind power capacity (Source: [8] Figure 2[23]. Horizontal axis turbines resemble airplane propellers, with two to three rotor blades fixed at the front of the tower and facing into the wind. This is the most common design found today, making up most of the large utility-scale turbines on the global market. Vertical axis turbines resemble a large eggbeater with rotor blades attached vertically at the top and near the bottom of the tower and bulging out in the middle.
The most dramatic improvement has been in the increasing size and performance of wind turbines. From machines of just 25 kW twenty years ago, the commercial size range sold today is typically from 600 up to 2,500 kW, with 80 m diameter rotors placed on 70-100 m high towers. In 2003, the German company Enercon erected the first prototype of a 4.5 MW turbine with a rotor diameter of 112 m. Wind turbines have a design lifetime of 20-25 years, with their operation and maintenance costs typically about 3-5% of the cost of the turbine. For the share of different wind turbine types in India see Table 1.
At present, efforts are being made to develop a low cost, indigenous, horizontal axis Wind Energy Generator (WEG) of 500 kW rating. The WEG will have a two bladed rotor and the tower will be a tubular tower with guys. The organizations contributing in the development of the WEG are (i) National Aerospace Laboratory (NAL), (ii) Structural Engineering Research Centre (SERC), (iii) Sangeet Group of Companies, and (iv) Center for Wind Energy Technology (C-WET). It will be specially suited for Indian wind conditions i.e. relatively low wind speeds and dusty environment. It is further learnt that this WEG may cost almost 50% as compared to the other WEGs of the same rating commercially available in India. The WEG is nearing completion and likely to be completed by April-2007 [24].
Wind in India is dominated by the strong south-west summer monsoon, which starts in May-June, when cool, humid air moves towards the land and the weaker northeast winter monsoon, which starts in October, when cool, dry air moves towards the ocean. During the period March to August, the wind is uniformly strong over the whole Indian Peninsula, except the eastern peninsular coast. Wind speeds during the period November to March are relatively weak, though higher winds are available during a part of the period on the Tamil Nadu coastline.
In order to tap the potential of wind energy sources, there is a need to assess the availability of the resources spatially. A Wind Resource Assessment Programme was taken up in India in 1985 [19]. Around 1150 wind monitoring/ mapping stations were set up in 25 states and Union Territories (UTs) for this purpose. Over 200 wind monitoring stations in 13 states and UTs with annual mean wind power density greater than 200 W/m 2 at a height of 50 m above the ground level show wind speeds suitable for wind power generation [25]. The wind power density at a height of 50 m above the ground level is depicted in Figure 3. On a regional basis, more detailed assessments have been done. Ramachandra and Shruthi [26] employed a geographical information system (GIS) to map the wind energy resources of Karnataka state and analyzed their variability considering spatial and seasonal aspects. A spatial data base with data of wind velocities has been developed and used for evaluation of the theoretical potential through continuous monitoring and mapping of the wind resources. The study shows that the average wind velocity in Karnataka varies from 0.9 m/s in Bagalkote to 8.3 m/s in Chikkodi during the monsoon season. Agroclimatic zone wise analysis shows that the northern dry zone and the central dry zone are ideally suited for harvesting wind energy for regional economic development.
Onshore wind power potential in the country has been assessed at 45 GW assuming 1% of land availability for wind power generation in the potential areas [27]. However, it is estimated that a penetration (supply fraction) of wind power on a large grid can be as much as 15-20% without affecting grid stability due to requirement of reactive power [28]. Therefore, at present, it is not technically feasible to exploit the full wind power potential in view of total installed power-generating capacities from conventional power generating methods including hydro-electric power plants in different states. Considering a maximum of 20% penetration of existing capacities of the grids through wind power in the potential states, technical potential for grid interactive wind power is presently limited to only 13 GW [25]. Total technical potential for wind power in the country is expected to increase with augmentation of grid capacity in potential states. Table 2 presents a state wise break-up of the estimated technical potential along with wind power installed capacity as on 30 September 2006. One should note that Tamil Nadu has already surpassed the presumed technical potential, indicating that it may be underestimated for India as a whole.
The original impetus to develop wind energy in India came in the early 1980s from the government, when the Commission for Additional Sources of Energy had been set up in 1981 and upgraded to the Department of Non-Conventional Energy Sources in 1982. The setup of these institutions was due to the wish to encourage a diversification of fuel sources away from the growing demand for coal, oil and gas required to meet the demand of the country's rapid economic growth [29]. A market-oriented strategy was adopted from inception, which has led to the successful commercial development of the technology. The broad based national programme included wind resource assessment; research and development support; implementation of demonstration projects to create awareness and opening up of new sites; involvement of utilities and industry; development of infrastructure capability and capacity for manufacture, installation, operation and maintenance of wind power plants; and policy support.
The Ministry of Non-Conventional Energy Sources (MNES) which was set up in 1992 has been providing support for research and development, survey and assessment of wind resources, demonstration of wind energy technologies and has also taken fiscal and promotional measures for implementation of private sector projects [30,31]. India now has a fairly well-developed and growing wind power industry with a number of Indian companies involved in manufacturing of wind turbines. These companies have tied up with foreign wind power industries for joint venture/licensed production in India, for their market shares see Table 1. Wind turbines up to 2 MW are presently manufactured in India [25]. Figure 4 presents the cumulative capacity of wind power installed in India over time.
A notable feature of the Indian programme has been the interest among private investors/developers in setting up of commercial wind power projects. This is due to a range of fiscal incentives provided by the Indian government such as 80% accelerated depreciation, tax holiday for power generation projects, concessional customs and excise duty as well as liberalized foreign investment procedures [25,29,31]. The Indian Renewable Energy Development Agency (IREDA) provides concessional loans. Current interest rates are 9.5% for a maximum repayment period of 10 years and 9.0% for a maximum repayment period of 8 years [25]. Table 3 presents the summary of key central government incentives for wind power projects in India.
The MNES has issued guidelines to all state governments to create an attractive environment for the export, purchase, wheeling and banking of electricity generated by wind power projects. The guidelines include the promotion of renewables including wind energy through preferential tariffs and a minimum obligation on distribution companies to source a certain share of electricity from renewable energy. However, only a subset of states is actually complying with these guidelines. The State Electricity Regulatory Commissions (SERCs)of Andhra Pradesh, Madhya Pradesh, Karnataka and Maharashtra provide preferential tariffs for wind power. Maharashtra, Andhra Pradesh, Karnataka, Madhya Pradesh and Orissahave enacted the renewables obligation on distributors. The problem with incentives on the state level is that they vary erratically and thus cannot be taken for granted by project developers (see Table 4 for the case of Rajasthan).
The main attraction for private investment is the fact that owning a wind turbine assures a profitable power supply compared to the industrial power tariff, which is kept artificially high to cross-subsidize electricity tariffs for farm- Wind power potential in India (Source: Centre for Wind Energy Technology (C-WET), Government of India) Figure 3 Wind power potential in India (Source: Centre for Wind Energy Technology (C-WET), Government of India).
ers. Therefore, clusters of individually owned wind turbines appear to substitute grid electricity. More than 97% of investment in the Indian wind sector is provided from the private sector [25]. However, the impending liberalization under the Electricity Act 2003 may take away this key incentive if industrial power users can procure electricity at competitive rates.
The Clean Development Mechanism under the Kyoto Protocol allows developing countries to generate emission credits (CERs) for industrialized countries by GHG emission reduction projects such as wind power. The sale of CERs could help to accelerate wind power development in India. We assess the theoretical CDM potential of wind power projects in India before discussing whether at the current market situation such projects could become attractive.
Results: CDM Potential of Wind Power Projects in India
Considerable variation has been observed in the reported values of the PLF of the wind power plants in the CDM Project Design Documents (Table 5). Therefore, in this study to estimate the CDM potential of wind power projects in India the PLF of the wind power plants have been taken as 25%. There are five regional grids within the country -the Northern, Western, Southern, Eastern and North-Eastern. Therefore, the CO 2 emissions mitigation potential through wind power projects in India is estimated on the basis of the regional grids, whose emission factors have been calculated by the Central Electricity Authority (CEA) of the Government of India in 2006. Table 6 presents the estimated values of CDM potential through wind power projects in India on the basis of the regional baselines.
We now do a sensitivity analysis with regards to additionality determination. The case of lax additionality assumes that all wind power projects submitted are registered. The median case assumes that the rejection rate remains at the current level (2 out of 18 projects, i.e. 11%). The case of stringent additionality assumes that 50% of the projects are registered. In the lax additionality case, gross annual CER potential of wind power in India reaches 86 million. Similarly, based on the technical potential of wind power projects in India the CDM potential has been estimated as 25 million tonne. Among all the states in India, Gujarat has the largest CO 2 emissions mitigation potential through wind power (19 million tonne) followed by Andhra Pradesh (15.6 million tonne), Madhya Pradesh (10.8 million tonne), Karnataka (12.5 million tonne), Rajasthan (8.9 million tonne), and so on ( Table 6). The annual electricity generation by wind power projects based on the gross and technical potential is also given in Table 6. With 25% PLF of wind power projects the annual gross electricity generation potential has been estimated at 99 TWh whereas the annual technical electricity generation potential has been estimated at 28 TWh. Source: [25] Figure 5 shows the development over time. It may be noted that with the current trend of dissemination of wind power projects in the country, around 22 GW capacity could be installed up to the end of first crediting period in the SS scenario whereas in the OS scenario 36 GW capacity could be installed. Up to the the year 2020, more than 44 GW capacity of the wind power projects are expected to be installed that would generate 87 million CERs.
Discussion: How the CDM could be applied to the Diffusion of Wind Power Projects?
The CDM was slow to take off as after the Marrakech Accords of 2001 it took another three years to define the bulk of the rules. The CDM Executive Board (EB) which is the body defining the CDM rules surprised many observers by taking a rigorous stance on critical issues such as baseline and additionality determination (see below).
Once the key rules were in place, a "gold rush" happened in 2005 and 2006. Over 1500 projects were submitted with an estimated CER volume of about 1.5 billion. However, the volume share of renewable energy projects has been less than expected due to the high attractiveness of projects reducing industrial gases and methane from waste. Figure 6 presents the status of the wind power projects from India. Out of the 89 projects submitted to the UNFCCC, 18 projects had been registered and two projects had submitted the request for registration. 67 projects were at the validation stage whereas 2 projects had been rejected by the EB.
Baseline
The quantification of GHG benefits of a CDM project is done by means of a "baseline". A baseline describes the (theoretical) emissions that would have occurred in case the CDM project was not implemented. The amounts of CERs that can be earned by the project are then calculated as the difference of baseline emissions and project emissions. The CO 2 emissions mitigation benefits associated with a wind power project depend upon the amount of electricity saved. To estimate the CDM potential of wind power project in the country, the approved consolidated baseline methodology for grid-connected electricity generation from renewable sources ACM0002 (Version 06) has been used. For the small scale CDM (SSC) projects, the small scale methodology AMS-I.D. "Grid connected renewable electricity generation" in its version of 23 rd December 2006 [34] can be used which explicitly mentions wind power for electricity generation. In India, most of the wind power projects are grid connected and substitute grid electricity. Therefore, for such systems, the baseline is the kWh produced by the renewable generating unit multiplied by an emission coefficient (measured in g CO 2 eq./kWh) calculated in a transparent and conservative manner. This coefficient is 800 g CO 2 eq./kWh for a grid where all generators use exclusively fuel oil and/or diesel fuel, whereas it is the weighted average of the so-called operating margin (emission factor of all thermal power plants serving the grid) and build margin (emission factor of the most recently built plants that provide 20% of the grid's electricity). For wind power, the weight of the oper- 25% iii) Blades for rotor of wind operated electricity generators for the manufacturers or the manufacturers of wind operated electricity generators. 5% iv) Parts for the manufacturer or the maintenance of blades for rotor of wind operated electricity generation.
5% v) Raw materials for manufacturer of blades for rotor of wind operated electricity generators. 5% ating margin is 0.75 while the build margin is weighted at 0.25. Alternatively, project developers can use the weighted average emissions of the current generation mix but this will always be less than the emission factor derived previously and thus unattractive. For intermittent and non-dispatchable generation types such as wind and solar photovoltaic, ACM0002 allows to weigh the operating margin (OM) and build margin (BM) at 75% and 25%, respectively, however, in this study we have used combines margin by using equal weights for OM and BM as given in CEA document [35].
Additionality
To maintain the environmental integrity of the Kyoto Protocol, CERs are given only for "additional" activities that would otherwise not be expected to occur [36]. Therefore, any CDM project requires careful analysis of additionality. This has probably been the most contentious point in the development of the CDM and also resulted in great confusion amongst project developers [37,38]. The Kyoto Protocol stops short of requiring project proponents to show strict financial additionality -that the CDM revenue makes an uneconomic project economic -and left scope for the CDM EB to refine the demonstration of additionality. The EB subsequently took a fairly strict interpretation of additionality and developed an additionality tool which formally is voluntary but which has become de facto mandatory as it was incorporated in most baseline methodologies. The additionality tool requires an investment analysis and/or a barrier analysis to determine whether the CDM project is the most attractive realistic alternative. This means that the project can be profitable and additional as long as developers can show that another project type was even more profitable.
It is estimated that wind power in many countries is already competitive with fossil fuel and nuclear power if social/environmental costs are considered [28]. However, in India, in terms of costs per kWh in grid-connected areas, costs of wind power are higher than electricity provided by a coal plant projects thus be additional at any rate. The unit cost of electricity generation is 0.05 €/kWh for coal and 0.06 €/kWh for fuel oil based system whereas in case of wind, the unit cost of electricity generation is 0.07 €/kWh in the best locations. The problem with this reasoning is that if wind projects are used to displace expensive grid electricity for industrial consumers (priced at 0.09 €/kWh [39]), they are invariably the most attractive alternative unless they are built in locations with low wind speed. The situation for wind projects that supply to the grid at the state-guaranteed feed-in tariff is less clear; the attractiveness depends on the level of the tariff.
As the investment test will not be passed by most wind projects (or only if they omit the tax incentives, as has been done by a project that achieved registration), project developers will use the barrier test. The barrier of higher capital cost compared to fossil fuel power plants is not really credible due to the abundance of capital for wind power in India and thus is mentioned only rarely. More credible barriers are low capacity utilization factor, and possible reduction in feed-in tariffs. The former depends on the siting of the project. The latter is very important as shown by the policy of Rajasthan (see Table 3 Tariff (ABT) in which the generators with firm delivery of power against commitment will start getting more prices for the generated power, whereas wind power producers cannot guarantee supply of electricity and will be thus receive lower rates. For the projects that substitute grid electricity at industrial tariffs, there is the risk that the wind power benefit will melt down as liberalization permits industrial electricity consumers to choose the supplier in a competitive environment. Some projects have also highlighted the technological risks associated with new types of wind turbines. Lack of familiarity and experience with such new technologies can lead to perceptions of greater technical risk than for conventional energy sources.
Doing the investment test -case study
A 125 MW wind project in Karnataka calculated an IRR of 7.3%. At that rate, the project would clearly be unattractive for an investor. However, the picture changes if one analyzes the project more closely. If one uses industry averages for the investment cost (Rs 5 crore per MW), the IRR is 11%. If one includes the accelerated depreciation of 80% in the first year and the 10 year income tax holiday, the IRR reaches 22% (personal communication by Mr. Sanjeev Chadha). It would be difficult to find serious alternatives that are more attractive. Nevertheless, the project was registered by the EB. Table 5 presents the additionality arguments of Indian wind power projects. 14 projects out of 20 have carried out investment and barrier analysis for the justification of additionality whereas 6 projects carried out the barrier analysis only. An assessment of the PDD's indicates that the investment analysis is not convincing in most of the cases. Two wind projects from India were rejected due to lack of additionality. The rejection was mainly due to the following statement in the annual report of the company that had invested in the projects: "The project is extremely beneficial on a standalone basis and has a payback period of three years with an internal rate of return in excess of 28 per cent. In addition to hedging Bajaj Auto's power costs, this investment also provides sales tax incentives and an income tax shield" [40].
Monitoring
For wind power plants, monitoring is easy -you just meter the electricity generated and sold to the grid.
Conclusion
Our estimates indicate that, there is a vast theoretical potential of CO 2 mitigation by the use of wind energy in India. On the basis of available literature, the gross potential of wind power is more than 45,000 MW. The annual CER potential of wind power in India could theoretically reach 86 million tonnes. Under more realistic assumptions about diffusion of wind power projects based on past experiences with the government-run programmes, annual CER volumes by 2012 could reach 41 to 67 million and 78 to 83 million by 2020. The projections based on the past diffusion trend indicate that in India, even with highly favorable assumptions, the dissemination of wind power projects is not likely to reach its maximum estimated potential in another 15 years. CDM could help to achieve the maximum utilization potential more rapidly as compared to the current diffusion trend if supportive policies are introduced.
CO 2 emissions mitigation potential of a windmill
The power output of a windmill essentially depends on the site/location specific parameters (such as wind speed, air density, etc.) and design parameters (such as coefficient of performance of the wind rotor, swept area of the rotor, cut-in, cut-out and rated wind speed of the rotor, etc.) of the windmill. Therefore, the annual useful energy, AUE wind , delivered by a windmill can be estimated as [20] where γ represents the windmill turbine mechanical availability factor accounting for downtime during maintenance etc., P(v) the power produced by the windmill at wind speed v (in m/s), F(v) the Weibull probability distribution function, v ci the cut-in wind speed and v co the cutout wind speed of the windmill.
The power produced by the windmill at wind speed v may be expressed as [41] where C p represents the coefficient of performance of the wind rotor, ρ a the density of air, A the swept area of the rotor and v the wind speed.
The variation in wind speed at a location is often described by the Weibull probability distribution function F(v). The Weibull probability density function is given by the following expression [20,42] where k represents the shape parameter and c the scale parameter.
Substituting the values of P(v) and F(v) from Eqs. (2) and (3) into Eq. (1) the annual useful energy (in kWh) delivered by the windmill can be expressed as [43] AUE
Rejected 2%
Realistic CDM potential for wind power until 2020 Figure 5 Realistic CDM potential for wind power until 2020. Year Cumulative capacity of wind power projects (MW)
SSwind OSwind
The annual gross CO 2 emissions mitigation potential of a windmill essentially depends upon the annual electricity saved by the windmill and the CO 2 emission factor of the electricity.
where l (in fraction) represents the electrical transmission and distribution losses of the grid and CEF e the baseline CO 2 emission factor.
For a given capacity of a wind power project the CO 2 emissions mitigation potential can be estimated as: where P wind (in MW) represents the capacity of wind power project, PLF wind (in fraction) the plant load factor of the wind power project, CEF e the CO 2 emission factor of electricity. The term inside the second bracket of the right hand side of Eq. (6) is the annual amount of electricity saved by the wind power project.
Estimation of Diffusion of Wind Power Projects in India
The diffusion of a technology measured in terms of the cumulative number of adopters usually conforms to an exponential curve [44] as long as the new technologies manage to become competitive with incumbent technologies. Otherwise, the steep section of the curve would never be reached because technology use falls back to zero at the removal of subsidies [45]. The exponential growth pattern may be of three types -(i) simple exponential, (ii) modified exponential, and (iii) S-curve. Out of these three growth patterns, the simple exponential pattern is not applicable for the dissemination of renewable energy technologies, as it would imply infinite growth [46]. The modified exponential pattern (with a finite upper limit) is more reasonable but such a curve may not match the growth pattern in the initial stage of diffusion [47,48].
Empirical studies have shown that in a variety of situations the growth of a technology over time may conform to an S-shaped curve, which is a combination of simple and modified exponential curves [49,50]. The S-shaped curves are characterized by a slow initial growth, followed by rapid growth after a certain take-off point and then again a slow growth towards a finite upper limit to the dissemination [51]. However, a logistic model is used to estimate the theoretical cumulative capacity of wind power projects at different time periods.
As per the logistic model, the cumulative capacity, P(t), of the wind power projects disseminated up to a particular period (t th year) can be expressed as [49] where P max represents the estimated maximum utilization potential of the renewable energy technology in the country. The regression coefficients a and b are estimated by a linear regression of the log-log form of equation as given below. Figure 7 represents the projected time variation of the cumulative capacity of wind power using the logistic model considered in the study. Two cases such as business as usual or standard scenario (SS) and optimistic scenario (OS) are presented. The values of the regression coefficients using a logistic model have been estimated by regression of the time series data for the installation of wind power (Figure 4) extracted from the annual reports of the MNES [25]. In the optimistic scenario it is assumed that, in the past, if the diffusion of wind power would have been driven by the market forces instead of subsidies then the cumulative capacity of installation of wind power would be three times more than the actual level [52,53]. Our results indicate that in India, even with highly favourable assumptions, the dissemination of wind power projects is not likely to reach its maximum estimated potential in another 15 years. But all these time = + a bt (8) Time variation of cumulative capacity of wind power in India using logistic model Figure 7 Time variation of cumulative capacity of wind power in India using logistic model. | 7,725.2 | 2007-07-30T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
The Intracellular Domain of the Low Density Lipoprotein Receptor-related Protein Modulates Transactivation Mediated by Amyloid Precursor Protein and Fe65*
Low density lipoprotein-related protein (LRP) is a transmembrane receptor, localized mainly in hepatocytes, fibroblasts, and neurons. It is implicated in diverse biological processes both as an endocytic receptor and as a signaling molecule. Recent reports show that LRP undergoes sequential proteolytic cleavage in the ectodomain and transmembrane domain. The latter cleavage, mediated by the Alzheimer-related γ-secretase activity that also cleaves amyloid precursor protein (APP) and Notch, results in the release of the LRP cytoplasmic domain (LRPICD) fragment. This relatively small cytoplasmic fragment has several motifs by which LRP interacts with various intracellular adaptor and scaffold proteins. However, the function of this fragment is largely unknown. Here we show that the LRPICD is translocated to the nucleus, where it colocalizes in the nucleus with a transcription modulator, Tip60, which is known to interact with Fe65 and with the APP-derived intracellular domain. LRPICD dramatically inhibits APP-derived intracellular domain/Fe65 transactivation mediated by Tip60. LRPICD has a close interaction with Tip60 in the nucleus, as shown by a fluorescence resonance energy transfer assay. These observations suggest that LRPICD has a novel signaling function, negatively impacting transcriptional activity of the APP, Fe65, and Tip60 complex in the nucleus, and shed new light on the function of LRP in transcriptional modulation.
LRP, 1 a member of the low density lipoprotein (LDL) receptor family, is a type I integral membrane protein that has a very large extracellular domain and a relatively small cytoplasmic tail. Cleavage by furin (1,2) produces the mature cell surface receptor, which consists of an 85-kDa membrane-bound carboxyl fragment and a noncovalently attached 515-kDa ami-no-terminal fragment. The 105-amino acid cytoplasmic domain interacts with intracellular adaptor and scaffold proteins. LRP is a multifunctional protein that interacts with and mediates endocytosis of a broad range of secreted proteins and cell surface molecules, such as plasminogen activators, plasminogen activator inhibitor-1, ␣ 2 -macroglobulin, amyloid precursor protein (APP), and apolipoprotein E (3)(4)(5).
LRP has a complex set of interactions with APP. They interact both by virtue of an extracellular ligand-receptor interaction and via an interaction between the cytoplasmic tails of APP and LRP, mediated by the adaptor protein Fe65 (6 -11). Like APP, shedding of the extracellular domain of LRP from the cell surface by a metalloproteinase has been reported (12,13). Furthermore, a recent report suggested that LRP undergoes a presenilin-dependent intramembranous proteolysis (14) in a manner analogous to Notch and APP. This cleavage releases a cytoplasmic fragment of LRP (LRPICD) from the membrane. However, the function of LRPICD is not yet known.
In addition to its well characterized role as an endocytic receptor, recent data suggest that LRP (like other LDL receptor family members, VLDLR and ApoER2), could have essential signaling functions that involve interactions of its cytoplasmic tail with the intracellular signaling machinery (15)(16)(17). Transmembrane proteins can be cleaved within the plane of the membrane to liberate cytosolic fragments that enter the nucleus to control gene transcription. This mechanism, called regulated intramembranous proteolysis (Rip), adds molecular diversity in the field of signaling (for a review, see Ref. 18). APP recently has been shown to play a role in gene transcription, because the APP intracellular domain (AICD) (19), in collaboration with the adaptor protein Fe65, can transactivate a Gal4 reporter gene by interacting with the histone acetyltransferase, Tip60 (20).
Our current data show that LRPICD also may interact with Tip60, but in contrast to APP, LRPICD is a negative regulator of transactivation in this system.
EXPERIMENTAL PROCEDURES
Generation of Expression Constructs-A summary of constructs APP and LRP, which were used for this study, is shown in Fig. 1. The generation of the full-length LRP-GFP was described previously (10). The LC-Myc construct, which encodes only the light chain of LRP, tagged with Myc at its C terminus has been used and described previously (21). LRP165-Myc was generated from the LC-Myc plasmid. LC-Myc was digested with PstI, and the band containing the vector and the carboxyl terminus of LRP (coding 165 amino acids) was extracted and self-ligated to make the LRP165-Myc construct. The LRP105-Myc construct, encoding only the cytoplasmic domain of LRP, was amplified by PCR, using two sets of primers, 5Ј-CGCTCGAGGCCACCATGGTGGT-ATTCTGGTATAAGCGG-3Ј and 5Ј-CGAAGCTTGGTGCCAAGGGGTC-CCCTATCTC-3Ј. Then it was ligated into XhoI and HindIII sites of pcDNA3.1-Myc B (Invitrogen) vector.
LRP105-GFP NPXY and LRP105-Myc NPXY constructs with doubly mutated NPXY to APXA, were generated by PCR, using the LC-Myc construct containing doubly mutated NPXY motifs as a template as described above. Substitutions of asparagine and tyrosine to alanines in the two NPXY motifs of the cytoplasmic domain of the LC-Myc were performed using Transformer site-directed mutagenesis kit (Clontech) and confirmed by sequencing. Signal LRP105 encodes the signal peptide (Ig chain leader sequence) at its N terminus to ensure the secretion of expressed protein; the LRP105 fragment amplified by PCR was inserted into the PstI and XbaI sites of pSecTagB (Invitrogen).
LRP105-Gal4 contains a Gal4-binding domain at its C terminus, which was amplified by PCR from pMst-APP695 construct (see below) using two sets of primers, 5Ј-GTCGCTAGCAGGCCACCATGAAGCTA-CTGTCTTCT-3Ј and 5Ј-GACCTCGAGCGATACAGTCAACTGTCTTT-G-3Ј, and then inserted into the NheI and XhoI sites of the LRP105-Myc construct.
pMst-APP695 (APP-Gal4) was generously provided by Dr. T. Sudhof (University of Texas Southwestern Medical Center, Dallas, TX), in which Gal4 was inserted into the intracellular tail of APP695 at the cytoplasmic boundary of the transmembrane region as well as the Gal4 reporter plasmid (pG5E1B), which contains five Gal4 binding sites and the E1B minimal promotor in front of the luciferase firefly gene. pRK-Fe65 has been used and described previously (10).
pcDNA3.1-mDab1 was a generous gift from Dr. J. Hertz (University of Texas Southwestern Medical Center, Dallas, TX). Numb in the expression vector pSG5 was a generous gift of Dr. Y. N. Jan (University of California, San Francisco).
The AICD plasmid, encoding the C-terminal 58 amino acids of fulllength APP, tagged with Myc, has been used and described previously (22). pCMV--Gal was obtained from Stratagene (La Jolla, CA). PGL2promotor vector, containing SV40 promotor upstream of the luciferase gene, was used as a positive control for firefly luciferase production (Promega, Madison, WI). PGL2-Basic vector was obtained from Promega and used as a negative control for firefly luciferase production.
The generation of the wild type Tip60 construct has been described previously (22). Briefly, Tip60 was cut out from the pOZ-Tip60 plasmid (23) with restriction enzymes XhoI and NotI. The expression vector pEGFP-N1 (Clontech) was digested with XhoI and NotI, cutting out the GFP. The wild type Tip60 was digested and ligated into a mammalian expression vector backbone derived from the EGFP-N1 plasmid (Clontech), from which the EGFP coding sequence had been deleted. Antibodies and Reagents-Mouse monoclonal antibody 11H4 against LRP-C-terminal fragment was obtained from American Type Culture Collection (Manassas, VA). Mouse monoclonal anti-Myc antibody was purchased from Invitrogen. The antibody against amino acids 494 -513 of Tip60 was purchased from Upstate Biotechnology, Inc. (Lake Placid, NY).
Rabbit anti-Fe65 was a generous gift from Dr. Buxbaum (Mount Sinai School of Medicine, New York). Anti-APP C terminus C8 was raised against the 8 carboxyl-terminal amino acids of APP, a generous gift from Dr. Selkoe (Brigham and Women's Hospital, Boston, MA).
Cell Culture Conditions and Transient Transfection-H4 cells derived from human neuroglioma cells and HEK293 cells derived from human embryonic kidney cells are used in this study. Both H4 cells and HEK293 cells were cultured in OPTI-MEM I with 10% fetal bovine serum. Transient transfection of the cells was performed using a liposome-mediated method (FuGene 6; Roche Applied Science). For immunocyotochemistry, cells were split into 4-well chambers 1 day before the transfection. First, a mixture of 1 g of plasmid DNA and 3 l of FuGene6 was made in 100 l of Dulbecco's modified Eagle's medium and left for 15-30 min at room temperature. Then 25 l of this mixture was added to the medium in each well. The incubation time was 24 -48 h. Double transfection of LRP and Tip60 was performed in the same way. For the transactivation assay, HEK293 cells were split into 12well plates 1 day before transient transfection. To see the effect of LRPICD on the transactivation induced by APP and Fe65, various LRPICD constructs were co-transfected with pMst-APP and Fe65, together with pG5E1B-Luc. pcDNA3.1 (Invitrogen) was added to make up the equal amount of DNA transfected. PGL3-SV40 was used as a positive control for firefly luciferase, and pGL2-Basic was used as a standard.
In order to assess the role of LRPICD in transactivation, LRP105-Gal4 was co-transfected with pG5E1B-Luc with or without other plasmids encoding LRP-interacting proteins (Fe65, mDab1, Numb, and AICD).
All cells were co-transfected with pCMV--Gal, a constitutive -galactosidase expression vector, to standardize for transfection efficiency.
Immunohistochemistry-Immunostaining was done on the cells 24 -48 h post-transfection. Cells were fixed in 4% paraformaldehyde for 10 min, washed in Tris-buffered saline (pH 7.3), and permeabilized by 0.5% Triton X-100 for 20 min and blocked with 1.5% normal goat serum for 1 h. To detect the localization of LRPICD, cells transfected with LRP105-Myc or LRP105-Gal4 were immunostained by mouse anti-Myc monoclonal antibody (1:1000; Invitrogen) or 11H4, respectively, for 1 h at room temperature. Cells were then washed three times in Trisbuffered saline and labeled by Cy3-conjugated anti-mouse antibody (10 g/ml; Jackson Immunoresearch, West Grove, PA) for 1 h at room temperature. Immunostained cells were stored in Tris-buffered saline at 4°C until imaging.
To detect the interaction of LRPICD and Tip60 in the nucleus, cells co-transfected with constructs of LRP105-GFP and Tip60 were used. Co-transfected cells were fixed and permeabilized and blocked in normal goat serum and then incubated with primary antibody against Tip60, followed by anti-rabbit secondary antibodies conjugated with Cy3 (10 g/ml; Jackson Immunoresearch) to visualize the localization of Tip60.
Reporter Gene Assays-HEK293 cells were harvested for reporter gene assays 24 h after transfection. The culture medium was removed, and the cells were washed once with cold phosphate-buffered saline. After the addition of 100 l of reporter lysis buffer (Promega) per well of the 12-well plate, cells were collected and pelleted, and the supernatant was saved. Luciferase gene expression was analyzed using the luciferase assay system (Promega) in a 96-well plate. The luminescence emission was measured by a Wallac plate reader. -Galactosidase assays for internal control of transcription efficiency were carried out with an aliquot of the cell lysates prepared for the luciferase assay using the -galactosidase enzyme assay system (Promega). Luciferase activity was obtained by dividing the relative luminescence units values by those from the -galactosidase reaction, and then this number was standardized by dividing by that of pGL2-Basic-transfected cells. All transfections were done in triplicate and repeated in at least three independent experiments. Values shown are averages of transactivation assays carried out in triplicate.
FIG. 1. Reagents used in this study.
Shown are the constructs and antibodies used in this study. LRP105 constructs contain only the cytoplasmic portion of LRP, whereas the LRP165 construct starts from the extracellular domain, containing the entire membrane-spanning and cytoplasmic region. All LRP-truncated constructs are tagged with EGFP or Myc at their C terminus (except for LRP-Gal4). In NPXY mutants, both NPXY motifs are mutated to APXA. In the signalLRP105 construct, a signal peptide was added to the N terminus of LRP105.
Fluorescence Resonance Energy Transfer (FRET): Assessment by Fluorescence Lifetime Imaging Microscopy (FLIM)-FRET is observed
when two fluorophores are in very close proximity (i.e. Ͻ10 nm). FRET measurements using FLIM rely on the observation that the fluorescence lifetime (the time of fluorophore emission after brief excitation, measured in picoseconds) decreases in the presence of a FRET acceptor. The decrease in lifetime is proportional to the distance between donor and acceptor (see Refs. 24 and 25). FLIM allows quantitative determination of the distance between a donor and acceptor fluorophore on a scale of nanometers. We recently developed a new technique that can quantitate protein-protein interactions using multiphoton microscopy (25,26). A mode-locked Ti-sapphire laser (Spectra Physics) sends a 1-fs pulse every 12 ns to excite the fluorophore. Images were acquired using a Bio-Rad Radiance 2000 multiphoton microscope. We used a high speed Hamamatsu microchannel plate detector and hardware/software from Becker and Hickl (Berlin, Germany) to measure fluorescence lifetimes on a pixel-by-pixel basis. Donor fluorophore (GFP) lifetimes were fit to two exponential decay curves to calculate the fraction of fluorophores within each pixel that either interact or do not interact with an acceptor (in this case, Cy3-labeled Tip60). These lifetimes were then mapped by pseudocolor on a pixel-by-pixel basis over the entire image. As a negative control, GFP lifetime was measured in the absence of acceptor (i.e. the cells were not stained with Cy3).
Localization of LRP Cytoplasmic Domain in Various Deletion
Mutants-We asked whether the proteolytic intramembranous cleavage of LRP would affect the localization of LRPICD, as is the case for proteins that undergo Rip. H4 cells were transiently transfected with various LRP constructs, and the localization of the LRP C terminus was examined (Fig. 2). The membrane-spanning molecules, full-length LRP, LC, LRP165, and LRP105 construct with a signal peptide are localized in Golgi, endoplasmic reticulum, cell surface, and endosomes (Fig. 2). In contrast, LRP105-transfected H4 cells (tagged with either EGFP or Myc at their C termini) showed the LRP C terminus predominantly in the nucleus. LRP105-Gal4, when stained with 11H4, also showed a predominantly nuclear localization.
LRP binds to diverse cytoplasmic proteins that have been found to interact with the tail of LRP (3). The tetraamino acid motif NPXY is present in two copies in the LRP tail. One or both of the NPXY motifs might interact with other proteins in the cytoplasm, such as Fe65 (7), and mammalian Disabled-1 (mDab1) (27). The LRP105 double NPXY mutant, which has APXA substitution instead of NPXY in both copies of the LRP tail, also showed a predominantly nuclear signal, suggesting that the NPXY motifs are not necessary for LRPICD to be translocated to the nucleus and that LRPICD localization in the nucleus is independent of the interaction via NPXY with adaptor proteins, like Fe65. Thus, these results show that the intracellular domain of LRP is translocated to the nucleus.
The Effect of LRPICD on Transcription-As described above, the cytoplasmic domain of LRP was localized predominantly in the nucleus, suggesting that it may have some function as a transcriptional modulator, like other proteins that undergo regulated intramembranous proteolysis. We hypothesized that LRPICD might be able to modulate gene transcription. To investigate the role of LRPICD in transcription, we employed a luciferase-based reporter gene assay. We fused the cytoplasmic tail of LRP to the Gal4 DNA binding domain (Gal4) at its C terminus (LRP105-Gal4). HEK293 cells were transfected with LRP105-Gal4 along with a reporter plasmid. LRP105-Gal4 had little or no activity on the Gal4-dependent promotor (Fig. 3A), although its localization was found to be predominantly in the nucleus (confirmed by 11H4 staining). This result suggests that LRPICD alone is not sufficient to strongly activate the transcriptional response. We considered the possibility that LRPICD may require an adaptor protein to activate transcription, like AICD requiring Fe65. We transfected LRP105-Gal4 with adaptor/interacting proteins to HEK293 cells, along with a reporter plasmid. Little transactivation (4 -8-fold) was observed with LRP105-Gal4 by itself or in the presence of Fe65, Numb, mDab1, or AICD compared with pGL2-Basic plasmid (Fig. 3A). Thus, the cytoplasmic tail of LRP, when overexpressed in cells as a fusion protein with a heterologous DNA binding domain, does not stimulate transcription robustly in comparison with APP-Gal4 or especially APP-Gal4 plus Fe65 (Ͼ200-fold).
We then tested whether LRPICD could have some effect on transactivation induced by APP and Fe65 (20), since LRP is known to interact with APP via its extracellular and intracel- lular domains (6 -10). Co-transfecting LRPICD greatly inhibited the transcription (e.g. to less than 10%) mediated by pMst-APP and Fe65, suggesting that LRPICD is a potent transcription inhibitor in this system.
To determine whether this effect depends on the nuclear translocation of LRPICD or interaction with Fe65, we used various LRPICD constructs: LRP105, LRP105 double NPXY mutant, LRP105 with a signal peptide (which is membraneassociated in the Golgi/endoplasmic reticulum), and LRP165. As shown in Fig. 3B, LRP105 double NPXY mutant (which translocates to the nucleus but does not interact with Fe65) had the same inhibitory effect as LRP105. By contrast, LRP105 with a signal peptide and LRP165, which do not translocate to the nucleus, do not inhibit AICD-mediated transactivation. Thus, this result suggests that the inhibitory effect by LRPICD depends on its nuclear translocation and that this effect is independent of its interaction with adaptor proteins via NPXY motifs.
We reasoned that this inhibition could be due to inhibition of AICD generation or translocation to the nucleus, inhibition of AICD/Fe65 interaction with Tip60, or nonspecific inhibition of the luciferase reporter. In order to see whether LRPICD nonspecifically inhibited the luciferase reporter system, we cotransfected pGL2-SV40 with LRPICD plasmid. pGL2-SV40 was used as a positive control for the firefly luciferase. Cotransfecting LRPICD with this plasmid did not affect transactivation or the luciferase read-out.
Whether the nuclear translocation of AICD is affected by LRPICD or not was examined by triple transfection of LRP105, APP, and Fe65 into H4 cells. Cells were immunostained by rabbit anti-Fe65 antibody (labeled by Cy5) and mouse 11H4 antibody (labeled by Cy3). APP770-GFP signal is localized in the cytoplasm in the absence of Fe65 (Fig. 4A); however, APP770-GFP signal was found in the nucleus as previously reported in the presence of Fe65 (28), regardless of LRPICD co-transfection (Fig. 4B). To confirm these results, cells were transfected with pMst-APP, Fe65-Myc, and LRP105-GFP. Cells were immunostained by rabbit C8 antibody (labeled by Cy3) and mouse monoclonal anti-Myc antibody (labeled by Cy5). The APP C terminus was also localized in the nucleus (data not shown), confirming that the translocation of APP C terminus was not inhibited by LRPICD.
Interaction of LRPICD with Tip60 by FLIM Analysis-We next tested the possibility that LRPICD may interact with Tip60, thus interfering with transactivation by APP. We first examined the localization of LRPICD and Tip60 in co-transfected H4 cells by confocal microscopic analysis. The LRP105 singly transfected cells showed nuclear localization of LRPICD, with a uniform staining pattern in the nucleus, as shown in Fig. 2. When LRP105 is co-transfected with Tip60, the intranuclear localization of LRPICD changes noticeably, and it becomes localized in subnuclear compartments, showing a perfect match of localization with Tip60 (Fig. 5A).
To further test the hypothesis that LRPICD interacts with Tip60 in the nucleus, we utilized a morphologically based new FRET technique that can reveal protein-protein interactions in intact cells, FLIM.
Fluorescence lifetime is influenced by the surrounding microenvironment and is shortened in the immediate vicinity of a FRET acceptor molecule. The degree of lifetime shortening if inherently a quantitative measure of proximity and changes in this quantity reflect alterations in proximity that can be displayed with very high spatial resolution in a pseudocolor-coded image. If the molecules are close together, the donor fluorescence lifetime will be shorter, and the color will be closer to red. Our negative control (in the absence of an acceptor molecule) showed that the lifetime of GFP alone is 2122 Ϯ 50 ps (mean Ϯ S.D.) (see Table I). FLIM analysis of the co-transfected H4 cells with LRP105-GFP (donor) and Tip60 (labeled by Cy3, acceptor) showed that the average fluorescence (GFP) lifetime decreased to 1560 Ϯ 200 ps in the co-localized areas of the nucleus in the presence of acceptor (Table I and Fig. 5B, red-orange staining in the FLIM image), indicating that LRPICD and Tip60 are in close proximity in some special subnuclear locations. As a negative control, FLIM analysis also revealed that there is no detectable interaction between LRPICD and Tip60 in cytoplasmic compartments; the fluorescence lifetime of GFP in nonnuclear compartments, where it does not colocalize with Tip60, was not different from that seen in GFP alone (shown in bluegreen in the FLIM image of Fig. 5B).
Taken together, the shift in nuclear localization and the FRET results both strongly suggest that LRPICD closely interacts with Tip60. FIG. 3. Transactivation assays for LRP105-Gal4. The assay was repeated at least three times in a triplicate manner, and a typical result is shown here. The error bar shows the S.D. value. A, LRPICD itself does not show transactivation ability in this assay. The addition of other proteins (Fe65, mDab1, Numb, and AICD), which are known to interact with the cytoplasmic domain of LRPICD, does not enhance the transactivation. Transfection efficiency was corrected by a -galactosidase assay in all results. B, LRPICD markedly inhibits the transactivation induced by APP and Fe65. This transactivation is also inhibited by LRP105 NPXY mutant but not by LRP105 containing a signal peptide or LRP165, neither of which localized to the nucleus.
DISCUSSION
In this study, we report a novel function of the cytoplasmic tail of LRP, the proteolytic product of LRP intramembranous cleavage (14). LRP, like the Notch family members, undergoes cleavage by furin in a late secretory compartment (1) and can be cleaved by a metalloproteinase (13). A recent report sug-gested that a third site cleavage in the membrane releases the cytoplasmic domain of LRP, like other members that undergo Rip. This cleavage is probably due to ␥-secretase activity, because a potent ␥-secretase inhibitor, DAPT, strongly inhibited this intramembranous cleavage (14). Cleavage of LRP results in the release of this domain into the cytoplasm, where it is further translocated to the nucleus and may modulate cellular signaling.
As a Rip protein, how could LRPICD modulate cellular signaling? The intracellular domain of LRP contains various binding sites for adaptor and scaffold proteins that may recruit other biologically active proteins. As we demonstrate, LRPICD is translocated to the nucleus, suggesting that LRPICD might recruit transcriptional activators that could potentially stimulate the expression of target genes. Thus, it is likely that release of LRPICD from the membrane could translocate this complex to the nucleus, where it may modulate signaling. How- ever, we could not find a function as a potent transcriptional activator when LRPICD was used by itself. This is consistent with the report by May et al. (14). In their study, they found that transcription was moderately enhanced by LRPICD only in serum-deprived cells and not in cells growing in serumcontaining media. The significance of this condition is not yet well understood. LRP and APP both bind the scaffold protein Fe65, a transcriptional activator, via their cytoplasmic tails. Cao and Sudhof (20) recently reported that APP mediates the transcription activation in the presence of Fe65, using a luciferase reporter gene assay, and that this activation was dependent on the release of the AICD. Therefore, we investigated whether LRPICD may have some effect on the transactivation mediated by AICD and Fe65. As shown in Fig. 3, LRPICD had a potent inhibitory effect on the transcriptional activation mediated by APP and Fe65, which was dependent on its nuclear translocation.
What mechanism underlies this phenomenon? In the report by Cao and Sudhof (20), Fe65 played a critical role in transactivation by AICD. In order to activate transcription, AICD presumably interacts with DNA-binding proteins, histone acetyltransferases, and general transcription factors. By yeast two-hybrid screening, they found that Fe65 strongly interacts with Tip60, a histone acetyltransferase, and that the interaction of AICD-Fe65 complex with Tip60 is required for the transactivation. We demonstrate that LRPICD does not affect the first few steps of this process; in the presence of LRPICD, APP-Gal4 is cleaved, interacts with Fe65, and is translocated to the nucleus.
Therefore, we hypothesized that LRPICD may interact with Tip60, interfering with the interaction between AICD and Tip60. We observed that co-transfection of LRPICD with Tip60 leads to a change in the localization of LRPICD, so that it co-localized with Tip60 in specific subnuclear compartments. To test the hypothesis that LRPICD and Tip60 interacted, we utilized a novel technique to detect protein-protein interactions in intact cells. Our FRET result demonstrated close proximity between LRPICD and Tip60 in the nucleus of cells, suggesting that the potent inhibitory effect of LRPICD on AICD-mediated transactivation may be due to interference of LRPICD with the interaction with APP-Fe65-Tip60 complex.
LRP is remarkably tightly linked to APP metabolism. Cleavage of APP by -secretase leads to a truncated membranebound 99-amino acid transmembrane carboxyl terminus fragment, which is subsequently cleaved by a presenilin-dependent ␥-secretase activity to release amyloid- (which accumulates as senile plaques in Alzheimer's disease) and AICD. These events appear to occur, at least to a great extent, in early endosomal compartments (29), and APP endocytosis is modulated by LRP (6,8,29,30). Moreover, LRP also mediates endocytosis and clearance of amyloid- when bound to apolipoprotein E or ␣ 2macroglobulin, giving LRP a role in both generating amyloid- (via endocytosis of APP (6, 31) and clearance of amyloid- (via endocytosis of amyloid- complexes (32)(33)(34)). Extensive direct interactions between APP and LRP have also been demonstrated. In the extracellular compartment, APP isoforms containing the alternatively spliced Kunitz protease inhibitor domain interact with the ligand binding domains of LRP (8 -10, 30). The intracellular cytoplasmic tails of APP and LRP also interact, forming a heterotrimeric complex with the adapter protein Fe65 (7-10). Both APP and LRP appear to undergo regulated intramembranous cleavage by ␥-secretase, and the released cytoplasmic tails both translocate to the nucleus and interact with Tip60 (20,22). However, here APP and LRP show opposite effects, with AICD demonstrating robust transactiva-tion of a Gal4 reporter gene and LRP showing dramatic inhibition of this same assay. The function of AICD and LRPICD for other, physiological target genes remains unknown. Thus, LRP and APP share parallel, interacting metabolic pathways that lead to complementary roles in signal transduction.
Interestingly, one recent report describes the inhibition of transcriptional activity of AICD and Fe65 through the NF-B pathway (35). The NF-B pathway regulates transcription by external stimuli, including proinflammatory cytokines, through protein kinase cascades. AICD-and Fe65-mediated transactivation was decreased both by co-transfection of NF-B pathway plasmids and by the treatment of cells with cytokines (35). Our current data show that LRPICD can regulate AICD/ Fe65 transactivation. Thus, several pathways may regulate transcriptional activation mediated by AICD.
LRP has been reported to have two roles in modifying transcription. One role is as an endocytic receptor that internalizes the ligand, such as HIV-Tat protein (a transactivator for viral genes) (36), which leaves the endosomes after LRP-mediated endocytosis by a process that is poorly understood and enters the nucleus, where it stimulates transcription. In the second role, LRP itself is cleaved and enters the nucleus as an active transcriptional modulator. Two other members of the LDL receptor family, very low density lipoprotein receptor and ApoER2, act as co-receptors for the signaling ligand Reelin (27). It is not yet known whether LRP ligands under physiological circumstances can induce ␥-secretase cleavage of LRP or initiate signal transduction in other ways, but our current results strongly support the hypothesis that LRP should be considered as a signaling molecule in addition to its role as an endocytic receptor. | 6,260.8 | 2003-10-17T00:00:00.000 | [
"Biology"
] |
How does symbolic success affect redistribution in left-wing voters? A focus on the 2017 French presidential election
Redistribution preferences depend on factors such as self-interest and political views. Recently, Deffains et al. (2016) reported that redistributive behavior is also sensitive to the actual experience of success or failure in a real effort task. While successful participants (‘overachievers’) are more likely to attribute their success to their effort rather than luck and opt for less redistribution, unsuccessful participants (‘underachievers’) tend to attribute their failure to external factors and opt for more redistribution. The aim of the present study was to test how the experience of success (symbolic success) and political views interact in producing redistributive behavior in an experimental setting. The study was conducted during the 2017 French presidential election. Our sample was biased towards left-wing, and most participants reported voting for Mélenchon, Hamon or Macron. Our findings reveal that 1) Macron voters redistribute less than Hamon voters who themselves redistribute less than Mélenchon voters, 2) overachievers redistribute less than underachievers only among Mélenchon voters. This suggests that redistributive behavior is governed primarily by political opinions, and that influence by exogenous manipulation of symbolic success is not homogenous across left-wing political groups.
Introduction
Support for redistribution varies greatly across individuals within a society, and is a major component of their political positioning. Political parties put forward different redistributive policies in their respective agendas. Accordingly, understanding the determinants of support for redistribution has been a topic of major interest for researchers in economics and political sciences.
One can distinguish two main factors contributing to this support, namely self-interest and fairness considerations [1]. On the one hand, the individual attitude towards a more redistributive or a less redistributive system is shaped by the economic self-interest of the individual, i.e. the effect that the redistributive system has upon the individual's net income. Obviously, self-interest pushes wealthy individuals to support redistribution less than poor individuals. On the other hand, support for redistribution is also dependent on fairness considerations [2,3]. The redistributive policy chosen in a society reflects the beliefs about the determinants of income inequality and the main causes of poverty [1]. If wealth is primarily determined by a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 chance or by factors that are not under the control of individuals, then support for redistribution increases [4,5], in accordance with the accountability principle [6].
Surveys have shown that such beliefs about the determinants of inequality are not homogeneous across the population [e.g . 7]. Relatedly, support for redistributive policies varies across social groups defined by race, gender, age or socioeconomic status [8]. In the United States, whites are more averse to redistribution than blacks, even after controlling for individual characteristics such as income, education, etc. [e.g., 9,10]. Past upward mobility also decreases the support for redistribution [e.g., 10,11]. Some of these observations have been confirmed by experimental data. For instance, when participants are presented with mock news articles reporting high (vs. low) rates of social mobility, their tolerance for inequality increased [12]. Providing American adults with factual information about the rise of inequalities in the United States (vs. control information) increased their beliefs that economic inequalities are due to structural rather than individual factors and increased support for redistribution [13,14].
The present work follows up on a recent study by Deffains, Espinosa, and Thöni [15] who introduced an exogenous manipulation of status and found this manipulation to affect the redistributive behavior of participants, even when self-interest was not at stake. After a real effort task, each subject was randomly given a status of either 'overachiever' (performance above the median) or 'underachiever' (performance below the median). In a subsequent disinterested dictator game, participants were asked to reallocate money between two randomly chosen individuals in their session, from the richest to the poorest individual. It turned out that on average, overachievers redistribute less than underachievers. The information provided to the subjects about the determinants of task performance (i.e. luck or effort) was very vague, and the authors found that overachievers also emphasized more the role of effort in their outcome than underachievers. Noteworthy, Deffains et al. suggested that participants exhibit a self-serving bias [16] by adopting beliefs favorable to them. More precisely, successful individuals attribute their own success to effort and others' failure to a lack of effort, and in accordance with the accountability principle, they believe that no redistribution should take place. On the contrary, unsuccessful individuals attribute their own failure to bad luck and others' success to favorable circumstances, so they support redistribution towards the most disadvantaged.
Since beliefs about the role of luck can be affected both by exogenous manipulations [15] and political opinions [e.g., 17], one could anticipate that these two factors may interact in their influence on redistributive behavior. The goal of the present study is to evaluate this interaction. To do so, we tested an exogenous manipulation of status much like Deffains et al., while evaluating political opinions of participants, in the context of the French 2017 presidential election.
Is the effect of status uniform across the different voters? More precisely, we hypothesized that the exogenous manipulation of Deffains et al. would have an effect on redistributive behavior for subjects who hold moderate political views but no effect for subjects who hold extreme political views. The 2017 French presidential election provided a unique opportunity to compare extreme voters to moderates. Indeed, in 2017, most electors moved away from the candidates of the two major traditional parties (Hamon for the left-wing "Parti Socialiste" vs. Fillon for the right-wing party "Les Républicains"), who together gathered only 25% of the votes in the first round. Instead, electors supported the moderate candidate Macron (who eventually won the election) and the candidates of radical parties (Mélenchon for the far-left and Le Pen for the far-right). In other words, as was seen in other western democracies in the last decade, this election moved away from the traditional left-right opposition towards a center-extreme polarization.
Participants
A total of 649 unpaid participants completed the experiment (see "Description of our sample" below). Participants were essentially French people who responded to an announcement we posted on the Parisian Experimental Economics Laboratory (LEEP) portal, the Paris School of Economics portal, and the main social networks (Facebook and Twitter) inviting them to participate in an online survey on the presidential election. The website that hosted the experiment provided participants with all information about the research (the purpose and nature of the study, the voluntary nature of participation, and the possibility of withdrawing from the experiment at any time without any penalty or consequences). This research was reviewed and approved by Institutional Review Board-Ecole d'économie de Paris (approval number: IRB00010601). Written informed consent was obtained from all participants.
Procedure and measures
The experiment took place during the two weeks separating the two rounds of voting in the 2017 French presidential election (April 23-May 7). Participants first performed a computerized effort task without monetary reward linked to performance. This task was an Implicit Association Test (IAT) aimed at measuring their implicit attitude towards France (in its preliminary version, this study was intended to examine to what extent implicit and explicit attitudes predict participants' voting intention. Because of our skewed sample, however, we could not really evaluate properly the voting intention towards Marine Le Pen. Thus, the variable "voting intention for the second round" was not taken into account in the analysis. We then focused our analysis essentially on the determinants of redistributive behavior). Participants were asked to respond as fast and accurately as possible, and they were informed that their performance would be their mean reaction time over the task. After completion of the task, participants were given a (fake) feedback on their performance and were randomly assigned to the overachiever or underachiever groups (status). Then, they completed a disinterested dictator game in which they were asked to reallocate money between two fictive individuals, a rich and a poor individual. The game was scripted as follows: "Imagine that 100 euros were allocated to two participants A and B based on their performance on the previous speeded-response task. A received 80 euros based on her good performance, B received 20 euros based on her weak performance. If you could reallocate the 100 euros to A and B, how would you reallocate them?" Participants chose the amount of money (between 50 and 100) they would allocate to A, B receiving the rest. Next, participants responded to five self-report items on a 7 points Likert scale measuring fatalism ("to what extent do you relate you performance to 1: chance or 7: effort); their views on income inequality (1: egalitarian, 7: liberal); their attitudes towards economic patriotism ("Do you think that the French government should take more patriotic measures in the economic and the social domain?", 1: unfavorable, 7: favorable); their attitudes towards France ("Do you like France?" 1: positive, 7: negative); and their political position on the left-right continuum (1: extreme left, 7: extreme right). Then, participants reported their vote in the first round. Here, the response modalities included the 11 candidates involved plus the two options "I did not vote in the first round" and "I voted blank or null in the first round". Finally, participants reported their voting intention for the second round. At this stage, four response modalities were presented: "I will vote for Macron", "I will vote for Le Pen", "I will vote blank or null", and "I will not vote".
Description of our sample
Participants were 357 females and 292 males (mean age 33.62 years, SD = 15.44 years) (due to a technical error in the data collection, redistribution choices could only be analyzed for 626 participants). Regarding the socio-professional category, it turned out that managers and white-collar professions (34.36%) and students (41.60%) were overrepresented in our sample (both categories representing 75% of the sample). Regarding reported votes for the first round, our sample was clearly left-wing oriented, and voters for the two main right-wing candidates (Fillon and Le Pen) were underrepresented, whereas voters for Mélenchon, Hamon, and Macron were overrepresented (Fig 1). Therefore, in subsequent analyses we focus on participants who reported having voted for Mélenchon, Hamon, or Macron in the first round of the election (N = 506, 78% of the initial sample), given the lack of data for the other cases. Accordingly, in what follows the variable "First-round vote" is a categorical variable with 3 possible values, namely Mélenchon, Hamon, and Macron. Table 1 reports the age, gender, socio-economic category, and average status for the different group of voters in our sample.
As this selection resulted in a restriction of variance of the Political position variable (Fig 2), we considered the vote reported for the first round (hereafter First-round vote) as the only measure of political opinions in the analysis. Note that at the time of this experiment, Mélenchon, Hamon, and Macron were all considered left-wing candidates. Specifically, Mélenchon was considered as the main candidate of the radical left, Hamon was the official candidate of the major French left-wing party ("Parti Socialiste"), and Macron was associated both with a left-wing government under former president Hollande and with a social-liberal position with a pronounced liberal component. Table 2 reports age, gender, socio-economic category and First-round vote of overachievers and underachievers, showing that our random manipulation of status did not create an unwanted bias between overachievers and underachievers. Table 3 indicates the descriptive statistics (means and standard deviations) and the correlations between the different behavioral and personality measures. We note that almost all pairwise correlations measures were significant, except for the correlation between Fatalism and Political position. In particular, the share given to the richer player in the disinterested dictator game, which quantifies participants' attitude towards income inequality in a simple behavioral test, was correlated positively with the explicit attitude towards inequality (r = 0.24, p<0.001), which was most correlated with the stated political position (r = 0.60, p<0.001).
Effect of status on redistribution
The main point of interest of the analysis was how the amount of money (between 50 and 100) reallocated to the "richer agent" A in the disinterested dictator game was affected by the manipulation of Status and by the vote reported by participants. To assess this, we conducted a 2 (Firstround vote) × 2 (Status) ANOVA for independent samples on the share given to A as a dependent variable. This analysis yielded a significant main effect of In other words, we found the opposite pattern to that expected. Noteworthy, to evaluate whether our results were robust to changes in model specification, we conducted a new regression analysis, in which we added gender and age as covariates ( Table 5). This regression revealed that gender affected redistribution, with women redistributing more than men, replicating previous findings [e.g., 2,18]. This analysis also indicated a main effect of the First-round vote, and confirmed the interaction between Status and First-round vote. When examining the effect of Status separately for the 3 groups of voters, again adding gender and age as covariates, we found that redistributive behavior was affected by Status only for replicating our main finding. For completeness, we also report in S1 Appendix the results of a regression over all participants, including those who reported a different First-round vote. Finally, we evaluated the effect of Status on the fatalism measure, that is, the extent to which participants related their performance to chance or effort. We found no evidence that fatalism was affected by Status, F(1, 504) = 0.40, NS. In other words, we found no evidence for a selfserving bias in our participants, unlike Deffains et al. [15]. Note that the task used in the present study was an IAT, which was objectively less effortful than the counting task used by Deffains et al. Therefore, unlike their participants, participants in our study might have not believed that their performance could be impacted by the amount of effort deployed.
Discussion
The present study capitalized on a major political election (the 2017 French presidential election) in order to investigate how redistributive behavior is affected by political views and- [15]. We found an overall effect of First-round vote on redistribution such that the mean amount redistributed by the three main groups of voters in our sample was coherent with their respective positions on the left-right continuum. While participants who reported voting for Mélenchon (presumably the most leftists) were the most redistributive, Macron voters (the most liberal) were less redistributive, with Hamon voters falling in between. This finding confirms previous research reporting that preferences for redistribution and progressive taxation are coherent with vote choice: during the French 2012 presidential elections, strong supporters of redistribution voted for the left-wing candidate Hollande, while supporters of a flat rate tax voted for the right-wing candidate Sarkozy [11].
Condition N Age (SD) Gender (% women) First-round vote Mélenchon / Hamon / Macron
Our main result is that redistributive behavior is influenced by the exogenous manipulation of Status only in a subgroup of participants, specifically those who reported voting for Mélenchon. Therefore, our study partially replicated the findings of Deffains and colleagues [15]. This partial discrepancy between our study and that of Deffains might be due to incentives. In Deffains' study, participants' redistribution choices in the dictator game had real consequences on the payoffs of other players, whereas in our paradigm redistribution choices were only hypothetical. It is possible that incentives might have influenced our results independently of the desirability bias. Participants who reported voting for Hamon or Macron might be more sensitive to the presence of real life incentives than Mélenchon voters. Thus, incentivizing redistribution choices might be a necessary feature to obtain the effect of Status in Hamon or Macron voters, whereas Mélenchon voters would exhibit the effect of Status even in the absence of incentives. To evaluate these possibilities, further research would need to compare redistribution choices with and without incentives, for the different groups of voters.
It has been proposed [e.g., 19] that in the absence of incentives, participants might try to please the experimenter or conform to some social norms, e.g. by being generous in dictator games. Could this desirability bias explain our results or the difference between our study and Deffains' study? We believe that such an explanation is unlikely for several reasons. First, if a desirability bias was more present our study than in Deffains' study, then we should have observed more redistribution in our participants. However, in our experiment, participants redistributed less than in Deffains' study: our mean allocation to A was 60.08 while the corresponding value in Deffains' study would be 57.56. Second, and more generally, it is not clear to us why this desirability bias would lead to the specific interaction between Status and First-
PLOS ONE
round vote. Third, the instructions given to participants (see S2 Appendix) did not refer to the aim of our experiment, so participants were naïve about our hypothesis. Had they tried to guess our expectations, we would have found an effect of status on fatalism, which we did not observe either in the full sample (p = .52) nor in Mélenchon voters (p = .35), whose redistributive behavior was affected by status however. Finally, our experiment was conducted online and responses were anonymous, so participants have no pressure to please the experimenter or conform to social norms. Our study provided a nuanced picture of how redistributive behavior is jointly influenced by political views and the actual experience of individuals (here, the experience of success or failure in a simple decision task). In fact, we hypothesized that the exogenous manipulation of Status would have an effect on redistributive behavior for subjects who hold moderate political views (Hamon or Macron voters), but no effect for subjects who hold extreme political views (Mélenchon voters) who would be more likely to resist any experimental manipulation. Our findings revealed a significant interaction between Status and First-round vote but the pattern we found is the opposite of our expectation, as the only group of voters who were significantly affected by status were Mélenchon voters. Being the most left-wing voters in our sample, endorsing pronounced egalitarian views of society, these voters were supposed to be the most redistributive overall (which was actually observed) but also the least sensitive to the information regarding Status (which was the opposite of what we observed). That result is even more surprising since they reported the most egalitarian views on income (M = 2.07) compared to Hamon voters (M = 2.60) and Macron voters (M = 3.86), F(2, 503) = 69, p < 0.001. Explanations of this finding in terms of age, sex, or socio-economic status are unlikely in our dataset as Mélenchon voters and Hamon voters did not differ significantly on these variables (Table 1). In addition, we verified that our Status manipulation was truly random with respect to age, sex, or socio-economic status, which did not differ between overachievers and underachievers (Table 2).
Here, we suggest one explanation for our finding that Mélenchon voters were the most affected by Status manipulation. It is worth noting that these voters were also the most versatile at the end of the electoral campaign. Indeed, the dynamics of voting intentions as measured by the polls during the month preceding the first round revealed that voting intentions for Macron remained stable around 23%, those for Hamon collapsed from 12% to 6%, while those for Mélenchon jumped from 11% to 18%. As a candidate, Mélenchon also used a communication strategy based on social influence, with a strong presence on social media, and a populist attitude that emphasized the proximity to his base ("the people"). Individuals that are highly susceptible to social influence were then more likely to become Mélenchon voters, and in our study they were also more likely to be influenced by the Status manipulation. Thus, our result could be explained by susceptibility to social influence as a common cause of voting behavior and of the effect of the Status manipulation.
Before concluding, we must acknowledge several important limitations of our study. First, our sample was limited and was not representative of the French population. In particular, our data could not allow us to investigate the sensitivity of redistribution behavior to an experimental manipulation of success for right-wing voters. One main reason for this limitation was probably the recruitment procedure employed, which was based on social media, local networks, and word of mouth. As a result, according to their reports, our participants were mostly young, left-wing supporters, and closely related to the academic sector. In particular, we did not have enough right-wing or far-right supporters to perform meaningful analyses on this part of the political spectrum. By contrast, analyses of Twitter activity at that period revealed the emergence of three main communities in the French political environment, namely supporters of Macron and Hamon, supporters of Le Pen, and supporters of Mélenchon [20]. It is possible that redistributive behavior and its sensitivity to our manipulation would have been different for right-wing and far-right voters.
We note that although our results may not be representative of right-wing voters, one could envision that they would generalize for left-wing voters in some other countries. Indeed, in the last decade, western democracies have seen a polarization of opinions, with a crisis of the traditional parties and a rise of support for extreme populist parties. Examples of populist far-left parties are die Linke in Germany, Podemos in Spain, Siriza in Greece, La France Insoumise (Jean-Luc Mélenchon's party) in France. According to Rooduijn and Akkerman [21] these radical left parties have in common that "They do not focus on the 'proletariat', but glorify a more general category: the 'good people'" , contrary to former communist parties and that"they do not reject the system of liberal democracy as such, but only criticize the political and/ or economic elites within that system". Our results regarding the susceptibility of Mélenchon voters to status manipulation could thus be evaluated and replicated in other countries.
In addition, one could argue that a second limitation of the present work is related to the specific timing of the study, which took place during the French presidential election. This specific timing was chosen on purpose for two reasons. One reason was to benefit from the increased interest towards political topics at this time. The other reason was to probe voters' redistributive behavior at a time that constitutes an important step in the democratic process. However, we acknowledge that it is possible that voters' behavior in our study is unusual, because of this unusual timing. Voters may receive more information in the context of an election, and they may react more strongly to information delivered in this context. Whether our results would generalize to another context unrelated to a particular election thus remains an open empirical issue.
The third limitation relates to the possible discrepancy between actual votes and reported votes in our participants. Poll estimates (based on self-reported votes) and actual votes can indeed differ, as famously illustrated in the 2016 US presidential election, the 2016 "Brexit" referendum, or the 2002 French presidential election, amongst others. However, we note that in the case of the election under study here (2017 French presidential election) the last polls were very accurate. One reason for the discrepancy between self-reported votes and actual votes might be a social desirability bias by which right-wing or far-right votes are expressed less easily and therefore under-estimated in opinion polls [see e.g . 22]. Critically, polling institutes use adjustment procedures to take into this bias when producing their estimates, but we did not. Therefore, right-wing opinions/votes in our sample might have been under-estimated. In sum, although we followed the common practice in studies of voting behavior, and used the terms "Mélenchon voters", "Hamon voters" or "Macron voters", one should bear in mind that our data is about self-reported votes, which might have differed from actual votes.
To conclude, our findings revealed that self-reported far-left voters turned out to be the more sensitive to the exogenous manipulation of symbolic success. This leads to three remarks. Firstly, we need further research to better understand to what extent, and in which groups, redistributive behavior can be manipulated through exogenous manipulations of experience of success. In particular, further studies are needed that shall use a proper manipulation of symbolic success and representative samples in terms of political and socio-economic features. Secondly, our findings suggest that the various political groups process information differently, that is, they are not cognitively homogeneous [e.g., [23][24][25]. Finally, and more broadly, the fact that Mélenchon voters displayed a different behavior than Hamon and Macron voters extends recent findings showing that supporters of extreme political groups have different characteristics from those with more moderate views, although they are not necessarily different on socio-demographic variables such as age or level of education [e.g. 26]. For instance, Hanel, Zarzeczna, and Haddock [27] reported that extreme (left-wing or right-wing) supporters are usually more heterogeneous than moderate ones in terms of human values and politics-related variables such as attitudes toward immigrants and trust in institutions. In the current social and political context, we believe that understanding further these differences, especially whether some groups are more susceptible to influence than others, appears a worthwhile subject for future research. Using controlled experiments during political elections can be a useful tool in such research. | 5,891.2 | 2020-03-16T00:00:00.000 | [
"Psychology",
"Economics"
] |
The ExtremeX global climate model experiment: investigating thermodynamic and dynamic processes contributing to weather and climate extremes
. The mechanisms leading to the occurrence of extreme weather and climate events are varied and complex. They generally encompass a combination of dynamic and thermodynamic processes, as well as drivers external to the climate system, such as anthropogenic greenhouse gas emissions and land use change. Here we present the ExtremeX multi-model intercomparison experiment, which was designed to investigate the contribution of dynamic and thermodynamic processes to recent weather and climate extremes. The numerical experiments are performed with three Earth system models: CESM, MIROC, and EC-Earth. They include control experiments with interactive atmosphere and land surface conditions, as well as experiments wherein the atmospheric circulation, soil moisture, or both are constrained using observation-based data. The temporal evolution and magnitude of temperature anomalies during heatwaves are well represented in the experiments with a constrained atmosphere. However, the magnitude of mean climatological biases in temperature and precipitation are not greatly reduced in any of the constrained experiments due to persistent or newly introduced biases. This highlights the importance of error compensations and tuning in the standard model versions. To show one possible application, ExtremeX is used to identify the main drivers of heatwaves and warm spells. The results reveal that both atmospheric circulation patterns and soil moisture conditions substantially contribute to the occurrence of these events. Soil moisture effects are particularly important in the tropics, the monsoon areas, and the Great Plains of the United States, whereas atmospheric circulation effects are major drivers in other midlatitude and high-latitude regions.
Introduction
Weather and climate extremes strongly affect society, human health, and ecosystems; therefore, they need to be accurately simulated in numerical weather predictions and climate projections (e.g., Seneviratne et al., 2012). However, substantial biases remain in their representation in weather and climate models (e.g., Angélil et al., 2016;Maraun et al., 2017;Merz et al., 2020;Moon et al., 2018;Wehrli et al., 2018). For climate models used in the fifth phase of the Coupled Model Intercomparison Project (CMIP5), consistent biases can be found across models in the mean climatology of the lower atmosphere and land surface, for example for temperature and precipitation (Flato et al., 2013;Mueller and Seneviratne, 2014). These biases originate to some extent from the representation of the underlying processes driving evapotranspiration at the land surface , extratropical cyclones (Zappa et al., 2013), or the simulated sea ice and sea surface temperatures (SSTs) (Turner et al., 2013;C. Wang et al., 2014). The difficulties in representing the mean climatology translate to biases in the representation of extreme events and impede their projection into the future. Therefore, an important question in the investigation of changes in extremes in a warming climate is the identification of the respective contribution of thermodynamic (thermal structure, water vapor, precipitation, land-atmosphere interactions) and dynamic (large-scale circulation) processes to their changes in occurrence and intensity (e.g., Pfahl et al., 2017;Shepherd, 2014;Trenberth et al., 2015;Wehrli et al., 2018Wehrli et al., , 2019Zappa et al., 2015). Better isolating these contributions would help inform further model development as well as research on the attribution and projection of changes in weather and climate extremes (Vautard et al., 2016).
In the past, the number of record-breaking hot extremes has increased and it is expected to increase further if anthropogenic emissions continue to rise (e.g., Rahmstorf and Coumou, 2011;Shiogama et al., 2016;Power and Delage, 2019). Changes in the frequency, intensity, and duration of various types of extremes can be seen in different regions of the world (e.g., Seneviratne et al., 2012). The most extreme events show the highest sensitivity to climate change Sillmann et al., 2013) and new, not yet seen extreme intensities are anticipated. These changes are related to a number of physical processes, their interactions with each other, and their response to climate change.
The processes driving a specific extreme event and their relative importance can be examined in observation-based studies using linear regression (e.g., Arblaster et al., 2014;Wang et al., 2016;Dirmeyer et al., 2021) or forecast sensitivity experiments (e.g., Hope et al., 2016;Petch et al., 2020). In climate model simulations the role of the drivers can be studied by constraining the processes in the ocean, the atmosphere, or at the land surface, which enables the study of drivers in isolation (e.g., Fischer et al., 2007;Hauser et al., 2016;Jaeger and Seneviratne, 2011). Recent work using these methods has shown that both soil moisture and atmospheric circulation play an important role in driving heatwaves (e.g., Dirmeyer et al., 2021;Petch et al., 2020;Suarez-Gutierrez et al., 2020). In this study, we present the new "ExtremeX" multi-model experiment in which, among other possible applications, the contribution of thermodynamic and dynamic processes to recent extreme events can be investigated in three Earth system models (ESMs). The models used in ExtremeX are the Community Earth System Model version 1.2 (CESM1.2), the Model for Interdisciplinary Research on Climate version 5 (MIROC5), and the European Community Earth System Model version 3 (EC-Earth3). The purpose of this study is (1) to introduce the ExtremeX experiments and (2) to apply the introduced framework to study the drivers of heatwaves as well as to identify regions where warm spells are generally dominated by processes at the land surface or by atmospheric circulation.
The ExtremeX experiment builds on the study of Wehrli et al. (2019). This previous study introduced a framework to disentangle the role of the ocean, atmospheric circulation, soil moisture conditions, and recent climate change in extreme events. Therefore, experiments were carried out with prescribed SSTs and sea ice as well as additionally constraining the land surface and/or the atmosphere using observationbased conditions. Atmospheric variability was constrained using a grid-point nudging approach (Jeuken et al., 1996) to relax the horizontal winds toward reanalysis. This nudging approach has been verified for CESM1.2 (Wehrli et al., 2018), also analyzing biases for the nudged vs. non-nudged model climatologies, showing only minor changes to total biases. In Wehrli et al. (2019), soil moisture in the upper soil layers was constrained to control the role of land surface feedback in extreme events. The experiments with atmospheric nudging and/or prescribed soil moisture were then used to study recent heatwaves, highlighting the combined role of dynamics (i.e., atmospheric circulation) and thermodynamics (in that case referring to land-atmosphere interactions) in driving these events.
Here, building further upon the studies with CESM1.2, we present the results of the same experiments but carried out for three ESMs that were each contributed by one of the collaborating modeling groups. The models do not show high interdependence and are thus an optimal selection for a small ensemble (Brunner et al., 2020;Knutti et al., 2013). EC-Earth3 is the most recent of the three models and hence has the highest horizontal and vertical resolution of the three models used here. Since EC-Earth3 was used for CMIP6, it also has the most recent forcing data. Among CMIP5 and CMIP6 models, EC-Earth3 is one of the bestperforming models with regard to the representation of atmospheric circulation in the Northern Hemisphere middle to high latitudes (Brands, 2022;Fernandez-Granja et al., 2021). MIROC5 has contributed to CMIP5 as well as the 1.5 • C versus 2.0 • C global warming experiments (e.g., Hirsch et al., 2018;Mitchell et al., 2017;Shiogama et al., 2019).
The presented work expands on previous work in Wehrli et al. (2018Wehrli et al. ( , 2019 by quantifying biases of the near-surface climatology for different constraining experiments and three models. Near-surface temperature anomalies during four heatwaves evaluated in Wehrli et al. (2019) are compared in the three ESMs of ExtremeX. Additionally, warm spells are examined grid-point-wise to identify the role of the atmosphere and soil moisture for different spell lengths. The research questions asked are the following.
-Are model deficiencies in atmospheric circulation and the land surface contributing to climatological model biases? Are model biases reduced when these processes are constrained?
-Are the ExtremeX ESMs able to reproduce temperature anomalies of past heatwaves when constrained with observation-based data?
-What is the role of the physical climate drivers and climate change for four observed heatwaves?
-What is the relative contribution of the land surface and the atmospheric circulation to warm spells globally, and how do the contributions vary regionally?
The ExtremeX experiments could also be used to examine types of events other than heatwaves. They are suitable for more in-depth analysis of model biases by examining, for example, the atmospheric moisture and heat budgets or the surface energy balance. In Luo et al. (2022) the ExtremeX experiments are used to study the origin of model biases in the anomaly of upper-level winds and near-surface climatology during certain summertime Rossby wave events in the Northern Hemisphere by constructing composites. Other applications have not been tested so far and will be left to explore in future studies.
In Sect. 2, we introduce the experimental design and methods used. In Sect. 3 we describe the three models composing ExtremeX, and in Sect. 4 we evaluate the constraining of the atmospheric circulation and land surface. Then the framework is applied to investigate the contribution of atmospheric circulation patterns vs. soil moisture conditions for selected heatwaves between 2010 and 2015 (Sect. 5.1) and for the occurrence of warm spells (Sect. 5.2). The conclusions and outlook are given in Sect. 6.
Design of the model intercomparison project
The ExtremeX experiment was conducted in collaboration with three modeling groups running three ESMs, two of which (CESM and MIROC) were run in the CMIP5 configuration and one (EC-Earth) in the CMIP6 configuration. Five experiments were designed to unravel the source of model biases and to separate the contribution of atmospheric circulation and land surface conditions to extreme events. This is done by constraining the ocean, land surface, and atmosphere using observation-based data. The experiments and methods used are described in the following and overall follow the setup of Wehrli et al. (2019).
Experimental design
Five experiments are included in ExtremeX. All experiments prescribe SSTs, sea ice cover fraction, and land use (i.e., vegetation) but differ in the simulation of the atmospheric circulation and soil moisture that are either interactive or constrained. See Table 1 for an overview of the experiments. The control experiment (AI_SI: atmosphere interactive, soil interactive) uses the standard setup wherein the atmosphere and soil moisture are both interactive. In the soil moisture experiment (AI_SF: atmosphere interactive, soil forced) the atmosphere is interactive and soil moisture is constrained, and vice versa for the nudging experiment (AF_SI: atmosphere forced, soil interactive), with the latter only used to validate the atmospheric nudging in this study. Finally, in the fully constrained (AF_SF: atmosphere forced, soil forced) and soil moisture climatology (AF_SC: atmosphere forced, soil climatological) experiments both components are constrained by prescribing soil moisture varying over time or prescribing soil moisture using climatological soil moisture (but including the seasonal cycle), respectively.
For each experiment, one or five simulations are initialized in 1979. The ensembles for AI_SI and AI_SF are enlarged from 5 to 100 members for 2009-2015/16. The number of simulation runs for the other experiments (AF_SI, AF_SC and AF_SF) is constant. The small ensembles of five members for AF_SI from MIROC and CESM were used to confirm that variability between the members is highly reduced for winds at the surface by nudging the higher model levels, even though in the ExtremeX setup, winds are unconstrained in the lowest model levels (Sect. 2.2.1, see also Sect. 4 and Fig. A1). The analysis of the simulations starts in 1982 because the first 3 simulation years are regarded as spin-up. Likewise, 2009 is regarded as the spin-up for the additional members to diverge, and therefore analysis of the years 2010-2015/16 is recommended, for example, for event-based analysis of extremes, as is done here.
Methods
The ocean, land surface, and atmosphere are constrained to follow observation-based data, either time-varying or climatological. All experiments are conducted with SSTs and sea ice concentration prescribed. In the following, the methods to constrain the atmospheric circulation and the land surface are described. We use the term "constraining" to generally refer to the applied method of nudging the atmospheric large-scale circulation and prescribing soil moisture.
Atmospheric circulation nudging
Nudging the atmospheric circulation in a climate model strongly reduces the dynamic variability in a simulation. For ExtremeX, all modeling groups use a grid-point nudging approach (Jeuken et al., 1996) to constrain the atmospheric large-scale circulation by adding a tendency term to the prognostic equations of the zonal and the meridional winds: The term on the right-hand side of Eq. (1) is computed from the difference between a reference data set U target and the computed model value (U ). It is weighted by a relaxation timescale τ (Kooperman et al., 2012), which controls the strength of the constraint. A very short relaxation timescale means a strong constraint of the dynamics, while a long relaxation timescale allows larger deviations from the reference. The length of the relaxation timescale is chosen such that it does not dominate model physics but guarantees good agreement with the reference data set. All three models use a 6 h relaxation timescale following Kooperman et al. (2012). The reference data are given by 6-hourly wind fields from the ERA-Interim reanalysis (Dee et al., 2011), which are linearly interpolated to the model time step and regridded to the model resolution. At each model time step, the nudging term is used to update the horizontal wind variables. Additionally, a height-dependent weighting K(z) is introduced, enabling a free evolution of the boundary layer, while the nudging strength increases with height and controls the large-scale circulation (mostly above 700 hPa). The exact implementa-tion of the height-dependent profile was chosen by the groups to individually fit their respective models (Fig. 1).
Prescribing soil moisture
Prescribing soil moisture in a climate model enables the isolation of the main effects of land surface conditions and feedback on climate. In an experiment with an interactive atmosphere but prescribed soil moisture (AI_SF), the circulation will adapt to the given land surface conditions, but there is no feedback in the opposite direction. Hence, the land is decoupled from the atmosphere. In the present setup, soil moisture is prescribed to reflect observed conditions, similar to the SST-driven and nudged circulation simulations. However, directly prescribing soil moisture from observations or observation-based products (e.g., reanalyses) in a climate model can lead to inconsistencies due to differences in model climatologies and soil parameterizations (Koster et al., 2009). Instead, soil moisture reconstructions are generated by driving the land surface module of the respective ESM with meteorological fields from reanalysis data (e.g., air temperature, humidity, wind, precipitation, radiation). The generated daily or 6-hourly soil moisture constitutes the modelspecific soil moisture reconstruction. This allows soil moisture in the prescription experiments to follow observed-based soil moisture states, while still being in balance with the model climatologies and land model parameterizations. The method to constrain the land model with soil moisture reconstructions is inspired by the approach developed by Hauser et al. (2017b) for CESM. Not all models followed the approach in every aspect, as it was adapted to the respective model and tools available (see details in the individual model descriptions in Sect. 3). The common idea is that the model hydrology is active, but at the end of each time step the modeled soil moisture is replaced with the target soil moisture from the reconstruction.
Reference data sets
The atmospheric nudging is validated using winds from ERA-Interim (Dee et al., 2011) as a reference. Monthly nearsurface temperature is retrieved at 0.5 • resolution from the Climatic Research Unit, University of East Anglia (Harris . As a reference for evapotranspiration the long, merged synthesis product (based on all data set categories) from the LandFlux-Eval data set at 1 • horizontal resolution is used (Mueller et al., 2013). Total cloud cover information was retrieved from the International Satellite Cloud Climatology Project D1 (ISCCP-D1; Rossow and Schiffer, 1999) at 2.5 • resolution. All reference data sets are regridded to the original resolution of each model for the comparison.
Disentangling approach
In Sect. 5, we disentangle the contribution of the physical drivers (atmospheric circulation, land surface conditions, and the ocean state) and of recent climate change to temperature extremes. The method is briefly explained below, and readers can refer to Wehrli et al. (2019) for more details. The disentangling method only takes anomalies of a variable with respect to the experiment climatology into account. In this study, the disentangling is applied to anomalies of daily mean temperature and daily maximum temperature (TX). The different contributions are assumed to be additive, and hence differences between the experiments are computed as shown in Fig. 2. The fully constrained experiment (AF_SF) is taken as the "model truth" because it is as close to observations as the model can get. Therefore, AF_SF is set to 100 % of the event and the disentangling method determines what fraction of the event anomaly is explained by the other experiments. First, the contribution of recent climate change is computed as the anomaly of the years 2010-2015/16 during the same time of the year the event took place (but excluding the event year) in AI_SI with respect to its 1982-2008 climatology.
Note that a small fraction of the 2010-2015/16 anomalies related to the prescribed SST conditions could also be due to decadal variability. The anomaly of AI_SI at a specific point in time (e.g., during an extreme event) compared to its 1982-2008 climatology is a combination of recent climate change (i.e., warming since the climatology period, which is estimated in the first step using the anomaly of non-event years from AI_SI), the observed SST pattern, and natural variability. The natural variability is controlled by using a large ensemble of 100 members. Hence, following the additive assumption, the remaining anomaly of AI_SI corresponds to the contribution of the ocean. To estimate the contribution of the land surface state (i.e., soil moisture) and the atmospheric circulation, the disentangling method by Wehrli et al. (2019) follows two approaches. The first approach (A) is to quantify the contribution of soil moisture as the anomaly in AI_SF minus the anomaly in AI_SI and the contribution of the atmospheric circulation as the anomaly in AF_SF minus the anomaly in AI_SF. The second approach (B) is to quantify the contribution of soil moisture as the anomaly of AF_SF minus the anomaly of AF_SC and the contribution of the atmospheric circulation as the anomaly of AF_SC minus the anomaly of AI_SI. Wehrli et al. (2019) show that the two approaches give similar results, which is confirmed here (Sect. 5). Hence, in this study the results from approaches A and B are averaged, giving equal weight to both. The results for the single approaches are also documented in the Appendix. The individual data analysis carried out for four recent heatwaves in Sect. 5.1 and warm spells in Sect. 5.2 is described in the respective sections.
Community Earth System Model (CESM)
The Community Earth System Model is run in version 1.2 (CESM1.2; Hurrell et al., 2013). Coupled are the Community Atmosphere Model version 5.3 (CAM5; Neale et al., For the experiments, the anomaly during the event (or during the same time of the year but for non-event years) is considered relative to the 1982-2008 climatology (turquoise rectangles). The magnitude of the anomaly in AF_SF is taken as the total effect, and the anomaly of AI_SI during non-event years is taken as the recent climate change (CC) effect (grey boxes). Further effects are disentangled by computing differences between the experiments along the orange arrows as indicated by the minus sign (orange rectangles). Two approaches (black letters a and b) are followed, differing in how soil moisture (SM) contributions are separated from atmospheric circulation contributions. 2012) and the Community Land Model version 4 (CLM4; Lawrence et al., 2011;Oleson et al., 2010). Both are run on a horizontal resolution of 0.9 • × 1.25 • . The atmosphere model, CAM5, uses hybrid sigma-pressure coordinates and has 30 vertical layers. CLM4 has 15 soil layers, from which active hydrology is computed in the upper 10 layers (down to 3.8 m).
Natural forcings as well as forcings from greenhouse gases (GHGs), aerosols, and land use change follow the setup in Wehrli et al. (2019). Major GHGs (CO 2 , N 2 O, and CH 4 ) are prescribed to observed global values, whereas other anthropogenic forcings follow RCP8.5 after 2005. A merged product of the Hadley Centre sea ice and SST data set version 1 (HadISST1) and the NOAA weekly optimum interpolation SST analysis version 2 (OI2) was used to prescribe transient monthly observations of SSTs and sea ice concentrations (Hurrell et al., 2008). For prescribing soil moisture, the prescription method developed by Hauser et al. (2017b) was used, which replaces the model-calculated soil moisture value by a target value at the end of each model time step. The prescribed target soil moisture is computed by running CLM4 offline driven by reanalysis data from ERA-Interim. Below 0 • C soil temperature in the model, soil moisture is computed interactively, whereas at warmer temperatures the soil liquid water is prescribed to the total (liquid + ice) soil moisture of the target data set. This prevents artificial creation of ice, which can produce unrealistic heat fluxes (Hauser et al., 2017b). Takata et al., 2003), which predicts the temperature and water in six soil layers down to a depth of 14 m, one canopy layer, and three snow layers. The SST and sea ice concentration data from HadISST1 were used (Rayner et al., 2003). See Shiogama et al. (2013) for the setup of natural (solar irradiance and volcanic activity) and anthropogenic (GHGs, sulfate, black and organic carbon aerosols, ozone, land use land cover change) forcing agents. The anthropogenic forcing agents after 2005 were based on RCP4.5.
Model for
For prescribing soil moisture in MIROC5, the model replaces the calculated soil moisture with a target value at the beginning of each model time step. The prescribed target soil moisture was simulated by the land scheme in offline mode driven by atmospheric fields from ERA-Interim. To remove negative values of liquid soil moisture content, the replacing procedure also limits ice content so that the total soil moisture does not exceed the prescribed one.
European Community Earth System Model version 3 (EC-Earth3)
The (N128). All experiments were produced with the SSP3-7.0 CMIP6 scenario and prescribed monthly ocean fields from the merged HadISST1 and NOAA OI2 data set with pre-applied SST and sea ice consistency checks (Hurrell et al., 2008). More information regarding the model can be obtained from https://www.ecmwf.int/ (last access: 26 July 2022), http://www.ec-earth.org/ (last access: 26 July 2022) and Döscher et al. (2022) for greenhouse gases, aerosols, and land use prescribed in EC-Earth3. For prescribing soil moisture in EC-Earth, the simulated soil moisture is replaced by the respective target values for each of the four soil layers at the end of each model time step. As target values, 6-hourly soil moisture data from ERA-Interim/Land (Balsamo et al., 2015) were used. ERA-Interim/Land uses the H-TESSEL land surface model (Balsamo et al., 2009), which has four soil layers covering 0-7, 7-28, 28-100, and 100-255 cm of the soil.
Validation of the constrained atmosphere and soil moisture experiments
In the setup used for this study, the atmospheric nudging is stronger in the upper atmosphere and close to zero at the surface (Fig. 1). Hence, it can be expected that the variability between ensemble members is strongly reduced, especially at higher atmospheric levels, and that the simulated winds closely follow the winds in ERA-Interim with increasing nudging strength. This is confirmed by evaluating the wind fields of the three models at the grid-point level. For MIROC (and less visibly also for CESM; Fig. A1) there is some variability in wind speed and direction between the five members of the AF_SI ensemble at near-surface levels, whereas at 500 hPa and above all members are nearly identical with an almost exact representation of the reference wind speed and direction. For EC-Earth, only one simulation of AF_SI was run. Although the nudging profile of EC-Earth is shifted to higher altitudes compared to the other two models (Fig. 1), the horizontal winds represent the reference equally well for the selected pressure levels ( Fig. A1; see also Fig. A2). For all models, nudging the large-scale atmospheric flow also has a strong control on near-surface winds. In the following, the climatological model biases are compared between the experiments. First, regional and global root mean square errors (RMSEs) are examined in Sect. 4.1. Then, the sign and location of seasonal biases are discussed in Sect. 4.2.
Global and regional biases in surface temperature and precipitation
Intuitively, one might expect that biases with respect to observations are reduced when soil moisture, atmospheric circulation, or both are constrained. Near-surface temperature, for example, is driven by radiative processes and surface turbulent fluxes. The incoming radiation is related to the abundance, thickness, and composition of clouds, which are parameterized in the models and driven by weather systems (e.g., Bony et al., 2015). Surface turbulent fluxes are driven by soil moisture availability, which affects the partitioning in sensible and latent heat fluxes, especially in transitional climate regimes (e.g., Miralles et al., 2019;Santanello et al., 2018;Seneviratne et al., 2010). Similarly, precipitation is affected by both soil moisture and atmospheric circulation. The location and intensity of rainfall and snow are driven by the passage of low-and high-pressure systems as well as by soil moisture conditions (e.g., van der Ent et al., 2010;Guillod et al., 2015;Moon et al., 2019). In the following, we quantify the near-surface temperature and precipitation biases in the experiments. Therefore, the model climatologies are compared to a reference by computing the root mean square errors (RMSEs) for the seasonal and annual averages of the 1982-2008 climatology. The mean over all simulation runs was computed when multiple members were available. Only land grid points are considered (except Antarctica).
In the global and annual average, the RMSEs in the experiments with a nudged atmosphere and/or prescribed soil moisture are nearly equal to the RMSE in AI_SI (Fig. 3). For EC-Earth the temperature bias increases when soil moisture is prescribed in AI_SF and AF_SF (Fig. 3a). For MIROC a large precipitation bias is introduced when nudging the atmospheric circulation in AF_SI (Fig. 3b). However, the bias is reduced in the fully constrained experiment (AF_SF). Temperature biases are largest during the December-January-February (DJF) season in CESM and EC-Earth. In MIROC they are largest during the June-July-August (JJA) season, except for AF_SF wherein biases are largest in DJF. Precipitation biases are largest in JJA for all models. For the annual regional averages, large temperature biases can be found in regions with sparse observational coverage such as GIC (Greenland-Iceland), NEN (northeastern North America), and RAR (Russian Arctic; Iturbide et al., 2020, for an overview of the AR6 reference regions see Fig. A3). Large temperature biases can also be found in small regions, which are only represented by a few grid points in the models used, such as NWS (northwestern South America), SWS (southwestern South America), and NZ (New Zealand). Additionally, the complex topography of the Himalayas (TIB) and Andes (SWS and NWS) is also a likely source of temperature biases. Regional precipitation biases are generally larger in wet regions such as the Amazonian regions SAM (South American monsoon) and NSA (northern South America), the regions in central Africa, namely CAF (central Africa), SEAF (southeastern Africa), and NEAF (northeastern Africa), and the monsoon regions SEA (southeastern Asia) and SAS (southern Asia). Precipitation biases are also larger for small regions.
In general, the experiments with a nudged atmosphere and/or prescribed soil moisture do not show a significant reduction of the surface climatology RMSEs in any of the models or for any region of the world. In many cases, constraining the components of the model leads to even larger biases. This contradicts the initial intuitive assumption and suggests that no sole component of the model is responsible for the biases. Hence, the climatological biases that are discussed here cannot be corrected by improving the representation of the model components in isolation.
Location and sign of seasonal biases
In the Northern Hemisphere midlatitudes, the control simulations (AI_SI) for the CESM and MIROC models are systematically too hot (Fig. 4a) and in some regions also too dry (Fig. 5a) during boreal summer (JJA). North America, central Asia, and eastern Europe show the largest biases. This agrees with the findings by Wehrli et al. (2018) for CESM. For EC-Earth, only central Asia and parts of the midwestern United States (US) are too hot and dry, while other regions are mostly too cold and wet. In all models, the regions where JJA temperature is overestimated coincide with regions where cloud coverage is underestimated (Fig. A4). Especially in MIROC, a large negative cloud cover bias can be found for the Northern Hemisphere midlatitudes. In MIROC, large areas in central and eastern North America, eastern Europe, and Asia show a negative evapotranspiration bias as well (Fig. A5). In CESM and EC-Earth evapotranspiration is underestimated in central Asia and parts of western North America. This indicates that the warm temperature biases are related to underestimated evapotranspiration and cloud coverage in JJA.
Nudging the atmosphere in AF_SI reduces some of the JJA temperature biases in the Northern Hemisphere in CESM and MIROC (Fig. 4a). For MIROC, large precipitation biases are introduced with nudging (Fig. 5a). Some of the midlati-tude regions change from too little precipitation to too much and vice versa. Hence, correcting the atmospheric circulation seems to lead to an overcompensation of biases in MIROC. In EC-Earth, nudging does not strongly affect the JJA temperature and precipitation climatology. Constraining the soil moisture in AI_SF leads to a reduction of the hot and dry bias in the Northern Hemisphere midlatitudes in MIROC. For CESM, the changes are smaller but there is a reduction of the hot and dry bias in Europe and the US Midwest. For EC-Earth, constraining the soil moisture, however, introduces or increases the cold and dry bias nearly everywhere. The fully constrained experiment (AF_SF) is the experiment with the smallest temperature and precipitation biases for CESM and MIROC, suggesting that at least for these models, a correct representation of atmospheric circulation patterns and soil moisture conditions can improve the models' overall performance. Nonetheless, for EC-Earth AF_SF shows larger climatological temperature biases than AI_SI (Fig. 4), and for precipitation, the biases remain of similar magnitude (Fig. 5). This indicates that even if some of the temperature and precipitation biases are reduced by constraining the atmospheric circulation and soil moisture in the model using observationbased data, other biases can be enhanced or change sign, resulting in a worse overall performance.
The results for the austral summer (DJF) confirm the findings for the Northern Hemisphere in JJA. For EC-Earth, AI_SF introduces and increases a cold (Fig. 4b) and wet bias (Fig. 5b) in the entire Southern Hemisphere. MIROC shows large precipitation biases for AF_SI, which in certain places are of the opposite sign but of similar or larger magnitude than in AI_SI. For CESM there are only very small differences between the experiments.
All models show substantial biases in total cloud cover fraction (Fig. A4) and evapotranspiration (Fig. A5), which match the biases found in temperature (e.g., negative cloud cover bias and negative evapotranspiration bias for areas that are too warm in the model) and sometimes also for precipitation. Both variables rely heavily on parameterizations. An alternative explanation for why biases are still prevalent after correcting the atmospheric flow and the land surface is that ESMs are tuned to match, e.g., the radiation balance at the top of the atmosphere and global mean values of variables like near-surface temperature, clouds, or sea ice (Mauritsen et al., 2012). When single components of the models are constrained using more realistic fields from observational products, the model components are no longer in balance with each other. This can result in an overcompensation of biases, as can be seen, for example, for precipitation in MIROC (Fig. 5). It is known that MIROC5 shows biases in the North Atlantic storm-track activity compared to ERA-Interim (e.g., Brands, 2022;Zappa et al., 2013). Correcting this circulation bias in AF_SI leads to even larger precipitation biases, which are only reduced when the soil moisture is constrained as well. The seasonal precipitation climatology from the models was also compared to MSWEP (Fig. A6), confirming the above findings. The results for DJF for the Southern Hemisphere are very similar to the biases using GPCC-FD as a reference. For JJA for the Northern Hemisphere, the models are more on the dry side when compared to MSWEP than if GPCC-FD is used as a reference. The hemisphereaveraged RMSEs are in both cases very similar. Overall, the biases are not substantially reduced in any of the models when nudging the atmospheric circulation and/or prescribing soil moisture using observation-driven reconstructions. This shows that model biases are not primarily caused by the misrepresentation of large-scale atmospheric motion or soil moisture conditions. Instead, the biases might be caused by other processes such as radiation and cloud processes, convection, precipitation, land surface properties (e.g., land cover and land use, topography), and processes unresolved due to the model grid scale such as mesoscale circulations and sub-grid surface heterogeneity. The results also suggest that the models are tuned to have low temperature and precipitation biases in the interactive setup (with prescribed ocean).
Disentangling the contribution of physical drivers and climate change to recent heatwaves
In the previous section it was shown that large biases remain in the model climatology even if observation-based conditions are used to constrain the models. Nevertheless, nudging the large-scale atmospheric circulation and prescribing the soil moisture results in simulations that can accurately reproduce the temporal evolution and relative magnitude of events. This was shown by Wehrli et al. (2019) using CESM for five recent heatwaves, considering anomalies of TX. Hence, the presented set of experiments can be used to analyze extreme events if anomalies are used or a more elaborate biascorrection method is performed (e.g., Wehrli et al., 2020). In Sect. 5.1, four recent heatwaves are examined, and it is shown that all three models accurately reproduce TX anoma- lies during and prior to heatwaves when constrained with observation-based data. Then, the contribution of physical drivers and climate change is disentangled for the four heatwaves. The events chosen are the 2010 Russian heatwave, the 2015 European heatwave, the 2012 heatwave in the US (also known as the Midwest heatwave), and the Australian heatwave of 2012/13. All events had drastic consequences for the local communities and economies due to, e.g., damage to agriculture, wildfires, and increased mortality. They were investigated in numerous previous studies including Wehrli et al. (2019). The events were chosen due to their severity and impact as well as to ensure consistency and comparability with Wehrli et al. (2019). In Sect. 5.2 warm spells (during the warm season) are analyzed grid-point-wise to identify the relative contribution of atmospheric circulation vs. soil moisture.
Driving processes of recent heatwaves
For four heatwaves, the relative contributions of atmospheric circulation, soil moisture, ocean conditions, and climate change (since 1982-2008) to TX anomalies are determined. As in Wehrli et al. (2019), spatial averages are taken over the event region, and daily mean near-surface temperatures from ERA-Interim are used to identify the events. The hottest 15 d period defines the event, and TX during this period is examined. Ocean grid points are excluded from the analysis. TX anomalies (with respect to 1982-2008) for the heatwaves and previous months are shown in Fig. 6. Overall, the fully constrained experiments (AF_SF) from all models agree well with temperature anomalies from reanalysis and among each other. Small deviations are found during the 2010 Russian heatwave for MIROC, which underestimates the temperature anomaly (Fig. 6a), and during the 2012 US heatwave for which CESM overestimates the temperature anomaly (Fig. 6c). TX anomalies for the same events from the nudging experiment (AF_SI) already compare well to ERA-Interim (Fig. A7), capturing the temporal evolution of TX anomalies similarly well as AF_SF. Correlation of near-surface temperature anomalies between the experiments with atmospheric nudging and with ERA-Interim is very high. This confirms that observed surface anomalies can be accurately reproduced when nudging the atmospheric circulation. For MIROC and CESM, which both have five AF_SI simulation runs, it can also be seen that nudging the atmosphere strongly constrains variability between ensemble members (Fig. A7). In the following, the four heatwaves are analyzed separately.
The Russian heatwave of 2010 was characterized by extremely high temperatures over a long time period from late June to mid-August. A persistent blocking anticyclone was associated with the heatwave (e.g., Barriopedro et al., 2011;Trenberth and Fasullo, 2012). Due to early snowmelt in the year and a deficit of precipitation, water scarcity was exacerbating the heatwave (Barriopedro et al., 2011). For the analysis of the Russian heatwave of 2010, regional averages are computed over 50 to 60 • N and 35 to 55 • E (see Fig. A3 for the region outline). The hottest 15 d period lasts from 27 July to 10 August 2010 (Fig. 6a). TX anomalies from ERA-Interim exceed 10 • C during the event, which is captured well by CESM and EC-Earth. In MIROC the anomaly is somewhat weaker. In general, the three models agree on the contributions of the drivers to the TX anomaly (Fig. 7a). They estimate that recent climate change explains around 7 %-10 % of the event anomaly. CESM is the only model which shows a negative ocean contribution of around −7 %, whereas the role of the ocean is negligible in the other models. This result is supported by the studies by Dole et al. (2011) using initialized forecasts andHauser et al. (2016) using an ESM, which both found a weak role of the ocean in explaining the Russian heatwave of 2010. In contrast, observation-based studies like Martius et al. (2012) and Trenberth and Fasullo (2012) linked the driving atmospheric circulation conditions to SST anomalies, identifying the ocean as an important driver. In all three models the event is mostly driven by atmospheric circulation and soil moisture, which agrees with existing literature. The ratio of the circulation contribution to the soil moisture contribution is 70 : 30 for EC-Earth and CESM and 80 : 20 for MIROC. Assessing the two approaches to disentangle atmospheric circulation from soil moisture contributions separately gives very similar results (Fig. A8a).
The European heatwave of 2015 consisted of four hot spells that were intensified by drought conditions through land-atmosphere feedbacks (Dong et al., 2016;Hauser et al., 2017a;Orth et al., 2016). The event is analyzed over the western and central Europe (WCE) AR6 reference region (Iturbide et al., 2020, see Fig. A3; same as central Europe -CEU -from previous assessment reports). The hottest 15 d period is from 2 to 16 August 2015 (Fig. 6b). The magnitude of the TX anomaly before and during the heatwave is represented well by all models. However, there are differences in the attribution of the drivers. In EC-Earth, climate change contributes around 12 % to the event anomaly, whereas in CESM climate change is estimated to contribute 22 % and in MIROC 34 % (Fig. 7b). EC-Earth and CESM agree that there is a small negative contribution by the ocean of −9 % and −7 %, respectively. In MIROC the ocean is negligible for the event anomaly (around −1 %). This is in contrast to the modeling study by Dong et al. (2016) and the observation-based study by Duchez et al. (2016), finding that the SST patterns set important preconditions for the 2015 European heatwave. The three ExtremeX models agree on the magnitude of the atmospheric circulation contribution, which is around half of the total event anomaly. However, the role of soil moisture depends on how much of the event anomaly is attributed to recent climate change. EC-Earth estimates the highest relative soil moisture contribution with a ratio of 60 : 40 between circulation and soil moisture. The ratio is 70 : 30 for CESM and 75 : 25 for MIROC. The results for the two disentangling approaches differ most notably for EC-Earth; the ratios are 65 : 35 for one approach (A) but nearly balanced for the other (B ; Fig. A8b).
The US heatwave of 2012 evolved concurrently with a severe drought after an unusually warm winter and spring (Dole et al., 2014;Hoerling et al., 2014;H. Wang et al., 2014). The event is assessed for the region from 35 to 50 • N and 55 to 110 • W (see Fig. A3) and for 23 June to 7 July (Fig. 6c). The TX anomaly is represented well by MIROC and EC-Earth and a bit overestimated in CESM. The models agree well on the relative contribution of the drivers. In EC-Earth and CESM, recent climate change explains around 15 % of the event anomaly, whereas for MIROC it is slightly more (23 %, Fig. 7c). All models agree that the role of the ocean is very small, even if the sign is negative for EC-Earth and CESM (both −1 %) but positive for MIROC (6 %). This agrees with H. and Hoerling et al. (2014), who find the contribution by SSTs to be small. The three models agree that the role of soil moisture conditions is about equal to the role of atmospheric circulation. This is supported by earlier studies finding an important contribution by both the weather patterns and soil moisture deficit PaiMazumder and Done, 2016;H. Wang et al., 2014). The ratio of circulation to soil moisture contribution is 50 : 50 for EC-Earth, 55 : 45 for CESM, and 60 : 40 for MIROC. The individual results for the disentangling approaches show that for all models soil moisture dominates for one approach (A), while atmospheric circulation dominates for the other (B; Fig. A8c).
At the time, the summer of 2012/13 was the warmest summer observed in Australia, but it has since then been surpassed by the 2018/19 and 2019/20 summers (Bureau of Meteorology, 2020). The Australian heatwave of 2012/13 is analyzed for the region from 18 to 30 • S and 133 to 147 • E (see Fig. A3). The hottest consecutive 15 d occur just at the beginning of 2013 from 1 to 15 January 2013 (Fig. 6d). The models represent the TX anomaly from ERA-Interim mostly well, except that in MIROC it is underestimated during the first half of the event period. While the contribution by recent climate change to the event anomaly is very small and negative in EC-Earth (−2 %), CESM and MIROC agree on a small but positive contribution (7 % and 10 %, respectively, Fig. 7d). All models show a negative contribution of the ocean, which is most notable in CESM (around −25 %), while in EC-Earth it is smaller (−7 %) and almost negligible in MIROC (−2 %). This is in line with the La Niña conditions that prevailed from mid-2010 to early 2012 and then remained neutral for the rest of 2012 and during 2013 (NOAA Climate Prediction Center, 2022) as well as with the findings by Lewis and Karoly (2013). For EC-Earth and CESM the contribution by the atmospheric circulation is larger than by soil moisture, whereas for MIROC it is the other way around. It was also found by King et al. (2014) that the dry conditions were an important driver of the heatwave. The ratio of atmospheric circulation contribution to soil moisture contribution is 60 : 40 for EC-Earth, 55 : 45 for CESM, and 40 : 60 for MIROC. For MIROC, the individual ratios from the two disentangling approaches both agree that the contribution of soil moisture is larger than the contribution of the atmospheric circulation to the event anomaly, whereas for the other two models the individual ratios reflect the fact that the contribution by the two main drivers of the Australian heatwave is equal to slightly circulation-dominated according to the experiments (Fig. A8d). This may reflect the fact that the warm bias simulated by MIROC is significantly alleviated with the soil moisture constrained experiment (Fig. 4).
Overall, the three models mostly agree on the relative contribution of atmospheric circulation vs. soil moisture to the TX anomaly during four recent heatwaves. For the heatwaves of 2010 in Russia and 2015 in Europe, all models show that the atmospheric circulation plays the most important role.
For the US heatwave of 2012, the models agree that soil moisture conditions are about as important as atmospheric circulation for driving the TX anomaly. For the Australian heatwave of 2012/13, two models show that atmospheric circulation is more important, whereas one model shows that the soil moisture contribution was largest. All models agree on a small role of climate change in driving the TX anomaly during the 2010 Russian heatwave and the 2012/13 heatwave in Australia. However, for the 2015 European heatwave and the US heatwave in 2012, the role of climate change differs between the models, being largest for MIROC and smallest for EC-Earth. The role of the ocean is small for the heatwaves of 2010 in Russia, 2015 in Europe, and 2012 in the US. For the 2012/13 heatwave in Australia, all models agree that the role of the ocean is negative, thus not enhancing the heatwave; however, the models disagree on the magnitude, with CESM being the only model displaying a notable contribution by the ocean.
Relative contribution of atmospheric circulation and soil moisture to episodes of anomalously warm temperatures
In the following, we analyze the role of atmospheric circulation and soil moisture in driving the occurrence of warm spells during 1982-2015/16 (2015 for MIROC and 2016 for the other two models). The disentangling method is the same as used previously in Sect. 5.1. Warm spells are defined gridpoint-wise as time periods during the local summer season when daily mean temperature anomalies in ERA-Interim exceed 1.5 SD (standard deviation) of the 1982-2010 climatology for at least 3 consecutive days. A 7 d running mean is applied to the years 1982-2010 from ERA-Interim before computing the daily climatology and standard deviation. The local summer season is defined as the hottest consecutive 3 months (from ERA-Interim) for each grid point. The threshold of 1.5 SD was chosen such that most regions of the world actually experience events. However, using 1 or 2 SD as the threshold leads to very similar results (not shown).
The identified warm spells are categorized into events of 3-5 d, 6-13 d, and 14 d or longer. The choice of categories was made to separate events lasting a few days from week-long events and very long-lasting events of 2 weeks or more. Also, the choice was made to obtain a reasonable sampling size for each category. The warm spells based on ERA-Interim are analyzed by taking the same dates (calendar year and days of the year) in the experiments. First, the same dates are analyzed for the fully constrained (AF_SF) experiment to determine the model truth for each event and model (using the ensemble mean for MIROC, which has five simulation runs). Then, the contribution of the drivers is disentangled according to Fig. 2. One or five simulation runs (over the years 1982-2015/16) are used, depending on how many were available per model and experiment. The mean temperature anomaly of each event category and experiment (averaged over all events and simulation runs) is used to disentangle the relative contribution of the atmospheric circulation and soil moisture conditions grid-point-wise. The agreement among models is very high for all spell lengths (Fig. 8). The grid points for which soil moisture contributes one-third or more to the warm spells agree well with the regions of high soil moisture-temperature coupling (Koster et al., 2004;Miralles et al., 2012). With increasing spell length, the contribution of soil moisture becomes more important, for example in the US Midwest, Eurasia, and northern Australia. Further, with a longer spell length there is a growing proportion of total soil moisture contribution, as can be seen by the increasing percentage of soil moisture dominance for all models in Fig. 8. This shows the growing relative importance of the land surface-atmosphere coupling for long-duration events.
The analysis also reveals that warm spells of 14 d or longer with a magnitude of more than 1.5 SD do not occur often or in many regions of the world. The result for eastern Europe, for example, can be traced back to the Russian heatwave in 2010. For tropical regions like Amazonia or very dry regions like the Sahara, it is not always possible to disentangle the relative contributions of atmosphere and soil moisture. This occurs because the computed differences can become negative if the less constrained experiments have a higher temperature anomaly than the more constrained experiments on average. The affected grid cells are masked out in white in Fig. 8.
It has to be noted that the analysis method takes into account the temporal persistence of warm spells but not temporal correlation such as the time-lagged effect of dry springs on hot summers (e.g., Hirschi et al., 2011;Quesada et al., 2012). Furthermore, the events are only identified grid-pointwise and not as spatially coherent patterns, as they would occur naturally. This is responsible for some of the noise in the patterns.
Conclusions and outlook
The ExtremeX experiment is a multi-model intercomparison project designed to study processes contributing to the occurrence and intensity of extreme events. ExtremeX currently consists of simulations with three ESMs: EC-Earth3, MIROC5, and CESM1.2. Five experiments were run with all models, with one or more of the models' components being constrained. SSTs and sea ice coverage fractions are prescribed in all experiments. A grid-point nudging approach is used to constrain the modeled horizontal winds in the atmosphere, and soil moisture prescription is used to constrain the land surface.
Although the constrained experiments capture the temporal evolution and magnitude of temperature anomalies well during recent heatwave events, climatological biases in temperature and precipitation remain in the experiments. This is the case for experiments with either nudged atmospheric circulation patterns and/or prescribed soil moisture conditions. In some cases, biases are enhanced or even change sign in the constrained experiments . Comparing the location and magnitude of the climatological biases reveals that the patterns and sign of the biases often remain and the magnitude is only marginally reduced. This agrees with findings by Wehrli et al. (2018) for atmospheric nudging in CESM. The results suggest that the biases are caused by other processes such as cloud and precipitation formation, convection, interactions of the land surface and the ocean with the atmosphere, or also land surface parameters. It is also likely that none of these other elements is the sole explanation for the Figure 8. Contribution of atmospheric circulation (ATM) vs. soil moisture (SM) to warm spells during the local summer season when daily mean temperature anomalies exceed 1.5 SD (standard deviation) from the ERA-Interim 1981-2010 climatology. The local summer season is defined as the hottest consecutive 3 months (from ERA-Interim) for each grid point. The two approaches to compute the SM vs. ATM contribution are merged, giving equal weight to both. Events are categorized into spells lasting 3-5 d, 6-13 d, and 14 d or longer. Ocean grid points, Antarctica, Greenland, and Iceland are masked out in grey using the Greenland-Iceland (GIC) region from the AR6 reference regions for the latter two. Grid points for which no events were identified are also masked out in grey. Grid points for which the contributions could not be determined (see text) are masked out in white. In the lower left corner the grid points for which the SM contribution dominates over the ATM contribution (> 50 %) are given as an area-weighted percentage with respect to all valid grid points. climatological biases, but rather their interaction, including atmospheric circulation and soil moisture (dynamics) interactions.
Despite the biases in mean climatology, the experiments with constrained atmosphere and soil moisture can accurately reproduce temperature anomalies during and prior to heatwaves (Fig. 6). This is found for all models and supports the results by Wehrli et al. (2019) for CESM. This result implies that bias correction could alternatively be used to improve the representation of extreme events in the models instead of analyzing anomalies as is done here. The presented set of experiments can be used for extreme event analysis as long as the atmospheric circulation and/or soil moisture are major drivers of the event. The experiments are not ideal if the focus is on the role of the ocean because the ocean is prescribed. For events that are mainly ocean-driven, we would recommend a setup with interactive ocean experiments to compute the ocean contribution more accurately. This would apply, for example, to extreme events (i.e., droughts, heatwaves, floods) that are strongly driven by the El Niño-Southern Oscillation and other coupled ocean-atmosphere phenomena such as the Indian Ocean Dipole or the Pacific Decadal Oscillation. Nevertheless, here we derive the potential ocean contribution by comparing the anomaly to non-event years. The present study disentangles the role of atmospheric circulation vs. land surface processes for temperature anomalies. Therefore, additivity of the different contributions is assumed. This is inspired by the study by Kröner et al. (2017) for summer climate in Europe. Following the disentangling in Wehrli et al. (2019), experiments with constrained soil moisture (AI_SF) and with a nudged atmosphere and soil moisture constrained to climatological values (AF_SC) are used along with the control (AI_SI) and fully constrained (AF_SF) experiments. The experiment with a nudged atmosphere and interactive soil moisture (AF_SI) leads to similar temperature anomalies during heatwaves like the AF_SF experiment. Atmospheric nudging strongly constrains land surface conditions due to the control on available moisture, and because in AF_SF ERA-Interim is used to derive the target soil moisture that is prescribed, similar land surface conditions result in both experiments. Hence, AF_SI is not considered in the disentangling procedure (see Fig. 2). To have more robust results, two disentangling approaches are considered like in Wehrli et al. (2019). Both approaches tend to produce similar results, indicating that in a first-order assumption the contributions can be treated as additive. Nevertheless, it has to be noted that disentangling causality in a coupled system always comes with limitations. Differences between the approaches show nonlinearities in the responses due to feedbacks.
TX anomalies during four recent heatwaves are attributed to their physical drivers and to climate change. The four events considered are the 2010 Russian heatwave, the 2012 heatwave in the US, the Australian heatwave of 2012/13, and the European heatwave in 2015. Overall the models show good agreement on the role of the drivers. Recent warming (since 1982-2008) is found to positively contribute to the event anomaly for all events and nearly all models (not for the Australian heatwave of 2012/13 and EC-Earth). The largest contribution by recent warming is found for the US heatwave of 2012 (15 %-23 %) and for the European heatwave of 2015. However, for the latter event the three models agree less on the relative role of climate change (12 %-34 %). In the presented setup the ocean was not found to have a substantial role in driving any of the events considered. This could be due to the limited interaction between the ocean and the atmosphere due to the prescription of SSTs and sea ice or because the ocean was indeed not a driver of the events considered. For the Australian heatwave of 2012/13, the ocean is found to influence the temperature anomaly negatively in CESM (−23 %). This is in accordance with the cool to neutral phase of the El Niño-Southern Oscillation (NOAA Climate Prediction Center, 2022). For all four heatwaves the models show that both atmospheric circulation and land surface conditions significantly contribute to the event anomaly. For the Russian heatwave of 2010 and the European heatwave of 2015, atmospheric circulation is the dominant driver, with land surface conditions playing a secondary but still important role. Yet, for the US heatwave 2012, soil moisture is about as important as atmospheric circulation. For the Australian heatwave of 2012/13, one model shows that soil moisture is the most important driving factor, and the other two models show that the two physical drivers are about equally important. Note that, by design, the ExtremeX framework does not give information on which of the drivers is the initial source of the heatwaves since the constraining of the model components was carried out for the whole simulation period and events were analyzed using contemporaneous anomalies from the experiments.
The ExtremeX experiments also allow a general assessment of the respective contributions of circulation anomalies vs. soil moisture conditions for warm spells. The results are very similar for the three ESMs, showing that the models generally agree on the representation of extreme events and the driving processes behind these events. Warm spells of at least 3 d are assessed grid-point-wise and show that soil moisture is responsible for around one-third to half of the temperature anomalies in transitional and tropical climate zones (Fig. 8). The regions identified resemble the regions of strong soil moisture-temperature coupling highlighted by Miralles et al. (2012) for observational data and Seneviratne et al. (2006) for global climate models. Both studies additionally identify southern Europe and Eurasia as regions of strong soil moisture-temperature coupling, which is, however, not confirmed by the results presented here. Nevertheless, in regions where spells of at least 2 weeks can occurlike in Eurasia -soil moisture is more important for these longer events than for shorter events, driving up to one-third of the temperature anomaly.
This study expands the mechanistic analysis of recent heatwaves by Wehrli et al. (2019) using three Earth system models. The results for warm spells at the grid-point level and for the four heatwaves suggest that both circulation patterns and soil moisture anomalies substantially contribute to the occurrence of heat extremes, which is consistent with Wehrli et al. (2019). Soil moisture effects are particularly important in the tropics, monsoon regions, and the US Great Plains, while circulation anomalies tend to dominate in other regions of the extratropics. These results can help to shed light on processes that need to be better taken into account in weather predictions and climate projections. For instance, the important role of soil moisture conditions for extremes suggests that soil moisture monitoring and initialization could substantially improve forecasting of weather extremes in several regions. Figure A2. Bias of the zonal (U ) and meridional (V ) wind components at 850 and 500 hPa for the experiments with a nudged atmosphere (AF_SI) and free atmosphere (AI_SI), showing the ensemble mean when multiple simulations are available. Shown is the average of 6-hourly wind fields for 1 month (June 2000). The winds from the models were interpolated to 500 and 850 hPa. The winds from ERA-Interim were interpolated to the same pressure levels and the model resolution for each model. Figure A5. Bias in evapotranspiration with respect to LandFlux-Eval. Average over (a) JJA for the Northern Hemisphere and (b) DJF for the Southern Hemisphere. Masked out are grid points with a seasonal average of less than 0.1 mm evapotranspiration per day in the reference data set. Additionally, ocean grid points, grid points north of 75 • N, Antarctica, Greenland, and Iceland are masked out using the Greenland-Iceland (GIC) region from the AR6 reference regions for the latter two. The RMSE averaged over all valid grid points of the respective hemisphere is given in millimeters per day in the upper right corner of each experiment and model. Figure A8. Same as Fig. 7 but showing the separate effects for the two approaches to compute SM vs. ATM contributions (left A, right B).
Code availability. The analysis code can be made available by the authors upon request.
Author contributions. KW, MH, and SIS designed the experiments with input from OM and RV. FL ran the EC-Earth3 simulations with technical help from FS, WM, and PLS. HS ran the interactive and atmosphere nudged simulations with MIROC5. DT and HK ran the soil moisture nudged simulations with MIROC5. KW ran the CESM1.2 model simulations with technical support by MH. KW analyzed the results from all models. KW, FL, MH, HS, DT, HK, DC, WM, OM, RV, and SIS contributed to the discussion of results. KW prepared the paper with contributions from all co-authors. the project. This study uses the LandFlux-EVAL merged benchmark synthesis products of ETH Zurich produced under the aegis of the GEWEX and ILEAPS projects (http://www.iac.ethz.ch/url/ research/LandFlux-EVAL/, last access: 26 July 2022). Review statement. This paper was edited by Laurens Ganzeveld and reviewed by Paul Dirmeyer and one anonymous referee. | 14,117.6 | 2021-07-13T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
MODELLING AND FORECASTING VOLATILITY IN THE GOLD MARKET
We investigate the volatility dynamics of gold markets. While there are a number of recent studies examining volatility and Value-at-Risk (VaR) measures in financial and commodity markets, none of them focuses on the gold market. We use a large number of statistical models to model and then forecast daily volatility and VaR. Both insample and out-of-sample forecasts are evaluated using appropriate evaluation measures. For in-sample forecasting, the class of TARCH models provide the best results. For out-of-sample forecasting, the results were not that clear-cut and the order and specification of the models were found to be an important factor in determining model’s performance. VaR for traders with long and short positions were evaluated by comparing failure rates and a simple AR as well as a TARCH model perform best for the considered back-testing period. Overall, most models outperform a benchmark random walk model, while none of the considered models performed significantly better than the rest with respect to all adopted criteria.
Introduction
The recent global financial crisis has highlighted the need for financial institutions to find and implement appropriate models for risk quantification. Hereby, in particular Value-at-Risk (VaR) and volatility estimates were subject to significant changes during 2007-9 financial turmoil in comparison to normal market behaviour. Further, as the risk in equity and bond markets was increasing, there was a particular interest of investors to increase their positions in the gold market. This study evaluates the effectiveness of various volatility models with respect to forecasting market risk in the gold bullion market. While there is a stream of literature examining performance of models for volatility and VaR, this is a pioneer study to particularly focus on the gold market. Despite the important role gold plays for risk management and hedging in financial markets, there has been relatively little literature on the estimation of volatility of gold. Exceptions include the studies by Mills (2003), Tully and Lucey (2006), Canarella and Pollard (2008), Morales (2008) and Jun (2009).
Generally, the gold market has a significant and unique role in financial markets as a safe haven that is also used for hedging and diversification. While there is no theoretical reason why gold is referred to as a safe haven asset, historical evidence suggests that investments in the gold market spikes during times of turmoil in other financial markets. One explanation could be that it is one of the oldest forms of money and was traditionally used as an inflation hedge. Moreover, gold is often uncorrelated or even negatively correlated with other types of assets. This is an important quality that allows gold to act as a diversification asset in portfolios, since a more globalised market has led to the increase in correlation among other assets. This became also evident during the financial crisis of [2007][2008][2009] where the negative effect of one market readily flowed into other markets, yet the gold market remained relatively unscathed during this period of turbulence. So far there has been no study using volatility and VaR modelling in the spot gold and gold futures markets. Gold market research has concentrated on the role of gold as a hedging or diversification tool, in particular as a safe haven during market crashes.
This study examines various models that can be used in forecasting volatility, to evaluate their respective performance. Finding appropriate models for volatility is of interest for several reasons: firstly, it is an integral factor of derivative security pricing, for example, in the classic Black-Scholes model or alternative option pricing formulas.
Secondly, as a representation of risk, volatility plays an important role in an investor's decision making process. Volatility is not only of great concern for investors but also policy makers and regulators who are interested in the effect of volatility on the stability of financial markets in particular and the whole economy in general. Finally, volatility estimation is an essential input in many VaR models, as well as for a number of applications in a firms market risk management practices.
The remainder of the paper is set up as follows. Section 2 provides a brief review on the global gold market and studies on volatility modelling of financial markets in general and gold markets in particular. Section 3 provides an overview on the data and techniques used in this study. In particular various models for volatility forecasting and evaluating model performance are reviewed. Empirical results of the study are reported in Section 4 while section 5 concludes.
The gold market
Gold has been used throughout history as a form of payment and has been a standard for currency equivalents to many economic regions or countries. In spite of its historical monetary significance, a free functioning world market only came of age in recent times. Before 1971, the gold standard was mostly used in various times in history, where domestic currencies have been backed by gold. The system existed until 1971, when the US stopped the direct convertibility of the United States dollar to gold, effectively causing the system to break down. Since then, a global market for gold in its own right developed, remaining open around the clock and open to a range of derivative instruments.
The market for gold consists of a physical market in which gold bullions and coins are bought and sold and there is a paper gold market, which involves trading in claims to physical stock rather than the stock themselves. Physical gold is generally traded in the form of bullions. The bullion market serves as a conduit between larger gold suppliers such as producers, refiners and central banks and smaller investors and fabricators. The bullion market is essentially a spot market, but is complemented by the use of forward trading for the hedging of physical positions.
Since 1919, the most widely accepted benchmark for the price of gold is known as the London gold fixing, a twice-daily (telephone) meeting of representatives from five bullion-trading firms. 1 Furthermore, there is active gold trading based on the intra-day spot price, derived from gold-trading markets around the world as they open and close throughout the day. The key prices in the London bullion market are the spot (fixings) price, the forward price and the lease rate. The spot (fixings) price is a daily clearing or fix price obtained by balancing purchases and sales ordered through its members. The forward price (GOFO) is the simultaneous purchase and sales price of gold forward contracts of various lengths. Generally, the GOFO rate is expressed as an annual percentage. Finally, the lease rate refers to short-term loans denominated in gold and is expressed as an annualized interest rate.
Since 1971 the price of gold has been highly volatile, ranging from a high of
Factors influencing gold prices
As mentioned above, gold has a unique place in financial markets. Of all the precious metals, gold is the most popular as an investment. Investors generally buy gold as a hedge or safe haven against any economic, political, social or currency-based crises. These crises include investment market declines, burgeoning national debt, currency failure, inflation but also scenarios like war or social unrest. As in any commodities, the price of gold is ultimately driven by its supply and demand.
However, unlike other resources, hoarding and disposal plays a much bigger role in price formation because most of the gold ever mined still exists and is potentially able to enter the market for the right price. Given the huge quantity of stored gold, compared to the annual production, the price of gold is mainly affected by changes in sentiment, rather than changes in the actual annual production.
Also macroeconomic factors such as low real interest rates can have an effect on gold price. If the return on bonds, equities and real estate is not adequately compensating for risk and inflation, then the demand for gold and other alternative investments such as commodities increases. An example of this is the period of stagflation that occurred during the 1970s which led to an economic bubble forming in precious metals.
Financial market declines such as the 2007-9 global financial crisis usually leads investors to look for alternative and less volatile investment opportunities for their funds. It will also increase the need for investors to hedge their portfolios to minimise their risk in case of further decline. The demand for gold and, thus, its price increase, empirically is due to the role of gold as a safe haven in times of crises. This is one of the major reasons to drive gold prices to new highs throughout the post-financial crisis period.
Central banks and the International Monetary Fund (IMF) also play an important role in determining the gold price. At the end of 2004 central banks and official organizations held 19 percent of all above-ground gold as official gold reserves. Thus, they have a significant influence on the gold market not only as a major buyer and seller. Also, speculation on their future gold holding levels can also be a driving factor.
Recently, the assumption that central banks around the world will increase their gold reserve levels as a hedge against the falling US dollar has also contributed to the rise of gold prices.
The performance of gold bullion is often compared to stocks. However, they are fundamentally different asset classes. Gold is regarded by some as a store of value (without growth) whereas stocks are regarded as a return on value. Stocks and bonds perform best in periods of economic stability and growth, whereas gold is seen as the asset to hold in times of uncertainty and crisis. Throughout history there has been a cyclical run with long periods of stock outperformance followed by long periods of gold outperformance. Over the long term, equity markets have been able to outperform gold overall.
Volatility Models
Within the last three decades various approaches to volatility modelling have been suggested in the econometric and financial literature. In the following we will provide a brief overview of developments in the literature starting with the autoregressive conditional heteroskedasticity (ARCH) models (Engle, 1982). Bollerslev (1986) introduced the generalised ARCH (GARCH) model. The latter is often utilised in financial market studies. The general idea is to predict the current period's variance by forming a weighted average of a long term average, the forecasted variance from last period, and information about volatility observed in the previous period. If the return is unexpectedly large either in the upward or the downward direction, then the trader will increase the estimate of the variance for the next period. This model is also consistent with the volatility clustering often seen in financial returns data, where large changes in returns are likely to be followed by further large changes.
Since the introduction of these models, they have been widely used in volatility modelling and forecasting. Researchers such as French et al. (1987) and Akgiray (1989) utilised GARCH models to capture the behaviour of stock market price volatilities. Argiray (1989) compared the GARCH (1,1) model to other historical estimation methods and found that the GARCH (1,1) model outperformed its competitors. Many extensions of the GARCH model have been introduced in the literature since: e.g. GARCH-in-mean (GARCH-M) models (Engle et al., 1987), EGARCH models (Nelson, 1991), Threshold ARCH (TARCH) and Threshold GARCH (TGARCH) (Glosten, Jaganathan, and Runkle, 1993;Zakoïan, 1994) and Power Arch (PARCH) models (Ding et al., 1993) just to name a few.
A number of studies have focused on optimal model specification and the performance of various GARCH models in financial markets providing no clear-cut results. Hansen and Lunde (2005) carried out comprehensive testing of 330 variants of ARCH type models on their performance in estimating volatility in exchange rates and stock returns. The study found that the GARCH (1,1) model outperforms other models in estimating exchange rate volatilities but underperforms in estimating stock returns. McMillan et al. (2000) tested a set of ten volatility estimation models including random walk, moving average and GARCH models in forecasting UK stock market returns at different frequencies. They found that the performance of each model varied depending on the length of frequencies, the series as well as the type of loss function being applied. The random walk model outperformed others at the monthly frequency, while GARCH and moving average models were superior using daily forecasts. Brooks andPersand (2002, 2003) examine various ARCH and GARCH type models with respect to volatility forecasting. They report that, while the forecasting performance of the models depended on the considered data series and time horizon, the overall most preferred model is a simple GARCH(1,1). This is also consistent with many other studies such as e.g. Bollerslev et al. (1992). On the other hand, Braisfold and Faff (1996) evaluate volatility models in forecasting stock returns, and find that none of the models significantly outperforms the others.
Recently, also a stream of literature has emerged focusing on modelling and forecasting volatility with respect to the quantification of Value-at-Risk (VaR). As pointed out by Jorion (1996), VaR plays a substantial role in managing risks for financial institutions. The importance of the VaR measure is further highlighted by regulators in the Basel Committee on Banking Supervision. 2 The performance of volatility models with respect to appropriate quantification of VaR has been investigated by Danielsson and De Vries (2000): conditional parametric methods such as the GARCH model significantly underpredict the VaR of U.S. stock returns. Laurent (2001, 2003) investigate volatility models for both negative and positive returns, with the latter representing risk for short position holders. They find that skewed asymmetric ARCH models using the Student t distribution perform best with respect to risk quantification. Sadorsky (2006), investigating oil price volatility, tested a great variety of volatility models by evaluating the forecasting performance using different VaR measures. His findings suggest that while no model could consistently outperform the others, a GARCH model as well as a TGARCH performed quite well for modelling and forecasting the volatility and risk of oil prices. Tully and Lucey (2006) examine various macroeconomic influences on gold using models including the asymmetric power GARCH model (APGARCH) for spot and futures prices over a 20 year period, paying special attention to periods of stock market crashes. Their results suggest that the price of gold is significantly influenced by the U.S. dollar while during periods of financial crises an APGARCH model performs best with respect to volatility. Mills (2003) investigates the statistical behaviour of daily gold prices, and finds that price volatility scaling with long-run correlations is important while gold returns are characterised by short-run persistence and scaling with a break point of 15 days. Canarella and Pollard (2008) apply power GARCH model to the London Gold Market Fixings to investigate long memory features as well as conditional volatility behaviour of the returns. They find that APGARCH models were able to adequately capture long memory in returns and that market shocks have strong asymmetric effects: conditional volatilities of gold prices are affected more by good news (positive shocks) than bad news (negative shocks).
Morales (2008) discusses volatility spill-over effects between precious metal markets using GARCH and EGARCH techniques. Gold was found to be influenced by prices of other precious metals, but there was little evidence to suggest other precious metals influencing gold prices.
The Data
The data for this study are daily PM gold fixing prices on the London Bullion Market available from the official The London Bullion Market Association website (www.lbma.org.uk). The market is a wholesale over-the-counter (OTC) market for gold and silver. The fixings are the internationally published benchmarks for precious metals. The Gold Fixing is conducted twice a day by five Gold Fixing members, at 10:30 am and 3:00 pm. This study will use the daily PM fixings price released at 3:00 pm as quoted in USD. The data cover 2508 observations from 4 January 1999 to 30 For the observed gold fixing prices p t , the daily log-returns are calculated as r t = ln (p t /p t-1 ). Table 1 provides a summary of descriptive statistics for the considered return series. We observe that the mean and median of daily returns are positive indicating that overall gold prices were increasing during the considered time period. The magnitude of the average return (0.044%) is very small in comparison to its standard deviation (1.14%). Further, the large kurtosis of 8.53 indicates the leptokurtic characteristics of daily returns. Obviously, the series has a distribution with tails that are significantly fatter than those of a normal distribution. This indication of non-normality is also supported by the Jarque and Bera (1980) test statistic, which rejects the null hypothesis of a normal distribution at all levels of significance. Figure 1 provides a plot of the time series for the daily log-returns as well as a histogram of the return distribution. The figures indicate heteroscedasticity and volatility clustering for the return series that also exhibits a number of rather isolated extreme returns caused by unforeseen events or shocks to the gold market. We further test for stationarity of the return series using the Augmented Dick Fuller (1979) (ADF) and Phillips Perron (1988) (PP) unit root tests.
The ADF test is set to a lag length 0 using the Schwarz Information Criterion (SIC) and the PP test is conducted using the Bartlett Kernel spectral estimation method.
Results are reported in Table 2, and indicate that for both tests the null hypothesis of a unit root is rejected. So the return series gold fixing prices can be considered to be stationary.
Considered Models
In the following, a variety of models is introduced for volatility modelling and forecasting of the daily returns. We will follow several studies in the literature, see e.g. Sadorsky (2006), and measure the volatility of gold by its squared daily return: Thus, most of the models will be evaluated with respect to their ability to model and forecast the volatility measured by the squared return of the gold fixings price.
The first model to be considered in the empirical analysis is a random walk model (RW). If the volatility of gold market returns follows a random walk, the best forecast for the next period's volatility is the volatility observed in the current period: This random walk model will be used as a benchmark model for the out-of-sample performance of the estimated models.
The second standard class of models to be considered are historical mean (HM) models. In these models, the forecast for the volatility of the next period is the average of all previous volatilities. In particular, if � 2 is a random variable, which is uncorrelated with other observable variables and if � 2 is uncorrelated with its own past values, then the population mean can be considered as the optimal forecast.
Defining 2 = 2 , the HM model can be denoted by A popular alternative to the HM model is the m-period moving average (MA) model. The forecast for the next period is based on the average of the last m observations. A value for m has to be determined. We decided to use moving averages of length m=20, 40 and 120 days, corresponding to about one month, two months and six months. The MA(m) model can be denoted by:.
The next model we consider is the exponentially weighted moving average model (EWMA). It forecasts the future volatility by applying weighting factors which decrease exponentially. That is, the method gives higher weights to more recent observations while still not discarding older observations entirely. It is calculated as the weighted average of the estimated volatility � 2 for day t (made at the end of day t-1) and the value of volatility 2 observed on day t: The smoothing parameter α governs how responsive the forecast is to the most recent daily percentage change. Generally, α lies between 0 and 1, and the process becomes a RW for α =0. A popular choice for the parameter α is based on J.P. Morgan's RiskMetrics (1995) where it is suggested that α = 0.94 provides forecasts of the variance rate closest to the actual variance rate for a range of different market variables.
An alternative is an ordinary least squares (OLS) model. The relationship between volatility on day t and day t+1 is described based on a linear relationship: The parameter estimates are then determined by OLS estimation. The model can be extended to an autoregressive (AR) model of order p where the current volatility is a linear function of the last p observations for the volatility. We implement a model of order p = 5 such that we estimate an AR(5) model that can be described by the following equation: We also consider a weighted moving average of disturbance terms model (MAD) where the volatility in period t+1 is modelled as a function of the lagged values of the disturbance term ε t . Similar to the AR model, we decided to use a MAD model of order 5 that can be described by the following equation: We decided to also use an autoregressive moving average (ARMA) or Box-Jenkins model that includes both an autoregressive (AR) and a moving average (MAD) component. A simple ARMA(1,1) can then be described by the following equation: Since the introduction of autoregressive conditional heteroscedasticity (ARCH) models by Engle (1982), the ARCH and even more the related GARCH (Bollerslev, 1986) model have become standard tools for examining the volatility of financial variables. The model has proven to be very useful in capturing heteroskedastic behaviour or volatility clustering without the requirement of higher order models in various financial markets, see e.g. Choudhy (1996) or Sadorsky (2006). In a GARCH (1,1) model the conditional variance equation can be denoted by while the equation for the conditional mean is such that the one day ahead variance forecast can be expressed as: A popular extension of the GARCH (1,1) model is also the GARCH in mean (GARCH-M) model that was first proposed by Engle et al. (1987). The GARCH-M model includes the conditional variance in the specified equation for the conditional mean. This allows for so-called time varying risk premiums. Chou (1988) suggests that the dynamic structure of the conditional variance can be captured more flexibly by a GARCH-M model, using the following specification for the conditional mean: Another extension of standard ARCH and GARCH models has been suggested by Glosten et al. (1994) and Hentschel (1994): threshold ARCH (TARCH) and GARCH (TGARCH) models, which are popular in describing return asymmetry.
Large negative returns are often followed by a substantial increase in volatility such that the TARCH and TGARCH models distinguish between negative and positive returns. The TGARCH model that will be considered in the empirical analysis treats the conditional standard deviation as a linear function of shocks and lagged standard deviations (Hentschel, 1994) and is denoted by: where −1 is equal to 1 if ε t < 0, and zero otherwise. Obviously, in this model, −1 2 > 0, and −1 2 < 0 will have different effects on the conditional variance. If ≠ 0, there is asymmetry in the model. If > 0, the occurrence of bad news will increase volatility and there is evidence of a leverage effect.
Performance Evaluation Measures
To evaluate the performance of the considered models, we apply a variety of measures such as mean squared error (MSE), root mean squared error (RMSE), mean absolute deviation (MAD), mean absolute percentage error (MAPE) and the Theil U statistic. The MSE quantifies the difference between predicted and actually observed values by considering the squared difference between these two quantities: The RMSE is simply the root of MSE and has the advantage of being measured in the same unit as the forecasted variable: The MAE is also measured in the same unit as the forecast, but gives less weight to large forecast errors than the MSE and RMSE: We also investigate the forecasting performance using the Theil U statistic that examines the RMSE measure of a forecast against a naïve one step ahead forecast. If the Theil U statistic is smaller than 1, the tested forecast model outperforms the naïve model: if the U statistic is larger than 1, the naïve forecast is the better model. Note that in our analysis we decided to use the RW model as the naïve benchmark model for forecasting.
While the above forecasting quality measures are useful for providing different performance measures on applied models, they do not statistically test if the models are significantly different or better from another. Therefore, we will also apply the Diebold-Mariano (1995) test (DM) to compare the predictive ability between two forecasting models. The null hypothesis of the test is that the predictive ability of two forecasting models is the same. In our empirical analysis, we are particularly interested whether our forecast models are able to significantly outperform a simple RW model such that the considered models are tested against the RW model using a simple t-test, see e.g. Diebold (1998). Thus, the null hypothesis of equal performance of the models is rejected when the test-statistic yields significant values. In the empirical analysis we will restrict ourselves to oneperiod-ahead forecasts only. Note that the test could also be applied to k-step-ahead forecasts, see e.g. Diebold and Mariano (1995). The authors point out that the test tends to be less accurate for small sample sizes and k-step-ahead forecasts. However, these issues are unlikely to affect our empirical analysis due to a comparably large sample size and the use of one-period-ahead forecasts only.
In-sample forecasting performance
In this section, we compute the one-step-ahead volatility forecasts using the models described in the previous section. For the in-sample analysis, the data are divided into three sub-periods: sub-period 1 from 28 th Jun 1999-Dec 2004 In the first sub-period, the price fluctuations were relatively low with a general upward trend. Only one structural break occurred after the 11 September 2001 attack. U also indicate that the EWMA, AR(5) and ARMA models perform well.
The RW is once again the worst performing model, ranking last for all statistics except MAPE. The DM values for this period are all highly significant even at the 0.01 level, indicating that most models are able to significantly outperform the RW benchmark in this period. This is also confirmed by U statistic where all models yield lower values than in the first sub-period. The U values range from 0.26 to 0.34 indicating that even the worst performing model (HM) is still significantly better than the RW benchmark. Overall, the results for the second sub-period suggest that predictive models with conditional volatility like TARCH, GARCH and GARCH-M seem to perform quite well during this period of significant increases in the gold price.
The third sub-period from January to December 2008 also includes the advent of the global financial crisis, when various financial markets as well as the gold market exhibited a long period of extreme volatility. Generally, one would expect this period being the most difficult for volatility prediction. This is confirmed by both MSE and MAE-based criteria yielding clearly higher values than for the previous two subperiods. For example, the MSE is five times higher than during the first and second sub-period while the MAE increases by roughly 200 percent. Also for the third subperiod, MSE, RMSE and U favour the TARCH model as yielding the best predictions, while the AR(5) and MAD(5) rank second and third. For these criteria, the random walk model is the worst performing model, followed by the HM model. Also the MAE measure gives indication of superiority of the TARCH model over the others.
However, for this criterion, the AR and MAD models perform rather poorly and only rank ninth and tenth. Again, the two worst performing models are the RW and HM model.
The DM test show that for the third sub-period all models were able to significantly outperform the RW model at the 0.01 level. Results for Theil's U are similar to the second sub-period indicating that the models provide substantially smaller RMSE than the RW model for the volatile third sub-period. Overall, we conclude that for in-sample fit, the TARCH model can be considered as the most appropriate, ranking first for almost all of the examined performance measures and sub-periods.
Out-of-sample forecasting results
In the following we report the results for an out-of-sample analysis of the models by comparing one-step-ahead volatility for the most volatile period from July 1, 2008 to December 30, 2008. A recursive window approach is used. For the recursive window approach, the initial estimation date is fixed and the models are estimated using all observations available up to the initial estimation date. It is an iterative procedure, where in each time step, the estimation sample is augmented to include one additional observation in order to re-estimate the volatility forecast for the next day.
Again, results are benchmarked against a RW model. Note that despite its simplicity, particularly in out-of-sample forecasting the random walk model is often considered as a benchmark model that is difficult to beat: for example, Stock and Watson (1998) examine various US macroeconomic time series and suggest the RW model to perform best amongst a number of competing models.
The out-of-sample results for the different models are provided in Table 6. Our results for the MSE criterion suggest that the MA(40) model provides the most accurate forecasts while the EWMA model ranks seconds. Interestingly, similar to the considered in-sample periods, the RW model proved to be the worst amongst the examined models also for out-of-sample forecasting. It ranked last with respect to the MSE criterion and provided predictions significantly less accurate than most of the considered models. Another feature of the results is that there are only relatively small differences with respect to MSE among the ten best models: the MSE for the MA(40) model is 83.29 while the MAD(5) model provides a MSE of 90.08.
With respect to MAE, we observe the smallest error for the MA(120) model.
The HM and MAD models, also perform well, ranking second and third, respectively.
The benchmark RW model is substantially less accurate than the other models. Again the marginal difference between the first and tenth ranked model is comparably small. The ARMA models rank second to eleventh across the different measures indicating the importance of the right choice of the order of the coefficients.
In summary, we conclude that there are only small differences with respect to the out-of-sample forecast performance between the considered models. The MA (40) could be considered the best model based on the MSE and U measures. Other models that have performed well are the ARMA(1,1) and the EWMA model. Furthermore, despite their generally good performance in the in-sample periods, for the considered out-of-sample period the GARCH models did not perform that well. In particular the TARCH model, that was the clear winner when in-sample volatility predictions were considered, only ranked between 9 and 13 across the measures. Overall, there are no significant differences between the models and the rankings based on each performance measure are quite different.
We conclude that, for the out-of-sample forecasting, it is hard to choose an overall winner. We will now extend our analysis by examining the different models with respect to risk quantification. In particular, we investigate and report their performance in forecasting Value-at-Risk (VaR).
Value at risk Analysis
In this section, we examine the proposed models with respect to adequate VaR quantification in an out-of-sample forecasting study. For a given portfolio, probability and time horizon, VaR is defined as a threshold value of the probability that the markto-market loss of the portfolio over the given time horizon exceeds this value at a given probability level. In or analysis, following Laurent (2001, 2003), Kupiec (1995);Christoffersen (1998); Christoffersen and Diebold (2000) or Hull (2007). The results for the calculated VaR forecasts for long and short positions in the gold market are provided in Table 7 and 8. We apply a test that is based on the actual number of observed exceptions versus the expected number of exceptions, see e.g. Hull (2007). The test uses a binomial distribution such that given a true probability p of an exception, the probability of the VaR level being exceeded m or more days is: Based on these quantities it is easy to derive p-values for a correct VaR model specification given the number of exceptions that were actually observed.
We find that the random walk model performs rather poorly both for the 95% and 99% VaR. For the long position, we observe 18, respectively 16 VaR exceptions corresponding to a failure rate of 14.2% and 12.6% that is substantially higher than the expected 5% and 1% under the assumption of a correct model specification.
Similar results are obtained for holding a short position where the fraction of VaR exceptions is approximately 11% and 9.4%, respectively. Thus, as indicated by the pvalues, for both 95% and 99% VaR levels, the model is significantly rejected.
While most of the models provide clearly less VaR violations than the RW model, only few of them are not rejected by the test for at least one of the two considered confidence levels. The HM and OLS model also significantly underestimate the risk, and yield too many exceptions for both long and short positions in particular at the 0.01 level. On the other hand, the three MA models yield a very small number of VaR violations, but the estimates are too conservative. As indicated in Table 7, for the long position, each MA model only yields one exception at the 95% VaR level leading to a rejection of the models even at the 0.10 significance level. Almost the same results are obtained for holding a short position in the gold market where the 95%-VaR estimates are also too conservative, so all MA models are rejected. Note however, that the models are not rejected for the 99%-VaR level since only a very small number of exceptions are expected at this level. Similar results are obtained for the ARMA, EWMA and two models with conditional variance GARCH(1,1) and GARCH-M model.
These models only yield two exceptions at the 95% level and zero or one exception at the 99% level for a long position: for a short position, only the GARCH(1,1) model yields one exception at the 95% confidence level. The VaR estimates of these models are too conservative for the considered time period such that all models are rejected at the 5% significance level. The MAD(5) model gives too many exceptions at the 95% confidence level for a long position in gold, while it performs reasonably well at the 99% level for short positions.
The best results -at least for long positions -are obtained for the AR(5) model and again for the threshold conditional volatility TARCH model. These models seem to provide adequate one-day-ahead risk forecasts for long positions and cannot be rejected for any of the considered confidence levels. Considering short positions, the models seem to provide estimates that are overly conservative and yield only one exception at the 95% and no exception at the 99% confidence level. Still, given the reasonable performance of the AR(5) and GARCH models for long positions, they could be considered as being most appropriate in terms of providing VaR forecasts.
Overall, we conclude that there was no clear winner with respect to providing one-day ahead Value-at-Risk forecasts.
Summary and Conclusions
In this paper we investigate the modelling of volatility dynamics of gold market returns in London. Gold markets are usually considered as a safe haven and investments into this class of assets have been very popular, in particular, since the global financial crisis. Therefore, appropriate models for volatility dynamics in these markets are of great interest to both investors and hedgers. While there are a number of recent studies examining volatility and Value-at-Risk (VaR) measures in financial and commodity markets, none of them focuses in particular on the gold market.
Compared to the numerous studies on volatility modelling and forecasting focused on equity and commodity markets in general, we provide a pioneering study on the volatility of this important market. We contribute to the literature by using a large number of statistical approaches in order to model and forecast the daily volatility and Value-at-Risk in the gold spot market. Hereby, we distinguish between different time horizons including a sub-period of continuously but only slightly increasing gold prices, a sub-period of substantially increasing gold prices and, finally, a sub-period of high volatility in the gold market. Both in-sample and out-of-sample forecasts are evaluated using appropriate forecast evaluation measures.
For in-sample forecasting, the class of TARCH models provided the best results among the tested models. Interestingly, the performance of a GARCH (1,1) model, that is generally supported by empirical studies for volatility modelling in financial markets (Akgiray, 1989;Franses and van Dijk, 1996), was only ranked in the middle of all models in our study. For out-of-sample forecasting, results were not that clearcut and the order and specification of the models was found to be an important factor in determining the model's performance. VaR for traders with long and short positions were evaluated by comparing actual VaR exceptions to theoretical rates. For this task a simple AR as well as a TARCH model performed best for the out-ofsample period. We also find that most models were able to significantly outperform a benchmark random walk model both in the in-sample and the out-of-sample forecasting. However, none of the considered models performed significantly better than the rest with respect to all of the considered criteria.
The out-of-sample period from July to December 2008 that has been tested in this study was one of the most volatile periods in the history of financial markets. As a result, the behaviour of the daily returns might be significantly different to previous periods and, also, possibly future periods. Thus, models that perform well in the considered out-of-sample period may well underperform in future periods, particularly when market conditions change. Second, though the study attempts to comprehensively investigate the volatility in the gold market by the means of using various models, it still only covered a small number of models available in this area.
For example, for models with conditional volatility, only three of the most widely used GARCH models were considered, leaving out a huge number of other GARCH model extensions. The flaws of VaR as a measure of risk along with the effectiveness of alternative risk measures such as expected shortfall, have been pointed out in the literature by e.g. Artzner et al. (1999). We leave the investigation of these issues to future work.
Author information: Stefan Trück is a Professor of Finance in the Department of Applied Finance and Actuarial Studies and Co-Director of the Centre for Financial Risk at Macquarie University. Email<EMAIL_ADDRESS>Kevin Liang is a graduate student of Macquarie University. He has extensive professional experiences in the finance industry and is currently working as a credit risk analyst. Email: kzyliang@yahoo.com.au. | 9,074.8 | 2012-03-22T00:00:00.000 | [
"Economics",
"Mathematics"
] |
Lignocellulosic Biomass Waste-Derived Cellulose Nanocrystals and Carbon Nanomaterials: A Review
Rapid population and economic growth, excessive use of fossil fuels, and climate change have contributed to a serious turn towards environmental management and sustainability. The agricultural sector is a big contributor to (lignocellulosic) waste, which accumulates in landfills and ultimately gets burned, polluting the environment. In response to the current climate-change crisis, policymakers and researchers are, respectively, encouraging and seeking ways of creating value-added products from generated waste. Recently, agricultural waste has been regularly appearing in articles communicating the production of a range of carbon and polymeric materials worldwide. The extraction of cellulose nanocrystals (CNCs) and carbon quantum dots (CQDs) from biomass waste partially occupies some of the waste-recycling and management space. Further, the new materials generated from this waste promise to be effective and competitive in emerging markets. This short review summarizes recent work in the area of CNCs and CQDs synthesised from biomass waste. Synthesis methods, properties, and prospective application of these materials are summarized. Current challenges and the benefits of using biomass waste are also discussed.
Introduction
The increasing demand for food as a result of population growth has resulted in an increase in agricultural production, which has consequently led to the acceleration of agricultural waste generation. This is accompanied by increased energy demands globally, depletion of fossil fuels, and climate change. Growing research interest has emerged concerning the use of biomass waste material to produce value-added products, due to its potential to form inexpensive and environmentally friendly materials without conflicting with food stock [1,2] Lignocellulosic biomass (LCB) is highly considered as a viable source for renewable energy and an important factor in sustainable economies. The three major building components of LCB are cellulose, hemicellulose, and lignin, with varying percentage composition. These components together with many other products can be extracted as primary or secondary products from LCB, as shown in Figure 1.
Studies on the use of biomass waste for the fabrication of carbon-based materials have emerged recently, such as the use of corncob residue for the fabrication of: porous carbon materials for supercapacitor electrodes [3], hollow spherical carbon materials for supercapacitors [4]), carbon nanosheets for lithium-sulphur batteries [5], carbon nanospheres for use as a high-capacity anode for reversible Li-ion batteries [6], and carbon quantum dots for metal ion detection [7]. Carbon quantum dots (CQDs) are the newest members of the carbon family. Since their discovery in 2004 by Xu et al. [8] and in 2006 by Sun et al. [9], they have gradually become a rising star in the 'carbon nanomaterials' family. CQDs are a subclass of zero-dimensional nanoparticles that consist of a carbon core and constitute different functional groups at the surface [10]. They are characterised by quasi-spherical morphology composed mainly of amorphous carbon with sp 2 -hybridised structure and a size less than 10 nm [11]. They exhibit attractive properties such as tuneable photoluminescence, functionalizability, dispersibility, multicolour emission associated with excitation, biocompatibility, size-dependent optical properties, facile synthesis, and low toxicity as compared to their counterparts (semiconductor quantum dots (QDs)) [12]. These extraordinary features make them suitable for potential applications in sensors, catalysis, healthcare, and energy storage devices [13]. In this review, we look into the most recent developments in the extraction of CNCs as well as the fabrication of CQDs from LCB waste.
Cellulose from Biomass
Cellulose is the most abundant renewable natural biopolymer on Earth. It is a polysaccharide that contains D-glucose units linked together via the β-1,4-glycosidic linkage, and has a general formula of (C 6 H 10 O 5 ) n , where n is the number of repeated monomeric β-dglycopyranose units. Cellulose serves as a dominant reinforcing phase in plant cell-wall structures, and its structural details vary depending on the source. Additionally, cellulose is also synthesized by algae, turnicates, and some bacteria [14][15][16]. Naturally occurring cellulose does not occur as isolated molecules, but it is found as assemblies of individual cellulose-chain-forming fibres. These fibrils pack into larger units called microfibrils, which are in turn assembled into fibre. Cellulose has both crystalline (highly ordered) and amorphous (disordered) regions. In the crystalline region, the molecular orientations and hydrogen bonding network vary, giving rise to cellulose polymorphs [17]. Several polymorphs of cellulose exist, namely cellulose I, cellulose II, cellulose III, and cellulose IV [17][18][19][20]. Cellulose I and cellulose II are the most common polymorphs of cellulose. Cellulose I is the native cellulose while cellulose II is obtained via an irreversible mercerization or regeneration of cellulose I [17,20]. The different polymorphs have different properties, such as hydrophilicity, oil/water interface, mechanical properties, thermal stability, and morphology, which contributes to their diverse applications [18,19]. Due to its crystallinity, cellulose I has been used in the synthesis of hydrogels, while cellulose II has been used as a bioethanol feedstock [18].
Efficient methods for the isolation of cellulose from LCB such as agricultural waste have recently sparked interest due to the growing interest in developing environmentally friendly and biodegradable materials from waste [21,22]. Cellulose has a wide variety of applications in food, construction materials, paper production, biomaterials, and pharmaceuticals [23]. In recent years, it has attracted a great deal of attention owing to its low cost, biodegradability, high surface-to-volume ratio, good mechanical strength, low environmental impact, abundance, easy functionalization, and versatility in nanoscale processing to form cellulose nanomaterial (nanocellulose) [14,24]. With its diameter in the nanoscale, nanocellulose has drawn a lot of research interest for a variety of applications [25]. Nanocellulose can be further classified into three main groups depending on the size and preparation methods. These three groups are cellulose nanocrystals (CNCs), cellulose nanofibrils (CNFs), and bacterial nanocellulose (BNCs) [26]. Both CNCs and CNFs can originate from LCB, while BNCs can be produced from microorganisms such as Gluconacetobacter xylinus [27]. The nanocellulose field has experienced major developments with reference to its preparation, functionalization and applications in various fields such as nanocomposite membranes, textiles, reinforcing agents, biomedical applications, wood adhesives, adsorbents, and so on [14,25,28].
Cellulose Nanocrystals
Cellulose has both highly ordered crystalline and amorphous regions in varying proportions, depending on its source. Removing the amorphous region influences the structure and crystallinity of the cellulose, resulting in the formation of CNCs [16,17,29]. CNCs are needle-like particles made up of cellulose chain segments that have been organized in an almost defect-free crystalline structure with at least one dimension less-than-or-equal-to 100 nm [16,30]. CNCs are also known as cellulose nanowhiskers, cellulose whiskers, and nanocrystalline cellulose, but CNCs is the most used term [16,25,30]. CNCs have a high thermal stability, surface area, and crystallinity compared to bulk cellulose, which has more amorphous fractions [31]. Different types of LCB waste have been used to extract CNCs such as cotton [32], pineapple leaf [33], sugarcane bagasse [34], walnut shell [35], soy hulls [30], bamboo fibre [36], and many more. Despite comprehensive research into a variety of biomass wastes, some of the potential natural sources for the development of cellulose nanocrystals, such as corncob, are yet to be widely explored. Figure 2 shows the number of publications with the term "extraction of cellulose nanocrystals" for the past decade (data extracted from Web of Science). More than 160 papers have been published each year for the past three years. The pie chart in Figure 2 also shows diverse fields that find relevance in the extraction of CNCs, although some of the fields overlap (data extracted from Web of Science).
Various techniques have been employed to prepare CNCs from LCB, which include chemical and mechanical techniques [36]. The two classical chemical treatments are acid hydrolysis and enzymatic hydrolysis, while the mechanical techniques include ultrasonication, high-pressure homogenization, microfluidization, high-speed blending, grinding, and cryocrushing [16,[36][37][38][39]. Chemical methods are some of the most commonly used methods for the extraction of CNCs owing to their ease of use, short preparation time, and relatively high yield, whereas mechanical methods require a lot of energy and produce nanocrystal products with a wide range of particle sizes [38,40]. Among the chemical methods, acid hydrolysis is the most common method for the extraction of CNCs [41].
Pre-Treatment of Agricultural Waste
LCB does not only consist of cellulose (30-50%), but also hemicellulose (19-45%) and lignin (15-35%) by weight, with the other components including chlorophyll, waxes, ash, and resins [24]. Xu et al. reported that raw corn stover consists of cellulose (44.4 ± 0.4%), hemicellulose (27.8 ± 0.3%), and lignin (19.6 ± 0.2%) [42], while Slavutsky and Bertuzzi reported that sugarcane bagasse consists of cellulose (40.3 ± 1.6%), hemicellulose (21.4 ± 1.6%), and lignin (23.84 ± 0.9%) [43]. Hence, the extraction of CNCs from biomass requires much effort to overcome the crucial pre-treatment stage. It is important to select adequate pretreatment methods to remove the non-cellulosic material (hemicellulose, lignin, ash, etc.) The recently reported pre-treatment methods for the extraction of CNCs are summarized in Figure 3. These methods are usually selected based on the type of feedstock. For instance, Santos et al. prepared CNCs from pineapple leaves, which contained several non-cellulosic materials [33]. The pre-treatment was conducted with a sodium hydroxide aqueous solution of 2% (w/w) to disrupt the hemicellulose and lignin bonds, and a bleaching step with an acetate buffer solution (27 g sodium hydroxide (NaOH) and 75 mL glacial acetic acid, diluted with 1 L of distilled water, and 1.7 wt% sodium chlorite (NaClO 2 ) in water) to remove excess non-cellulosic residue. Jiang and Hsieh used two methods to pre-treat tomato peels before the extraction of CNCs [44]. The first method involved the use of acidified-sodium-chlorite delignification, followed by a highly effective alkali treatment using potassium hydroxide (KOH). An alternative chlorine-free route involving alkaline hydrolysis and peroxide bleaching was also developed for comparison using NaOH and 4% hydrogen peroxide (H 2 O 2 ). In general, several steps are involved in the pre-treatment stages, including washing and cutting the raw materials into small pieces [45]. To cleave the ester linkages and glycosidic side chains of the lignin, leading to disruption, the source is subjected to alkali pre-treatments at specific conditions. Different alkali solutions have been employed for this process, such as KOH and NaOH [46,47]. This is followed by bleaching (delignification), whereby excess non-cellulosic components are eliminated using sodium chlorite and hydrogen peroxide [31,48]. Extra steps are usually required to dewax the source and clean up the chemical residues [26].
Acid Hydrolysis
The extraction of CNCs from cellulosic fibres usually involves an acid-induced disruption process whereby the glycosidic bonds in the amorphous region are cleaved under controlled-reaction conditions [49] as shown in Figure 4. Various strong acids have been employed to degrade bulk cellulose effectively to release CNCs, such as sulphuric acid (H 2 SO 4 ), phosphoric acid (H 3 PO 4 ), hydrochloric acid (HCl), nitric acid (HNO 3 ), and a mixture of mineral and organic acids [50]. Kassab et al. [51] compared the effects of three different acids on the extraction of CNCs from tomato plant residue (H 2 SO 4 , H 3 PO 4 , and HCOOH/HCl) to form sulphated CNCs (S-CNC), phosphorylated CNCs (P-CNC) and carboxylated CNCs (C-CNC). The produced CNCs exhibited high aspect ratios (up to 98) and high crystallinity (up to 89%), and formed stable suspensions in organic solvents compared to previously reported CNCs from other sources. Wang and colleagues [52] attempted to add phosphate groups to CNCs by phosphoric-acid hydrolysis to improve thermal stability and synthesis conditions. Their results showed that the use of phosphoricacid medium to obtain CNCs decreased the degradation temperatures; however, thermal stability was still comparable to CNCs obtained from other biomasses that were treated with H 3 PO 4 and H 2 SO 4 . The acid-hydrolysis treatment with H 2 SO 4 to prepare CNCs has been widely investigated and appears to be used extensively when compared to other acids. This is because H 2 SO 4 has been proven to be effective in the elimination of the amorphous components of cellulosic fibres and produces stable CNC suspensions. Figure 4 illustrates the process flow diagram for the extraction of CNCs using conventional acid hydrolysis. As mentioned earlier, depending on the LCB source, cellulose fibres acquired from the pre-treatment stage are then used as a source for CNCs during this step. H 2 SO 4 hydrolysis introduces sulphate groups to the surface of the extracted CNCs due to the reaction with surface hydroxyl groups of the cellulose through an esterification process, allowing for the formation of anionic sulfate groups [50]. These anionic sulfate groups induce electrostatic repulsion between CNC molecules and promote their dispersion in water [33]. However, the sulfate groups compromise the thermal stability of the CNCs and may contribute to lower yields [33,50]. The thermal stability of the sulfuric-acid-prepared CNCs can be increased by neutralizing the CNCs through dialysis [53]. Overall, the acidhydrolysis method is simple and can be used to extract CNCs from several agricultural residues. Different agricultural residues that have been used for the extraction of CNCs within the past decade are shown in Table 1. The pre-treatment and extraction processes described above do not differ much; however, the source of cellulose plays a huge role in the dimensions of CNCs as well as related properties and overall yield. While this method is extensively used for CNC extraction, it contributes to high chemical waste disposal, as a result, more strategies for effective, fast, low cost, and environmentally friendly procedures are highly desired.
Oxidation
Oxidation is useful in introducing anionic groups to the cellulose molecules; briefly, it can be separated into two steps. The first step is to oxidize the surface hydroxyl group (-OH) of the pre-treated source and remove the amorphous regions [49]. This results in a structure with negatively charged carboxyl groups (-COOH), which can facilitate the dispersion of CNCs in aqueous solutions and allow further modifications on the surface of CNCs [54]. The most common type of oxidation is TEMPO oxidation. TEMPO (1oxo-2,2,6,6-tétraméthylpipyridine 1-oxyle) is a stable radical that selectively mediates the oxidation of primary alcohols into carboxylic acids through an aldehyde intermediate [55]. Usually, TEMPO-mediated oxidation is cooperative with mechanical disintegration and selectively oxidizes C6-primary hydroxyl groups of cellulose to sodium C6-carboxylate groups [56]. Zhang et al. [57] used TEMPO oxidation to prepare carboxylated CNCs from sugarcane bagasse pulp with further assistance of ultrasound. Previous studies have used TEMPO-mediated oxidation to prepare carboxylated CNCs; however, this method consists of several steps as well as multiple radical-generating chemicals (sodium hypochlorite (NaClO), sodium bromide (NaBr), and TEMPO reagents), which limit the sustainability of the approach [58].
Other oxidation agents such as ammonium persulfate (APS), H 2 O 2 , and nitro-oxidation (using HNO 3 and NaNO 2 ) have also been used to prepare CNCs [59][60][61]. Zhang et al. [61] compared the effects of the preparation methods using TEMPO and acid hydrolysis. Lemon seeds were utilized to extract CNCs by H 2 SO 4 (S-LSCNC), APS (A-LSCNC), and TEMPO oxidation (T-LSCNC). The results demonstrated that all CNCs maintained cellulose Iβ structure and had a good dispersion regardless of extraction methods, but the T-LSCNC had a higher yield. This is because TEMPO oxidation is also advantageous due to its ability to produce high oxidized yields of up to 90%. Khoshani et al. [62] prepared carboxylated CNCs through one-step catalyst-assisted H 2 O 2 oxidation. Similar to TEMPO, these two methods require several pre-treatment steps before the extraction of CNCs, while nitro-oxidation decreases the need to consume multiple chemicals, greatly improving the recyclability of the used chemicals [58]. Sharma et al. [59] used one-step nitro-oxidation to prepare carboxylated CNCs from jute fibres, while Chengbo et al. [14] compared the extraction of CNCs using both nitro-oxidation and TEMPO-oxidation from jute fibres. TEMPO oxidation was performed on pre-treated jute, while nitro-oxidation was performed on untreated jute, and both oxidation methods were effective and resulted in carboxylated CNCs with good dispersion and high transparency. The nitro-oxidation extraction process is much less expensive, faster and more environmentally friendly than the acid hydrolysis process. The elimination of the pre-treatment step reduces the amount of chemical waste disposal. This method is also much more sustainable than the TEMPO oxidation process.
Other Methods
Other extraction methods, including but not limited to ionic liquid (ILs) hydrolysis and enzymatic hydrolysis, have been utilized to extract CNCs from agricultural waste [49]. ILs hydrolysis has been highly explored in the field of biomass processing due to its low vapour pressure, low energy consumption, and sustainability [63]. This hydrolysis involves two main steps, the pre-treated cellulose is immersed in an IL for a known period to allow swelling, and water is added to initiate the hydrolysis [49]. During the reaction, the hydrogen and oxygen atoms of amorphous cellulose are easily accessible by the dissociated IL to form the electron donor-electron acceptor. The -OH groups break, leading to the selective removal of the amorphous region [64]. The above-mentioned extraction methods require the use of chemicals, while enzymatic hydrolysis uses cellulolytic enzymes known as cellulases (mixtures of endoglucanases, exoglucanases, and cellobiohydrolases); these are an interesting class of enzymes possessing the ability to act as catalysts for the hydrolysis of the cellulose [65]. These enzymes have specific functionalities that can selectively depolymerize the amorphous region of cellulose to prepare CNCs with high crystallinity [65,66]. This is the most environmentally friendly and low-cost process, as it eliminates the use of toxic chemicals and consumes relatively little energy. However, this process is relatively slow.
The preparation of CNCs through physical/mechanical processes, such as highpressure homogenization, grinding, and steam explosion are discussed. During mechanical treatment, pre-treated cellulose pulp is subjected to high shear force, which helps extract the CNCs along the longitudinal direction [67]. High pressure homogenization has been widely employed for the mass production of CNCs in industries due to its simplicity, high efficiency, lack of a requirement for organic solvents, and production of uniform CNCs with high yield [37]. During the homogenization process, pre-treated cellulose pulp is diluted with water and passed through a tiny gap between the homogenizing valve and an impact ring under high pressure and high velocity at room temperature [67,68]. However, high-pressure homogenization involves several passes, which results in high energy consumption [69]. Other mechanical methods include grinding, whereby the cellulose pulp is passed between static and rotating grinding disks, which can be adjusted to reduce clogging [68]. The above-mentioned mechanical methods require mechanical degradation tools; on the other hand, steam explosion facilitates chemical treatments and improves the removal efficiency of non-cellulosic materials through pressure [70]. Steam explosion is a process which consist of heating biomass using saturated steam with high pressure followed by rapid decompression; in this process, the high pressure results in hydrolysis of the glycosidic and hydrogen bonds to produce CNCs [71].
Agave tequilana and barley
The ground fibres were dispersed in an acid solution (0.2 wt% of acetic acid) of 0.27 wt% of NaClO 2 and 0.7 wt% NaOH kept at 70 • C and stirred for 1.5 h. The sample was then treated with 17.5 wt% NaOH for 30 min.
CNCs from Corncobs
Corn is a staple food in many countries, with about 1 billion tons of global annual production. In South Africa, approximately 16 million tons of corn were produced during the 2019/2020 period [78]. About 80% of the weight of the corn ear is attributed to corncob, and, together with the other components of the corn plant besides the corn kernels, are regarded as corn residue. Common uses of corncob residue include animal bedding, animal feed, and fertilizers. The majority of generated waste is burned in farms and/or landfills. Globally, various applications for corncobs are being developed, such as detergents, adsorbants, bioenergy feedstock, and composites. They are LCB waste with high cellulose content. Table 2 shows a few selected examples of CNC extraction processes using corncobs as a source. It is evident from the presented data that cellulose is present in varying amounts within the corn family and the isolation conditions contribute to the yield and physical properties (where CI* represents crystallinity index). Louis and Venkatachalam [79] demonstrated in their report that NaOH concentration, reaction time, and temperature all affect the yield of cellulose during the pre-treatment stage. The concentration of H 2 SO 4 also affects the physical properties of CNCs, as shown in the table and in many reports in the literature. Recently, Adejumo et al. demonstrated the use of corncob, functionalised corncob, and CNCs as methyl orange dye adsorbants [80]. In their report, under optimum condition, CNCs had a calculated adsorption capacity of 206.67 mg/g, which was about 11.6 times greater than that of the corncob and 3.4 times higher than that of functionalized corncob [80].
Prospective Applications of CNCs
Due to the abundance of biomass waste, various pre-treatment and extraction methods, outstanding and unique nanoscale structure, excellent mechanical properties, thermal stability, biocompatibility, biodegradability, and easy surface modification, CNCs have attracted rapidly growing scientific and technological interest, and have reported application prospects in many fields, such as health care, environmental protection, and chemical engineering [82]. Grishkewich et al. [83] summarized the recent applications of CNCs in biomedical engineering (tissue engineering, drug delivery, biosensors, and biocatalysts), wastewater treatment (adsorbents), energy, and electronics (supercapacitors, conductive films, substrates, sensors, and energy-storage separators). Figure 5 demonstrates the summarized recently reported applications of CNCs. CNCs have also found applications in the monitoring and improvement of food quality. Dhar et al. [84] fabricated a poly (3-hydroxybutyrate) (PHB)/CNCs based nanocomposite films with improved gas-barrier and migration properties for food packaging applications. Peng et al. [85] incorporated CNCs into different food-based systems containing polymers as a thickening agent. The CNCs improved the viscosity enhancement at lower particle loading. Besides these promising applications, CNC-based materials have also been applied in the fabrication of carbon-based nanomaterials. Dhar et al. used CNCs to prepare graphene with tuneable dimensions, while Souza et al. [86] prepared luminescent nanocarbon structures. Magagula et al. prepared luminescent-nitrogen-doped spherical carbons (NCSs) and used them for the detection of Fe 3+ in aqueous solutions [77]. The following section focuses on the recent research regarding the fabrication of CQDs from LCB waste.
Carbon Quantum Dots (CQDs)
Over the past decade, extensive research has been conducted to explore synthesis methods for CQDs. These methods can be categorised into two main approaches known as the top-down and the bottom-up routes ( Figure 6). The top-down method involves breaking down large carbon structures such as coal, activated carbon, graphite, and carbon nanotubes into the desired carbon nanostructures through electrochemical oxidation, acidic oxidation, arc discharge, and laser ablation [87]. The bottom-up route includes the polymerization and carbonization of small molecule precursors, such as citric acid, phenylenediamines, glucose, and aldehydes under a range of different reaction conditions through various chemical methods [88]. Some of these small-molecule precursors are found in varying amounts in biomass waste, making biomass waste an easily available and low-cost precursor. Despite the development of various fabrication strategies, the production of CQDs still requires complicated instrumentation, expensive precursors, and rigorous experimental conditions that present risks to the natural environment and human health. This further contributes to high production costs and commercialization constraints [89]. At present, the development of various fabrication strategies and mass production at low cost from renewable and green sources is of great interest. Figure 7a shows the number of publications with the phrase "carbon dots" for the past decade. From just over one thousand in 2012 to over six thousand in 2021, there is clearly a growing interest in research related to these materials. LCB waste presents carbon (C) sources that are rich in elements such as nitrogen (N), hydrogen (H), and oxygen (O), in addition to C. These sources are renewable, cost-effective, and environmentally benign compared to other carbon sources. The production of CQDs from LCB waste converts low-value biomass waste into valuable and useful materials. Zhou et al. [90] proposed a green synthesis method by utilizing watermelon peel as the carbon precursor for the first time, starting a new trend towards using biomass waste materials for CQD preparation. Following this, researchers have utilized different types of agricultural waste, animal waste, fruit waste and vegetable waste. Figure 7b shows a publication trend for the past decade where the phrase "carbon dots from agricultural waste" was used. There is a clear jump in the amount of research done in this topic in 2021, which is more than twice as much as the number of appearances in previous years. This sudden increase is in line with current government policies on waste management across the world as well as the United Nation's 2030 sustainable development goals.
Due to differences in biomass composition, CQDs derived from different agricultural residues and synthesis techniques show different luminescent properties, size distributions, and quantum yields (QYs) [91]. Conventional methods involved in preparation of agricul-tural waste-based CQDs require complicated equipment, catalysts, several post synthesis purification steps, longer synthesis times, and harsh experimental conditions, which results in expensive production costs [12]. Therefore, exploration of green synthesis methods with fewer synthesis steps, minimum use of toxic chemicals, and reduced synthesis time is necessary. At present, microwave-assisted synthesis is highly desirable due to its simplicity, short synthesis time, low cost, and homogeneous heating [92]. In our recent study, spherical carbons (CSs) were successfully fabricated from corncob via alkaline treatment, acid hydrolysis, and microwave synthesis, and were subsequently applied in the fluorescent detection of Fe 3+ in aqueous solution [77].
Properties of CQDs
CQDs are the most-desired alternative to toxic, heavy-metal based QDs for fluorescencerelated applications due to their high fluorescence stability, environmental friendliness, good biocompatibility, facile synthesis, and low toxicity [10]. These properties strongly depend on several factors, including synthesis technique, chosen precursors, post-synthesis treatments, time and temperature of the synthesis, pH, surface passivation or functionalization, heteroatom doping, and so on [13]. Not only do these factors affect the microstructure of CQDs, but also the optical properties, and QY. In the following section, the physical, chemical, and optical properties of CQDs are discussed in greater detail.
Structural Properties
CQDs are typically nanoparticles smaller than 10 nm, composed of a core-shell structure with sp 2 /sp 3 carbon cores functionalized with polar oxygen groups [11]. The existence of the surface groups of CQDs depends mainly on the type of precursor used in the synthesis. When the used precursor is heteroatom-rich, the surface tends to have modifiedfunctional groups such as carboxyl, amine, carbonyl, and ether groups [93]. The surface functional groups impart CQDs with excellent water solubility and also ease further surface functionalization with various molecules [94]. In addition, the precursor and synthesis methods also determine the composition, morphology, and size distribution of the synthesized CQDs. Various characterization techniques are applied to determine the physical properties and the crystalline structure of the CQDs. These techniques include atomic force microscopy (AFM), high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), and Raman spectroscopy [10]. To investigate their chemical structure, Xray photoelectron spectroscopy (XPS), element analysis, Fourier transform infrared (FTIR), and nuclear magnetic resonance (NMR) are used [95]. Characterization of CQDs is essential for attaining a better understanding of the mechanisms associated with the unique structural properties of the CQDs.
The morphology and size distribution of CQDs can be measured from TEM images, while AFM is used to measure the height information of CQDs. At present, most biomassbased CQDs are usually spherical, with the average particle size less than 10 nm in a state of uniform dispersion [96]. Smaller-sized CQDs have been obtained from eggshell membrane peel [97], pomelo peel [98], and garlic husk [99]. CQDs with larger size distributions have also been obtained from spent tea, with an average size distribution of 11 ± 2.4 nm [89], and from goose feathers with an average size distribution of 21 ± 5 nm [100]. The crystalline properties are determined using HRTEM, XRD, and Raman spectroscopy. HRTEM is used to determine the lattice fringe spacing of the carbon materials, which largely corresponds to the different diffraction planes. Atchudan et al. reported HRTEM imaging of CQDs prepared from banana-peel waste with a lattice spacing of 0.21 nm [101]. The XRD pattern of CQDs generally presents a broad diffraction peak between 2θ values of 20 • to 25 • and lattice spacing between 0.31 and 0.38 nm [102]. An AFM image of pine-wood-based CQDs with an average height of 2.8 nm, corresponding to 5-7 layers of graphene, was reported by Zhao et al. via a 3D morphology presentation [103].
The graphitization/crystallization of CQDs is examined by Raman spectroscopy. Raman spectra of CQDs exhibit two broad peaks at around 1300 cm −1 and 1580 cm −1 (similar to other graphene-derived carbon materials), which are attributed to the D (sp 3 -hybridized) and G (sp 2 -hybridized) bands, respectively [104]. The D band is associated with the vibrations of carbon atoms with dangling bonds in the termination plane of disordered graphite, and the G-band is related to the in-plane vibrations of sp 2 -hybridized carbon. Hence, the intensity ratio of D to G (I D /I G ) is the measure of the defects present on the graphitic structure; a low I D /I G ratio represents that the integrity of the graphitic shells is sufficiently high to protect the core material well from corrosion and oxidation [105]. The surface functional groups and elemental composition of CQDs are examined by FTIR and XPS. FTIR spectroscopy is used to understand the surface functional groups contained on the CQDs. CQDs usually exhibit main characteristic absorption bands of O-H, C-H, C=C, and (C=O). Other LCB-waste-based CQDs may contain nitrogen and sulphur depending on the chemical composition of the precursor [106]. XPS analysis is carried out to delineate the chemical composition and nature of bonding in CQDs. LCB-waste-based CQDs generally contain carbon, oxygen, nitrogen, and sulfur, which can be detected in XPS. XPS can be used to determine the elemental composition and the chemical and electronic states of the contained elements. CQDs usually show three apparent peaks centred around 283, 400, and 530 eV, which are attributed to C 1s , N 1s , and O 1s , respectively [107].
Optical Properties
Due to quantum-confinement effect, the optical properties are the most notable properties of CQDs, irrespective of their microstructure. CQDs possess excellent optical properties, such as wavelength-tuned emissions, which may be affected by surface states, surface passivation, heteroatom doping, and surface defects. This section presents and discusses the optical properties of CQDs.
UV-Absorption Properties
CQDs usually show a strong absorption peak in the UV region and a lower absorption peak in the visible region [102]. UV absorption (around 230-270 nm) is induced by the π-π* transition of C=C and C=N bonds, while visible absorption (around 300-330 nm) is ascribed to n-π* transition of C=C or C=O bonds [91]. The UV-vis spectra of dwarfbanana-peel CQDs showed two absorption peaks appearing at 272 nm and 320 nm [101]. These were attributed to the π-π* transition of C=C bonds and the n-π* transition of C=O bonds in CQDs, respectively [101]. Moreover, the nature of the CQD precursor and surface functional groups can affect the position and intensity of the absorption peaks. Liu et al. [108] prepared CQDs from different agricultural waste materials (cellulose-based CDs (C-CDs), protein-based CDs (P-CDs), peanut shell-based CDs (PS-CDs), cotton stalk-based CDs (CS-CDs), and soymeal-based CDs (S-CDs)). As shown in Figure 8, two absorption peaks at 273 nm and 322 nm were observed for the P-CDs, while only one absorption peak was observed for the rest of the samples (at 281 nm for C-CDs, 278 nm for PS-CDs, 299 nm for CS-CDs, and 328 nm for S-CDs) [108].
Fluorescence Properties
The fluorescence properties of CQDs are quite fascinating and can affect the application of CQDs in different fields. CQDs possess excellent fluorescence properties, including excitation-wavelength-dependent fluorescence, size-dependent fluorescence emission, upconversion luminescence, strong resistance to photobleaching, and good fluorescence stability [109]. The photoluminescence (PL) emission of CQDs occurs when trap states are present in the bandgap (caused by impurities, surface defects, functional groups, and adsorbed molecules). In such cases, the photoexcited electron or hole can be trapped, and the following recombination leads to a radiative emission of energy [110]. The observed CQD PL can be due to the combination of different mechanisms from different sources, the surface state, quantum confinement effect, and molecular state mechanisms [102]. Most of the CQDs reported so far have a common feature of presenting excitation-dependent emission, giving a decrease in the emission signal that is systematically displaced towards longer wavelengths as the excitation wavelength is increased [94,111]. Atchudan et al. reported an excitation-wavelength-dependent fluorescence emission of CQDs that were prepared from kiwifruit-peel waste ( Figure 9) [111]. The intensity peak of the CQDs initially increased from 300 to 360 nm excitation wavelength but decreased from 360 to 460 nm excitation wavelength. A fluorescence-emission red-shift was also observed with increasing excitation wavelength [111]. However, excitation-independent emission of biomass-wastebased CQDs was recently reported by Abbas et al. [89]. Heteroatom doping, surface functionalization, and surface passivation of CQDs are known to induce surface-state mechanisms. Heteroatom doping is a common method in the preparation of CQDs and allows their intrinsic properties to be tuneable and exploited for their desired potential applications. Elements such as N, B, S, and P are used as a dopants to replace carbon atoms in the sp 2 /sp 3 network [109]. Surface functionalization is related to the introduction of functional groups via covalent bonding on the carbon edge planes [112]. Surface passivation involves the coating of passivating reagents such as polyethylene glycol (PEG), amine terminated polyethylene glycols (PEG-1500N), poly(ethylenimide)-co-poly(ethyleneglycol)-co-poly(ethyl-enimide) (PPEI), 4,7,10-trioxa-1,13-tridecanediamine (TTDDA), and polyethyleneimine (PEI) on the surface of the carbon core of CQDs to regulate their surface state [113]. In general, the surface states of CQDs produce a variety of energy levels and lead to various emissive traps [102]. Monday et al. [114] prepared nitrogen-doped CQDs (N-CQDs) from palm-kernel shells using ethylenediamine and Lphenylalanine as dopants. The as-prepared N-CQDs showed fascinating PL properties, with a QY of 13.7% for ethylenediamine doped N-CQDs and 8.6% for L-phenylalanine doped N-CQDs, as well as an excitation-dependent emission wavelength [114]. Chen et al. [115] prepared N, S co-doped NQDs from used garlic, which displayed strong fluorescence with a QY of 13%. N, P co-doped CQDs with a QY as high as 76.5% were synthesized by Dong and colleagues [116]. From the few literature sources that have been quoted here, it is quite clear that uniformity in these types of material does not exist. This justifies the extensive research that is carried out in order to qualify a set of CQDs to a specific application. Further, the differences in the properties of CQDs make them interesting for a wide range of applications.
Potential Applications of CQDs
CQDs have promising application in bioimaging, biosensing, fuel cells, supercapacitors, catalysis, solar cells, lithium-ion batteries, drug delivery, and light-emitting diodes due to their outstanding chemical, physical, and optical properties ( Figure 10) [13]. According to a Web of Science search done by Li et al. [109] in March 2021, 41% of all published papers on CQDs reported their potential application in sensors (the only application discussed in this section). This is due to their strong luminescence properties and sensitivities towards specific metal ions in aqueous environments. CQD-based sensors give rise to a low limitof-detection (LOD) and high sensitivity and selectivity [117]. Wang et al. [118] proposed a CQD-based PL sensor for the first time and demonstrated that the luminescence of CQDs can be quenched selectively by Fe 3+ through a charge-transfer mechanism, starting a new trend towards using CQDs for the detection of heavy-metal ions. Zhao et al. [119] prepared water-soluble, luminescent N-CQDs from chitosan and utilized them for the sensing of Fe 3+ in aqueous solutions. The N-CQDs presented outstanding selectivity and sensitivity and were successfully applied for the quantitative detection of Fe 3+ with a linear detection range of 0-500 µM and an LOD of 0.15 µM. Magagula et al. reported corncob-derived NCSs (which contained a high concentration of CQDs) for the detection of Fe 3+ in aqueous solution [77]. In their report, a linear detection range of 0-500 µM and an LOD of 70 nM was reported for Fe 3+ . The quenching effect of Fe 3+ on NCSs was demonstrated through a gradual increase of Fe 3+ concentration from 5 µM to 3000 µM (Figure 11a). Furthermore, different metal ions were selected to demonstrate the selectivity of NCSs to Fe 3+ , as shown in Figure 11b [77]. N, P co-doped CQDs were adopted as a fluorescent sensor for the effective detection of Fe 3+ in water, with an LOD of 0.1 µM, and the sensor showed a better linear relationship in the range of 0.1∼50 µM [116]. High-luminescence S-CQDs were synthesized from cellulose fibres with a QY of 32%, and these were utilized to detect Fe 3+ in pH 0 solutions and showed excellent selectivity and sensitivity with an LOD of 0.96 µM [120]. Table 3 shows selected articles listing the synthesis conditions, some properties, and reported potential applications of CQDs obtained from biomass waste precursors. This table confirms the versatility of CQDs as well as their inconsistent quantum yield. In addition to Fe 3+ , CQDs can be applied in the sensing of various transition-metal ions such as Hg 2+ , Cu 2+ , and Pb 2+ . Moreover, CQDs have also been applied in other sensing systems such as biosensing and chemical sensing; however, PL sensing is currently the most-reported potential application for these materials.
Conclusions and Outlook
In recent years, LCB waste has been utilized to prepare CNCs with different attributes and applications. This choice of a feedstock is renewable, green, and affordable. Several modifications of the pre-treatment and extraction stages continue to be explored, with the aim of attaining CNCs with desired attributes (including their production at a commercial scale). In addition to the applications of CNCs in pharmaceuticals, medicine, composite materials, energy, and packaging, their application as a precursor in synthesis of carbon nanomaterials is gaining momentum. This is in line with developing new greener methods of material synthesis as well as finding cheaper ways of producing smart materials. CQDs seem to be much easier to fabricate from LCB waste sources when compared to other types of carbon nanomaterials. Several advantages and disadvantages exist in both the fields of CNC extraction and CQD fabrication from LCB waste. The popular chemical treatment method of LCB in order to obtain CNCs is lengthy and requires the continuous use of strong acids in order to completely separate the amorphous content from the cellulose. This is not environmentally friendly. The use of strong acids and bases also implies that large amounts of water will be used to purify the CNCs. Compared to other extraction methods, chemical methods are cheaper; however, process efficiencies should be evaluated.
The application of LCB waste in the fabrication of carbon nanomaterials is a promising field with the potential to transform the agricultural sector as we know it. CQDs are versatile materials with the potential to replace many toxic, heavy-metal-based optoelectronic devices. Based on the current research trends, we can predict that CQDs will be popular in the market in the near future. However, large variations in the properties of CQDs as the result of differences in feedstocks means that CQDs can be optimised for a specific application per batch. Further studies on large-scale synthesis of CQDs are yet to be explored. | 9,140.6 | 2022-04-01T00:00:00.000 | [
"Environmental Science",
"Materials Science"
] |
Numerical methods of Laplace transform inversion in the problem of determination of viscoelastic characteristics of composite materials
When designing products made of composite materials intended for operation in difficult conditions of inhomogeneous deformations and temperatures, it is important to consider the viscoelastic properties of the binder and fillers. The article analyzes the relationship between relaxation and creep characteristics. All known creep and relaxation nuclei in the literature are considered. The problem of transformation of creep characteristics into relaxation characteristics and Vice versa is discussed in detail. There is a simple relationship between the creep and relaxation functions in the Laplace image space. However, when returning to the space of the originals, in many cases there are great difficulties in reversing the Laplace transform. Two numerical methods for inverting the Laplace transform are considered: the use of the Fourier series in sine and the method of quadrature formulas. Algorithms and computer programs for realization of these methods are made. It is shown that the operating time of a computer program implementing the Fourier method by sine is almost 2 times less than the operating time of a computer program implementing the quadrature formula method. However, the first method is inferior to the latter method in accuracy of calculations: the relaxation functions and relaxation rates, it is advisable to find the first method, since the computational error is almost indiscernible, and the functions of creep and creep speed, the second way, because for most functions, the result obtained by the second method is much more accurate than the result obtained by the first method.
Introduction
For isotropic viscoelastic materials the defining relations between stress and strain tensors are written as follows where ( ) G t and ( ) J t are relaxation and creep functions respectively [1][2][3][4][5][6][7][8][9][10][11]. Function ( ) G t describes the change in stress over time under constant strain. This process is called stress relaxation [12]. Experiments shows that in this process the stress decreases with time, i.e. the function ( ) G t is decreasing and therefore ( ) 0 dG t dt < .
(2) J t describes the change in strain over time at constant stress. This process is called creep deformation [12]. Experiments shows that in the process of creep at constant stress, the deformation increases, i.e. the function ( ) J t is increasing and therefore ( ) 0 dJ t dt > . (3)
(4) In the theory of viscoelasticity, there is no such simple connection between relaxation and creep functions. We write the formulas (1) in the image space by Laplace, using the convolution theorem From these formulas follows In the Laplace transform theory, the following limit relations take place [2][3] This means that relations of type (4) in the theory of viscoelasticity take place only in two limiting cases: by 0 t → + and t → ∞ .
Analysis of relaxation and creep kernels
Conveniently, the relaxation function ( ) G t and creep function ( ) J t to represent in a dimensionless form. For this we denote where and fucntions t ψ ( ) and g t ( ) are dimensionless, function t ψ ( ) called the relaxation kernel and the function g t ( ) called the creep kernel. By virtue of (4) and (7) In the Laplace image space between images of dimensionless functions t ψ ( ) and g t ( ) there is a relation of type (6), i.e.
Maxwell kernels
Maxwell kernels are the simplest of all known kernels in the literature. Table 1 shows Maxwell functions and their images. Table 2 shows Abel functions and their images. where 0 1 β < ≤ . Note that when 1 β = than Abel kernels transformed into Maxwell kernels. Table 3 shows Rabotnov functions and their images.
Note that when 1 β = than Rzhanitsyn kernels transformed into Maxwell kernels. Table 5 shows Kohlrausch functions and their images.
Kohlrausch kernels
Note that when 0 β = than Kohlrausch kernels transformed into Maxwell kernels. Table 6 shows Gavrillac-Negami functions and their images.
The absence of dimension in all the above functions is due to the parameter τ , i.e. the relaxation time. The relaxation time is the period of time during which the amplitude value of the disturbance in the unbalanced physical system decreases by a factor of e times ( e − the base of the natural logarithm) [11][12][13][14][15][16][17][18][19].
Numerical methods for inverting the Laplace transform
In most cases, finding the original function as an analytical function is impossible or, from a practical point of view, impractical. That is why approximate and numerical methods for inverting the Laplace transform have been developed. The next two methods will be considered [20].
Using Fourier series on sine (1st method)
The method is taken from [20], pp.52-54. Using it, the original function can be written as follows:
Method of quadrature formulas with equal coefficients (2nd method)
The method is taken from [20], pp.121-124. According to this method, the original function will take the following form: | 1,176 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
A Neutron Star is born
A neutron star was first detected as a pulsar in 1967. It is one of the most mysterious compact objects in the universe, with a radius of the order of 10 km and masses that can reach two solar masses. In fact, neutron stars are star remnants, a kind of stellar zombies (they die, but do not disappear). In the last decades, astronomical observations yielded various contraints for the neutron star masses and finally, in 2017, a gravitational wave was detected (GW170817). Its source was identified as the merger of two neutron stars coming from NGC 4993, a galaxy 140 million light years away from us. The very same event was detected in $\gamma$-ray, x-ray, UV, IR, radio frequency and even in the optical region of the electromagnetic spectrum, starting the new era of multi-messenger astronomy. To understand and describe neutron stars, an appropriate equation of state that satisfies bulk nuclear matter properties is necessary. GW170817 detection contributed with extra constraints to determine it. On the other hand, magnetars are the same sort of compact objects, but bearing much stronger magnetic fields that can reach up to 10$^{15}$ G on the surface as compared with the usual 10$^{12}$ G present in ordinary pulsars. While the description of ordinary pulsars is not completely established, describing magnetars poses extra challenges. In this paper, I give an overview on the history of neutron stars and on the development of nuclear models and show how the description of the tiny world of the nuclear physics can help the understanding of the cosmos, especially of the neutron stars.
A neutron star was first detected as a pulsar in 1967. It is one of the most mysterious compact objects in the universe, with a radius of the order of 10 km and masses that can reach two solar masses. In fact, neutron stars are star remnants, a kind of stellar zombies (they die, but do not disappear). In the last decades, astronomical observations yielded various contraints for the neutron star masses and finally, in 2017, a gravitational wave was detected (GW170817). Its source was identified as the merger of two neutron stars coming from NGC 4993, a galaxy 140 million light years away from us. The very same event was detected in γ-ray, x-ray, UV, IR, radio frequency and even in the optical region of the electromagnetic spectrum, starting the new era of multi-messenger astronomy. To understand and describe neutron stars, an appropriate equation of state that satisfies bulk nuclear matter properties is necessary. GW170817 detection contributed with extra constraints to determine it. On the other hand, magnetars are the same sort of compact objects, but bearing much stronger magnetic fields that can reach up to 10 15 G on the surface as compared with the usual 10 12 G present in ordinary pulsars. While the description of ordinary pulsars is not completely established, describing magnetars poses extra challenges. In this paper, I give an overview on the history of neutron stars and on the development of nuclear models and show how the description of the tiny world of the nuclear physics can help the understanding of the cosmos, especially of the neutron stars.
I. INTRODUCTION
Two of the known existing interactions that determine all the conditions of our Universe are of nuclear origin: the strong and the weak nuclear forces. It is not possible to talk about neutron stars without understanding them, and specially the strong nuclear interaction, which is well described by the Quantum Chromodynamics (QCD). But notice: a good description through a Lagrangian density does not mean that the solutions are known for all possible systems subject to the strong nuclear force.
Based on the discovery of asymptotic freedom [1], which predicts that strongly interacting matter undergoes a phase transition from hadrons to the quark-gluon plasma (QGP) and on the possibility that a QGP could be formed in heavy-ion collisions, the QCD phase diagram has been slowly depicted. While asymptotic freedom is expected to take place at both high temperatures, as in the early universe and high densities, as in neutron star interiors, heavy-ion collisions can be experimentally tested with different energies at still relatively low densities, but generally quite high temperatures. If one examines the QCD phase diagram, shown in Fig. 1, it is possible to see that the nuclei occupy a small part of the diagram, at low densities and low temperatures for different asymmetries. One should notice the temperature log scale, chosen to emphasise the region where nuclei exist. Neutron stars, on the other hand, are compact objects with a density that can reach 10 times the nuclear saturation density, discussed later on along this paper. While heavy ion collisions probe experimentally some regions of the diagram, lattice QCD (LQCD) calculations explain only the low density region close to zero baryonic chemical potential. Hence, we rely on effective models to FIG. 1. QCD phase diagram. On the left of the transition region stands hadronic matter and on the right side, the quark gluon plasma. Quarkyonic phases represent a region where chiral symmetry has been restored but matter is still confined. Figure taken and adapted from [6]. advance our understanding and they are the main subject of this paper.
Since the beginning of the last century, many nuclear models have been proposed. In Section II A, the first models are mentioned and the notion of nuclear matter discussed. The formalisms that followed, either nonrelativistic Skyrme-type models [2] or relativistic ones that gave rise to the quantum hadrodynamics model were based on some basic features described by the early mod-els, the liquid drop model [3] and the semi-empirical mass formula [4]. Once the nuclear physics is established, the very idea of a neutron star can be tackled. However, it is very important to have in mind the model extrapolations that may be necessary when one moves from the nuclei region shown in Fig. 1 to the neutron star (NS) region. A simple treatment of the relation between these two regions and the construction of the QCD phase transition line can be seen in [5].
The exact constitution of these compact objects, also commonly named pulsars due to their precise rotation period, is still unknown and all the information we have depends on the confrontation of theory with astrophysical observations. As the density increases towards their center, it is believed that there is an outer crust, an inner crust, an outer core and an inner core. The inner core constitution is the most controversial: it can be matter composed of deconfined quarks or perhaps a mixed phase of hadrons and quarks. I will try to comment and describe every one of the possible layers inside a NS along this text.
NASA's Neutron Star Interior Composition Explorer (NICER), an x-ray telescope [7] launched in 2017, has already sent some news [8]: by monitoring the x-ray emission of gas surrounding the heaviest known pulsar, PSR J0740+6620 with a mass of 2.08 ± 0.07, it has measured its size and it is larger than previously expected, a diameter of around 25 to 27 km, with consequences on the possible composition of the NS core.
In this paper, I present a comprehensive review of the main nuclear physics properties that should be satisfied by equations of states aimed to describe nuclear matter, the consequences arising from the extrapolation necessary to describe objects with much higher densities as neutron stars and how they can be tuned according to observational constraints. At the end, a short discussion on quark and hybrid stars is presented and the existence of magnetars is rapidly outlined. Not all important aspects related to neutron stars are treated in the present work, rotation being the most important one that is disregarded, but the interested reader can certainly use it as an initiation to the physics of these compact objects.
II. HISTORICAL PERSPECTIVES
I divide this section, which concentrates all necessary information for the development of the physics of neutron stars, in two parts. In the first one, I discuss the development of the nuclear physics models based on known experimental properties and introduce the very simple Fermi gas model, whose calculation is later used in more realistic relativistic models. The second part is devoted to the history of compact objects from the astrophysical point of view.
A. From the nuclear physics point of view The history of nuclear physics modelling started with two very simple models: the liquid drop model, introduced in 1929 [3] and the semi-empirical mass formula, proposed in 1935 by Bethe and Weizsäcker [4]. The liquid drop model idea came from the observation that the nucleus has behavior and properties that resemble the ones of an incompressible fluid, such as: a) the nucleus has low compressibility due to its internal almost constant density; b) it presents well defined surface; c) the nucleus radius varies with the number of nucleons as R = R 0 A 1/3 , where R 0 1.2 × 10 −15 m and d) the nuclear force is isospin independent and saturates.
Typical nuclear density profiles are shown in Fig. 2, from where one can observe some of the features mentioned above, i.e., the density is almost constant up to a certain point and then it drops rapidly close to the surface, determining the nucleus radius. The mean square radius is usually defined as where ρ p is the number density of protons and ρ n the number density of neutrons.
A nucleus with an equal number of protons and neutrons has a slightly larger proton radius because they repel each other due to the Coulomb interaction. A nucleus with more neutrons than protons (as most of the stable ones) has a larger neutron radius than its proton counterpart and the small difference between both radii is known as neutron skin thickness, given by [9][10][11][12]: For the last two decades, a precise measurement of both charge and neutron radii of the 208 Pb nucleus has been tried at the parity radius experiment (PREX) at the Jefferson National Accelerator Facility [13] using polarised electron scattering. The latest experimental results [12] point to θ = 0.283±0.071 fm and to the interior baryon density ρ 0 = 0.1480 ± 0.0036(exp) ± 0.0013(theo) fm −3 . These quantities have shown to be important for the understanding of some of the properties of the neutron star. I will go back to this discussion later on.
The binding energy B of a nucleus A Z X N is given by the difference between its mass and the mass of its constituents (Z protons and N neutrons): where m( A Z X) is the mass of the chemical element A Z X and is given in atomic mass units. The binding energy FIG. 2. Schematic representation of the nuclear densities with equal number of protons and neutrons (top) and a larger number of neutrons than protons (bottom). The proton and neutron densities depend on the number of nucleons so that heavier elements present larger densities. Typical theoretical densities for 208 Pb are of the order of 0.09 fm −3 for neutrons and 0.06 -0.07 fm −3 for protons [10].
per nucleon B
A is shown in Fig. 3, from where it is seen that the curve is relatively constant and of the order of 8.5 MeV except for light nuclei. The semi-empirical mass formula, which is a parameter dependent expression was used to fit the experimental results successfully and it reads: (4) In this equation, from left to right, the quantities refer to a volume term, a surface term, a Coulomb term, an energy symmetry term and a pairing interaction term [14], [15]. Of course, with so many parameters, other parameterisations can be obtained from the fitting of the data. One possible set is a v = 15.68 MeV, a s = 18.56 MeV, a c × e 2 = 0.72 MeV, a i = 18.1 MeV and even − even nuclei, 0, even − odd nuclei, Although quite naive, these two models combined can explain many important nuclear physics properties, as nuclear fission, for instance [15]. Parameter dependent nuclear models can also explain the fusion of the elements in the stars and the primordial nucleosynthesis with the abundance of chemical elements in the observable universe, which is roughly the following: 71% is Hydrogen, 27% is Helium, 1.8% are Carbon to Neon elements, 0.2% are Neon to Titanium, 0.02% is Lead and only 0.0001% are elements with atomic number larger than 60. By observing Fig. 3, one easily identifies the element with the largest binding energy, 56 F e. Hence, it is possible to explain why elements with atomic numbers A ≤ 56 are synthesised in the stars by nuclear fusion that are exothermic reactions and heavier elements are expected to be synthesised in other astrophysical processes, such as supernova explosions and more recently also simulated in the mergers of compact objects. For Notice that the temperature increases from right to left. The yellow, orange and red big dots on the right top represent red giants, the blueish sequence on the bottom left represents white dwarfs and the central line is the main sequence, where the red stars are red dwarfs and the blue ones are blue giants. a simplistic and naive, but didactic idea of the stellar fusion chains, I show the possible synthesised chemical elements in Fig. 4.
After the star is born, it takes sometime fusing all the chemical elements in its interior, until its death, which is more or less spectacular depending on its mass. One of the most useful diagrams in the study of stellar evolution is the Hertzsprung and Russel (HR) diagram [16], developed by Ejnar Hertzsprung and Norris Russel independently in the early 1900s. According to the HR diagram, displayed in Fig. 5, the star spends most of its life time in the central line of the diagram, the main sequence. Our Sun will become a white dwarf after its death, the kind of objects shown at lower luminosities and higher temperatures, towards the left corner of the diagram. More massive stars, with masses higher than 8 solar masses (M ) become either a neutron star or a black hole and these compact objects are not shown in the HR Diagram since they do not emit visible light waves. Moreover, neutron stars were only detected much later, as discussed in Section II B. For a better comprehension of the HR diagram, please refer to [17].
The main idea underlying nuclear models is to satisfy experimental values and nuclear properties and to achieve this purpose, in almost one century of research, they became more and more sophisticated. The most important of these properties are the binding energy, the saturation density, the symmetry energy, its derivatives and the incompressibility, all of them already explored in the semi-empirical mass formula given in Eq. (4). An important question to be answered is what happens when one moves to higher densities or to finite temperature in the QCD phase diagram shown in Fig. 1 ?
To better understand this point, let's discuss the con-cept of nuclear matter. This is a common denomination for an infinite matter characterised by properties of a symmetric nucleus in its ground state, and without the effects of the Coulomb interaction. If one divides eq.(4) by the number of nucleons A, one can see that under the conditions just mentioned, the third and forth term disappear. If one assumes an infinite radius, A → ∞ and no surface effects exist. The pairing interaction would be an unnecessary correction. Hence, the binding energy per nucleon becomes approximately which is what one gets for a two-nucleon system if compared with the average value shown in Fig.3. However, the deuteron binding energy is much smaller, around only 2 MeV. This means that nuclear matter is not an appropriate concept if one wants to describe the properties of a specific nucleus, but it is rather useful to study, for instance, the interior of a neutron star. Normally, it is described by an equation of state, which consists of a set of correlated equations, like pressure, energy and density. The equation of state that describes the ground state of nuclear matter is calculated at zero temperature and is a function of the proton and the neutron densities, which are the same in symmetric matter. Useful definitions are the proton fraction y p = ρp ρ and the asymmetry δ = ρn−ρp ρ , which are respectively 0.5 and 0 in the case of symmetric nuclear matter. In these equations ρ = ρ p + ρ n is the total nuclear (or baryonic) density. The macroscopic nuclear energy can be obtained from the microscopic equation of state if one assumes that where E(ρ, δ) is the energy density. Thus, where m n = 939 MeV is the neutron mass (c = 1). The binding energy as a function of the density is shown in Figure 6. We will see how it can be obtained later in the text. The minimum corresponds to what is generally called saturation density and the value inferred from experiments ranges between ρ 0 = 0.148 − 0.17 fm −3 , as mentioned earlier when the PREX results were given. The pressure can be easily obtained from thermodynamics or, dividing by the volume, where T is the temperature, S is the entropy density, µ is the chemical potential and Ω the thermodynamical potential. When we take T = 0, the expression becomes even simpler because the term T S vanishes.
To demonstrate how a simple equation of state (EOS) can be obtained from a relativistic model, we use the free Fermi gas and assume that = c = 1, known as natural units. Within this model, the fermions can be either neutrons or nucleons, but I would like to emphasise that it is not adequate to describe nuclear matter properties, as will be obvious later. Its Lagrangian density reads: From the Euler-Lagrange equations the Dirac equation is obtained: Its well known solution has the form ψ = Ψ(k, λ)e i(k· r−E(k)t) , where Ψ(k, λ) is a four-component spinor and λ labels the spin. The energy can be calculated from where α = γ 0 γ or Moreover, one gets where f ± represents the Fermi-Dirac distribution for particles and antiparticles [18] . For T = 0, f + is simply the step function and there are no antiparticles in the system. In this case, with k F being the Fermi momentum and γ the degeneracy of the particle. If one considers only a gas of neutrons, the degeneracy is 2 due to the spin degeneracy. However, if one considers a gas of nucleons, i.e., symmetric matter with the same amount of protons and neutrons, it is 4 because it accounts for the isospin degeneracy as well.
One can then write For T = 0, it becomes As we still do not know the value of the chemical potential in eq.(10), the pressure can be obtained from the energy momentum tensor: having in mind that and is given by From eq.(18) one can write and finally and for T = 0, The entropy density of a free Fermi gas is given by By minimising eq.(10), the distribution functions are obtained: and On the other hand, the minimisation of the thermodynamical potential with respect to the density yields the chemical potential, i.e., For T = 0, In order to go back to the discussion of nuclear matter, we are lacking exactly the nuclear interaction and its introduction will be seen in section III.
B. From the compact objects point of view I have already discussed the evolution process of a star while it remains in the main sequence of Fig.5. When the fusion ends, it is believed that one of the possible remnants is a neutron star. We see next how it was first predicted and then observed.
In fact, the history of neutron stars started with the observation of a white dwarf and its description with a degenerate free Fermi gas equation of state, as the one just introduced, but with the fermions being electrons instead of neutrons. In 1844, Frederich Bessel observed a very bright star that described an elliptical orbit [19], known as Sirius. He proposed that Sirius was part of a binary system, whose companion was not possible to see. In 1862, it was observed by Alvan Clark Jr. This companion, named Sirius B, had a luminosity many orders of magnitude lower than Sirius, but approximately the same mass, of the order of the solar mass (1 M ). In 1914 Walter Adams concluded, through spectroscopy studies that the temperature at the surface of both stars should be similar, but the density of one of them should be much higher than the density of the companion. This high density star was called white dwarf and its properties were explained only in 1926 by Ralph Fowler [20], with the help of quantum mechanics. He claimed that the internal constituents of the white dwarf should be responsible for a degeneracy pressure that would compensate the gravitational force. This hypothesis was possible since the electrons are fermions and hence, obey the Pauli Principle. In 1930, Subrahmanyan Chandrasekhar calculated the maximum densities of a white dwarf [21] and subsequently its maximum mass [22] that he thought should be 0.91 M due to an incorrect value of the atomic mass to charge number ratio. It is interesting to note that the correct Chandrasekhar limit, 1.44 M , was actually obtained by Landau [23].
Concomitantly, Lev Landau reached the same conclusion as Chandrasekhar and went further: he proposed that even denser objects could exist and in this case, the atomic nuclei would overlap and the star would become a gigantic nucleus [24]. Landau's hypothesis is considered the first forecast of a neutron star, although the neutrons had not been detected yet. Landau's paper was written in the beginning of 1931 but published one year later, just when the neutron was discovered by James Chadwick [25]. The first explicit proposition of the existence of neutron stars was made by Baade and Zwick [26], soon after Chadwick's discovery.
In 1939, Toman and, independently Oppenheimer and Volkoff (TOV) [27], used special and general relativity to correct Newton's equations that described the properties of a perfect isotropic fluid, which they considered could be the interior of compact objects (white dwarfs and neutron stars). While Tolman proposed 8 different solutions for the system of equations, Oppenheimer and Volkoff used the equation of state of a free neutron gas (exactly the one introduced in the last section) and ob-tained a maximum mass of 0.7 M for the neutron star, what was very disappointing because it was lower than the Chandrasekhar limit. However, soon, the limitations of this EOS were noticed: the inclusion of the nuclear interaction could make it harder and then generate higher masses. These calculations will be shown in the next section.
In 1940, Mario Shenberg and George Gamow proposed the Urca process [28], responsible for cooling down the stars by emitting neutrinos, which can carry a large amount of energy with very little interaction.
In 1967, the first neutron star (NS) was detected by Jocelyn Bell and Anthony Hewish [29]. At first, they believed they were capturing signals from an extraterrestrial civilisation and the booklet The Little Green Men really existed. But they soon realised that the radio signals were coming from a compact object with a very stable frequency (pulse) and the object was called pulsar.
It is worth pointing out that white dwarfs and neutron stars bear very different internal constituents and densities. Neutron stars are much denser. This means that general relativity is a very important component in the study of NS, but this is not true for white dwarfs. Hence, it would be expected that only relativistic models, as the ones introduced in the present text, could be used to describe neutron star macroscopic properties. However, there are non-relativistic models, known as Skyrme models which can be used to describe NS, as far as they do not violate causality. Moreover, some non-relativistic models lead to symmetry energies that decrease too much after three times saturation density, which is a very serious problem if we want to apply them to the study of neutron stars, highly asymmetric systems. These problems can be cured with the inclusion of three-body forces, what makes the calculations much more complicated. For a review on Skyrme models, please see ref. [2]. On the other hand, relativistic models are generally causal and always Lorentz invariant and when extended to finite temperature, anti-particles appear naturally. Thus, only relativistic models are discussed in the present work.
Let's go back to history because it continues. In 1974 Russel Hulse and Joseph Taylor identified the first binary pulsar PSR1913+16 [30] with a radio-telescope in Arecibo and proposed that the system was losing energy in the form of gravitational waves (GW), the same kind of waves foreseen by relativistic theories. Note that they did not detect gravitational waves directly but instead proved its existence via pulsar timing and were laureated with the Nobel prize for this discovery. In 2015, the first GW produced by two colliding black holes was finally detected directly by LIGO [31] and in 2017, GW170817, produced by the merger of two NS [32] initiated the era of multi-messenger astronomy [33]. These gravitational waves have become an excellent source of constraints to the EOS used to describe neutron stars, as will be discussed in a future section.
III. RELATIVISTIC MODELS FOR ASTROPHYSICAL STUDIES
In Section II A the EOS of a free Fermi gas was introduced and in Section II B, I mentioned that the EOS can satisfactorily describe a white dwarf, as shown by Chandrasekhar if the free Fermi gas is a gas of electrons. However, if the fermions are neutrons, it cannot describe neutron stars. One important ingredient, besides the already mentioned relativistic effects, is still missing in the recipe: the nuclear interaction. So, let's go back to nuclear matter.
This model, also known as Walecka model [34] or quantum hadrodynamics (QHD-1) is based on the fact that the interaction inside the nucleus has two contributions: an attractive contribution, at large distances and a repulsive one at short distances, and both can be reasonably well described by Yukawa type potentials and represented by fields generated respectively by scalar and vector mesons. This idea was first proposed by Hans Peter Durr in his Ph.D. thesis in 1956, supervised by Edward Teller, who, in 1955, also proposed a version of the model based on classical field theory [35]. But the quantum version proposed by Walecka was the one that really gained popularity and up to now it is largely applied with different versions and extensions. This simplified model does not take pions into account because, as will be seen next, it is usually solved in a mean field approximation and in this case, the pion contribution disappears. As the σ − ω model is a relativistic model, this simpler and more common approximation is always known as relativistic Mean Field Theory (RMF) or relativistic Hartree approximation.
As the name suggests, the σ − ω model considers that the central effective potential for the nucleon-nucleon interaction is given by where r is the modulus of the vector that defines the relative distance between two nucleons, the two constants g σ and g ω are adjusted to reproduce the nucleonnucleon interaction and the meson masses are respectively m σ = 550 MeV and m ω = 783 MeV. The interested reader, can look at the potential V (r) obtained with the coupling constants and masses used in this section in [15]. To obtain the binding energy that corresponds to this potential in RMF, a Lorentz invariant Lagrangian density is necessary and it reads: where ψ represents the baryonic field (nucleons), σ and ω µ represent the fields associated with the scalar and vector mesons and M is the nucleon mass, generally taken as 939 MeV. By comparing eqs. (11) and (33), one can see that besides the Fermi gas representing the nucleons, the latter contains two interaction terms and kinetic and mass terms for both mesons. The usual prescription is to use the Euler-Lagrange equations (12) for each field to obtain the equations of motion. They read: and Note that eq. (35) is a Klein-Gordon equation with a scalar source, eq. (36) is analogous to quantum eletrodynamics with a conserved baryonic current (ψγ ν ψ), instead of the electromagnetic current and eq.(37) is a Dirac equation for an interacting (not free) gas.
In a RMF approximation, the meson fields are replaced by their expectation values that behave as classical fields: and The equations of motion can then be easily solved and they read: and where ρ s is a scalar density and ρ is a baryonic number density. The Dirac equation becomes simply is the effective mass. To obtain the EOS, the recipe is the same as already shown for the free Fermi gas, which leads to expressions for the energy density and pressure. Assuming Other important quantities directly related with nuclear matter EOS are the symmetry energy, its derivatives and the incompressibility. The symmetry energy is roughly the necessary energy to transform symmetric matter into a pure neutron matter, as shown in Fig. 7, i.e., Its value can be inferred from experiments and it is of the order of 30-35 MeV and it can be written as It is common to expand the symmetry energy around the saturation density in a Taylor series as where J is the symmetry energy at the saturation point and L 0 and K sym represent respectively its slope and curvature: and Experimental data for the symmetry energy can be inferred from heavy-ion collisions, giant monopole (GMR) and giant dipole (GDR) resonances, pygmy dipole resonances and isobaric analog states. Accepted values for the slope until very recently lied in between 30.6 and 86.8 MeV [36,37] and for the curvature, in between -400 and 100 MeV [38,39]. These two quantities are correlated with macroscopic properties of neutron stars, as will be seen later on along this manuscript. Based on 28 experimental and observational data, restricted bands for the values of J (25 < J < 35 MeV) and L 0 (25 < L 0 < 115 MeV) were given in [40] . More recently, results obtained by the PREX2 experiment [12] point to a different band, given by L 0 = 106 ± 37 MeV [41]. If confirmed, this result rehabilitates many of the already ruled out EOS and points to a much larger than previously expected neutron star radius, also discussed later on in the present paper.
Another important quantity is the incompressibility, already mentioned when the liquid drop model idea was introduced. It is a measure of the stiffness of the EOS, i.e., defines how much pressure a system can support and it is calculated from the relation and ranges between 190 and 270 MeV [36,37]. These values can be inferred from both theory and experiments. I will go back to the importance of these nuclear matter bulk properties and their connection with neutron star properties later on.
B. Extended relativistic hadronic models I next present one example of a complete Lagrangian density that describes baryons interacting among each other by exchanging scalar-isoscalar (σ) , vector-isoscalar (ω), vector-isovector (ρ) and scalar-isovector (δ) mesons: where [42] and In this Lagrangian density, L nm represents the kinetic part of the nucleons plus the terms standing for the interaction between them and mesons σ, δ, ω, and ρ. The term L j represents the free and self-interacting terms of the meson j, where j = σ, δ, ω, and ρ. The σ selfinteraction terms were the first ones to be introduced [43] to correct some of the values of the nuclear bulk properties. The last term, L σωρ , accounts for crossing interac-tions between the meson fields. The antisymmetric field tensors F µν and B µν are given by The nucleon mass is M and the meson masses are m j .
In a mean field approximation, the meson fields are treated as classical fields and the equations of motion are obtained via Euler-Lagrange equations. Translational and rotational invariance are assumed. The equations of motion are then solved self-consistently and the energy momentum tensor, Eq. (21) is used in the calculation of the EOS. The calculations follow the steps shown in Section II A and III A. The interested reader can also check them, for instance, in [34] and [36]. Nevertheless, some of the important steps are mentioned in what follows. Within a RMF approximation, the common substitution mentioned below is again performed: and the equations of motion read: and where with with τ 3 = 1 and −1 for protons and neutrons respectively and γ = 2 to account for the spin degeneracy. The proton and neutron effective masses read: Due to translational and rotational invariance, only the zero components of quadrivectors remain. From the energy-momentum tensor, the following expressions are obtained: with and with C. Too many relativistic models In [36], a large amount of relativistic models were confronted with two sets of nuclear bulk properties, one more and one less restrictive. The interested reader should check the chosen ranges of properties in both sets and the respective values of 363 models. In what follows, I will restrict myself to 3 parameter sets: NL3 [44], NL3ωρ [45], which is an extension of the NL3 parameter set with the introduction of a vector-isovector interaction and IUFSU [46]. These models are chosen because they are frequently used in various applications in the literature. Moreover, NL3ωρ and IUFSU satisfy all nuclear matter bulk properties, but it will be seen along the text that recent astrophysical observations are not completely satisfied by them. The inclusion of NL3 and its comparison with NL3ωρ help the understanding of the importance of the ω − ρ interaction. Other parameter sets shown along the next sections are GM1 [47], GM3 [48], TM1 [49] and FSUGZ03 [50]. All of them are contemplated in [36] and the interested reader can check their successes and failures in satisfying the main nuclear bulk properties. Notice that none of the parameter sets explicitly mentioned in the present work includes the δ meson, which distinguishes protons and neutrons, and consequently, the effective masses given in eq.(71) are identical. The mesonic crossing terms weighted by the parameters α 1 , α 1 , α 2 , α 2 are not included either. In Table I, the parameter values for the three parametrisations mostly used are presented and in Table II, their main nuclear properties are shown.
In Figure 8 top, I plot the binding energy per nucleon for the three parameter sets and one can clearly see the slightly different saturation densities and binding energy values. Notice that the ω − ρ channel does not influence the binding energy of symmetric nuclear matter, but plays an important role in asymmetric matter. In Figure 8 bottom the symmetry energy is depicted and it is easy to see that they are very similar at sub-saturation densities, but completely different at larger densities. As a consequence of what is seen in Figures 8, the incompressibility, the slope and the curvature of the three models are different, as shown in Table II.
IV. STELLAR MATTER
The idea of this section is to show how the relativistic models presented so far can be applied to describe stellar matter and, in this case, we refer specifically to neutron stars. Looking back at the QCD phase diagram presented in the Introduction, one can see that neutron stars have internal densities that are 6 to 10 times higher than the nuclear saturation density and their temperature is low. Actually, if we compare their thermal energy with the Fermi energy of the system, the assumption of zero temperature is indeed reasonable. At these very high densities the onset of hyperons is expected because their appearance is energetically favourable as compared with the inclusion of more nucleons in the system. To deal with this fact, the first term in the Lagrangian density of eq. (51) has to be modified to take into account, at least, the eight lightest baryons and it becomes: The meson-baryon coupling constants are given by where g j is the coupling of the meson with the nucleon and χ jB is a value obtained according to symmetry groups or by satisfying hyperon potential values. These are important quantities when hyperons are included in the system [48,51]. We come back to the discussion of these quantities below. If we perform once again a RMF approximation and use the Euler-Lagrange equations to obtain the equations of motion, we find: where ρ sB is the scalar density and ρ B is the baryon B density, given by: where k f B is the Fermi momentum of baryon B. The terms E p kin and E n kin that appear in eq. (72), must now be substituted by and Whenever stellar matter is considered, β-equilibrium and charge neutrality conditions have to be imposed and hence, the inclusion of leptons (generally electrons and muons) is necessary. These conditions read: where µ B and q B are the chemical potential and the electrical charge of the baryons, q l is the electrical charge of the leptons, ρ B and ρ l are the number densities of the baryons and leptons.
After the supernova explosion, the remnant is, at first, a protoneutron star. Before deleptonisation takes place, neutrinos are also present in the system and in this case, the chemical stability condition becomes In this process, entropy is usually fixed at values compatible with simulations of neutron star cooling and the lepton fractions reach values of the order of 0.3-0.4. This scenario is not considered in the present paper, but examples of this calculation can be seen in [52].
To satisfy the above conditions of chemical equilibrium and charge neutrality, leptons have to be incorporated in the system and this is done with the introduction of a free Fermi gas, i.e., where the sum runs over the electron and the muon and their eigenenergies are so that their energy density becomes The total pression of the system can be either obtained separately for its baryonic and leptonic parts as in the previous section or by thermodynamics: where f stands for all fermions in the system and it is common to define the particle fraction (including leptons) as Y f = ρ f ρ . As already mentioned, an important point is how to fix the meson-hyperon coupling constants g iB , i = σ, ω, ρ. There are two methods generally used in the literature.
The first one is phenomenological and is based on the fitting of the hyperon potentials [48]: which, unfortunately, are not completely established. The only well known potential is the Λ potential depth U Λ = −28 MeV [47]. Common values for the Σ and Ξ potentials are U Σ = +30 MeV and U Ξ = −18 MeV [53,54], but their real values remain uncertain. According to [47], appropriate values for the meson hyperon coupling constants defined in eq. (77) are obtained if χ Bσ = 0.7 and χ Bω = χ Bρ is given by 0.772 for NL3 and 0.783 if another common parametrisation, the GM1, is used. However, in these cases, the value of χ Bρ remains completely arbitrary. We have mentioned GM1 here because it is very often used in the description of neutron star matter since it was one of the first parameter sets with a high effective mass at the saturation density (M * /M = 0.7) as compared with 0.6 given by NL3, for instance (see Table II). This high effective mass helps the convergence of the codes when the hyperons are introduced because eq. (84) accounts for a large contribution of the σ 0 field, which in turn, carries the information of the scalar densities of eight baryons. The situation is very different as the one in nuclear matter, where the effective mass only carries the σ 0 field coming from the nucleonic scalar density. This means that whenever the 8 lightest baryons are included, the negative contribution in eq. (84) can make the nucleon mass reach zero very rapidly if the effective mass is too low. Other examples of how to fit these couplings based on phenomenological potentials can be seen in [55,56]. The second possibility to choose the meson-hyperon couplings is based on the relations established among them by different group symmetries, the most common being SU(3) [51,57] and SU(6) [58].
In the present work we have used the following sets of couplings, for which U Λ = −28 MeV, U Σ = +30 MeV and U Ξ = −18 MeV: In Fig. 10, the particle fractions obtained with IUFSU are displayed for the two cases shown in Fig.9 right. Notice that when the hyperons are included, these particle fractions depend on the meson-hyperon couplings discussed above. A different choice for these couplings would generate different particle fractions for the same nuclear parametrisation. One can see that the constituents of the neutron stars change with the increase of the density, making their core richer in terms of particles than the region near the crust. From these plots, the conditions of charge neutrality and chemical equilibrium become clear.
A. The Tolman-Oppenheimer-Volkoff equations
As it was just seen, essential nuclear physics ingredients for astrophysical calculations are appropriate equa-tions of state (EOS). After the EOSs are chosen, they enter as input to the Tolman-Oppenheimer-Volkoff equations (TOV) [27], which in turn, give as output some macroscopic stellar properties: radii, masses and central energy densities. Static properties, as the moment of inertia and rotation rate can be obtained as well. The EOSs are also necessary in calculations involving the dynamical evolution of supernova, protoneutron star evolution and cooling, conditions for nucleosynthesis and stellar chemical composition and transport properties, for instance.
The TOV equations were obtained by Tolman and independently by Oppenheimer and Volkoff [27], as already mentioned, and they read: where M is the gravitational mass, M Baryonic the baryonic mass, m n is the nucleon mass and r is the radial coordinate and also the circumferential radius. Be aware that M Baryonic refers to the baryonic mass of the star and it is not the same as the M B , the individual baryonic masses used to compute the EOS.
The first differential equation is also shown in such a way that the corrections obtained from special and general relativity are clearly separated.
The EOSs shown on the r.h.s. of Fig. 9 are then used as input to the above TOV equations and the corresponding mass-radius diagram is shown in Fig. 11. Each curve represents a family of stars, being the maximum point of the curves related to the maximum stellar mass of the family. By comparing the curves shown in Fig. 9 and Fig. 11, one can clearly see that the harder EOS yields higher maximum mass. Hence, the inclusion of hyperons makes the EOS softer, as expected, but results in lower maximum masses. As there is no reason to believe that the hyperons are not present, this connection of softer EOS with lower neutron star mass gave rise to what is known as the hyperon puzzle. I will go back to this debate in the next section.
I would like to call the attention of the reader for the values of the symmetry energy slope (L 0 ), which has been extensively discussed in the last years. Although its true value is still a matter of debate, most studies indicate that it has non-negligible implications on the neutron star macroscopic properties [37,[59][60][61][62][63][64][65]. The slope can be controlled by the inclusion of the ω − ρ interaction, as can be seen in Table II. In general, the larger the value of the interaction, the lower the values of the symmetry energy and its slope [59]. As a general trend, it is also true that the lower the value of the slope, the lower the radius of the canonical star, the one with 1.4 M . In Table II the values of the maximum stellar masses obtained without the inclusion of hyperons and the radii of the canonical stars are displayed. Notice, however, that the value of the radius of the canonical stars depends on the EOS of the crust. To obtain the values shown in Table II, I used the BPS EOS [66] for the outer crust and interpolated the inner crust. As far as the maximum mass is concerned, the crust barely affects it, since the involved densities are too low. I will discuss this subject further when discussing the pasta phase in Section IV C. Another interesting correlation noticed in [67] is that the onset of the charged (neutral) hyperons takes place at lower (larger) densities for smaller values of the slope.
B. Structure of neutron stars and observational constraints
Although the internal constitution of a neutron star cannot be directly tested, it is reasonably well understood. A famous picture of the NS internal structure was drawn by Dany Page and can be seen in [68]. Close to the surface of the star, there is an outer and an inner crust and towards the center, an outer and an inner core are believed to exist. The solid crust is expected to be formed by nonuniform neutron rich matter in β-equilibrium. This inhomogeneous phase is known as pasta phase and calculations predict that it exists at densities lower than 0.1 fm −3 , where nuclei can coexist with a gas of electrons and neutrons which have dripped out. The center of the star is composed of hadronic matter and the true constituents are still a matter of debate, as one can conclude from the results presented in the last section. The fact that the core should contain hyperons is widely accepted, although this possibility excludes many EOS that become too soft to explain the existing massive stars, namely, MSP J0740+6620, whose mass range lies at 2.07 ± 0.08 M [8,69], PSR J0348+0432 with mass of 2.01 ± 0.04 M [70] and PSR J1614-2230, which is also a massive neutron star [71]. Until around 2005, these massive NS had not been detected and practically all EOS could satisfy a maximum 1.4 M star.
Since the appearance of hyperons is energetically favorable, different possibilities were considered in the literature so that the EOS would be stiffer, as the tuning of the unknown meson-hyperon coupling constants, for an example. Another mechanism that increases the maximum mass of neutron stars with hyperons in their core is the inclusion of an additional vector meson that mediates the hyperon-hyperon interaction [51,57]. In Fig. 12, massradius curves are shown for different hyperon-meson coupling constants of the GM1 parametrisation [51]. One can see that all choices produce results with high maximum masses, satisfying the new massive star constraints. I refer the reader to [51] and references therein for explanations on the introduction of the strange meson channel on the Lagrangian density and the corresponding strange meson-hyperon couplings.
As already mentioned in Sec. II B, the observation of the binary neutron star system GW170817 [32] by the LIGO-Virgo scientific collaboration and also in the Xray, ultraviolet, optical, infrared, and radio bands gave rise to the new era of multi-messenger astronomy [33]. The detection of the corresponding gravitational wave helped the establishment of additional constraints to the physics of neutron stars. This subject is better discussed next, but at this point, I would like to mention that a series of papers based on the these constraints imposed restricted values for the neutron star radius [72][73][74][75][76], not always compatible among themselves.
The dimensionless tidal deformability, also called tidal polarisability and its associated Love number are related to the induced deformation that a neutron star undergoes by the influence of the tidal field of its neutron star companion in the binary system. The idea is analogous to the tidal response of our seas on Earth as a result of the Moon gravitational field. The theory of Love numbers emerges naturally from the theory of tidal deformation and the first model was proposed in 1909 by Augustus Love [77] based on Newtonian theory. The relativistic theory of tidal effects was deduced in 2009 [78,79] and since then the computing of Love numbers of neutron stars has become a field of intense investigation.
As different neutron star EOS and related composition have different responses to the tidal field, the tidal polarisability can be used to discriminate between different equations of state. A complete overview on the theory of Love numbers in both Newtonian and General Relativity theories can be found in [80]. Here, I show next only the main equations for the understanding of the constraints on NS.
The second order Love number k 2 is given by where C = M/R is the star compactness, M and R are the total mass and total circumferential radius of the star respectively and y R = y(r = R), which is obtained from r dy dr + y 2 + yF (r) + r 2 Q(r) = 0.
Here the coefficients are given by and where E = 1 − 2m/r, E and P are the energy density and pressure profiles inside the star. Notice that Eq. (94) has to be solved coupled to the TOV equations. Finally, one can obtain the dimensionless tidal deformability Λ, which is connected to the compactness parameter C through In Fig. 13, the second order Love number as a function of the compactness is shown for the 3 equations of state discussed in Section III B, as well as the corresponding tidal deformabilities (Λ 1 , Λ 2 ) for the binary system (M 1 , M 2 ), with M 1 > M 2 . The plots are calculated from the equation for the chirp mass and the diagonal dotted line corresponds to the case M 1 = M 2 . The lower and upper dashed lines correspond to LIGO/Virgo collaboration 50% and 90% confidence limits respectively, which are obtained from the GW170817 event. The EOS used to obtain these curves do not include hyperons to avoid the uncertainties related to the meson-hyperon couplings. It is important to mention the matching procedure used to compute the Love number and the tidal polarisabilities. The outer crust is a BPS EOS, the inner crust is a polytropic function which interpolates between the outer crust and the core. A detailed explanation is given in [81], sections 2.2 and 2.3. More advanced crustal EOSs are available [82,83] and I discuss the sensitivity of some results on the crust model later on. One can see from these figures that the Love numbers are very different for the three models and so are the tidal polarisabilities, the NL3 and NL3ωρ not being able to reproduce the GW170817 data satisfactorily. Actually, this behaviour of the NL3 and NL3ωρ had already been observed in [84], but one should notice that in [84], the confidence lines were taken from a preliminary version of the LIGO/Virgo data [32] while in the present paper, they are taken from [85], where the consideration of massive stars was neglected. Another important constraint concerns the radii of the canonical stars, the ones with M = 1.4 M . According to the LIGO/Virgo collaboration, the tidal polarisability of canonical stars should lie in the range 70 ≤ Λ 1.4 ≤ 580 [85], a restriction that imposed a constraint to the radii of the corresponding stars, which should lie in the range 10.5 km ≤ R 1.4M ≤ 13.4 km. This constraint, which does not take into account a maximum stellar mass of 1.97 M , only excludes the NL3 parameter set from the ones we are analysing (see Table II), exactly the one that was shown not to describe nuclear bulk properties well enough. But...the history has become more complicated: a recently published paper concludes that the canonical neutron star radius cannot be larger than 11.9 km [76]. If such small radius is confirmed, it could imply in a revision of the EOSs or of the gravity theory itself, as done in [86]. Notice, however, that this small radius is in line with older works that predicted that the maximum mass of a canonical star should be 13.6 km [72] and [73], whose authors claimed that any NS, independently of its mass, should bear a radius smaller than 13 km. Moreover, the new information sent by NICER [8] supports the evidence that the detected massive PSR J0740+6620 has a radius of the order of 12.35±0.75 km and that a star with a mass compatible with a canonical star, J0030+0451, has a radius of the order of 12.45 ± 0.65 km [87], or 12.71 +1.14 1.19 km [74] or even 13.02 +1.24
1.06
km [75], depending on the analysis performed. These recent detections point to the fact that the radii of canonical and massive stars are of the same order and this feature is not easily reproduced by most EOSs. On the other hand, one of the analysis of the results from the PREX experiment implies that 13.25 km ≤ R 1.4M ≤ 14.26 km corresponding to a tidal polarisability in the range 642 ≤ Λ 1.4M ≤ 955 [41], also much higher than the above mentioned value obtained from GW170817 data. Notice, however, that the recent PREX results seem to contradict previous understandings on the softness of the symmetry energy [88]. Hence, the sizes of these objects are still a source of debate. One of the conclusions in [41] is that a precise knowledge of the crust of these compact objects may help to minimize the systematic uncertainties of these results.
A detailed analysis of the relativistic mean field models shown to be consistent with all nuclear bulk properties in [36] according to the masses and radii they yield when applied to describe NS, can be found in [89]. Thirty four models were analysed and only twelve were shown to describe massive stars with maximum masses in the range 1.93 ≤ M/M ≤ 2.05 without the inclusion of hyperons. In another paper [90], the very same models were confronted with the constraints imposed by the LIGO/Virgo collaboration. In this case, 24 models were shown to satisfy them. However, only 5 models could, at the same time describe massive stars and constraints from GW170817. These studies did not use EOSs with hyperons, what poses an extra degree of complication due to the uncertainties on the meson-hyperon coupling constants. Looking at the three sets used in the present work, one can clearly see the difficulty. The two models that can describe massive stars are outside the range of validity of the GW170817 tidal deformabilities. On the other hand, IUFSU gives a mass a bit lower than desired, a deficiency that can be correct with some tuning.
Another aspect that deserves to be mentioned refers to the inclusion of ∆ baryons in the EOS. If they are considered as a possible constituent of neutron stars, at least with the parametrisations studied (GM1 and GM1ωρ), no "∆ puzzle" is observed [91].
C. The importance of the inner and outer crusts
When examining neutron star merger, the coalescence time is determined by the tidal polarisability, which as already explained, is a direct response of the tidal field of the companion that induces a mass quadrupole. This scenario suggests that the neutron star crust should play a role in this picture. If one looks at the famous figure drawn by Dany Page [68], one can see that the crust is divided in two pieces, the outer and the inner crust, the latter being the motive for the present section. It may include a pasta phase, the result of a frustrated system in which there is a unique competition between the Coulomb and the nuclear interactions, possible at very low baryonic densities. In the simplest interpretation of the geometries present in the pasta phase, they are known as droplets (3D), rods (2D) and slabs (1D) and their counterparts (bubbles, tubes and slabs) are also possible. Much more sophisticated geometries such as waffles, parking garages and triple periodic minimal surface have been proposed [92][93][94], but I next describe only the more traditional picture.
The pasta phase is the dominant matter configuration if its free energy (binding energy at T = 0) is lower than its corresponding homogeneous phase. Depending on the model, the used parametrisation and the temperature [95], typical pasta densities lie between 0.01 and 0.1 fm −3 . Different approaches are used to compute the pasta phase structures: the coexisting phases (CP) method, the Thomas-Fermi approximation, numerical simulations, etc. For detailed calculations, one can look at [95], [96], [97], for instance. In what follows, I only show the main equations used to build the pasta phase with the CP method.
According to the Gibbs conditions, both pasta phases have the same pressure and chemical potentials for proton and neutron and, at a fixed temperature, the following equations must be solved simultaneously: where I (II) represents the high (low) density region, ρ p is the global proton density, ρ e is the electron density taken as constant in both phases and f is the volume fraction of the phase I, that reads The total hadronic matter energy reads: where E I and E II are the energy densities of phases I and II respectively and E e is the energy density of the electrons, included to account for charge neutrality. The total energy can be obtained by adding the surface and Coulomb terms to the matter energy in Eq. (104), Minimizing E surf + E Coul with respect to the size of the droplet/bubble, cylinder/tube or slabs, we obtain [96] with α = f for droplets, rods and slabs, and α = 1 − f for tubes and bubbles. The quantity Φ is given by where, σ surf is the surface tension, which measures the energy per area necessary to create a planar interface between the two regions. The surface tension is a crucial quantity in the pasta calculation and it is normally parametrised with the help of more sophisticated formalisms. Another important aspect is that the pasta phase is only present at the low-density regions of the neutron stars, and in this region, muons are not present, although they are present in the EOS that describes the homogeneous region. In Fig. 14, I plot the binding energy of the homogeneous matter (dashed line) as compared with the pasta phase binding energy (solid line with different colours representing the different structures). One can see that the pasta phase binding energy is lower up to a certain FIG. 14. npe matter binding energy obtained with the CP method and NL3 parametrisation [44]. Figure taken from [95]. density, when the homogeneous phase becomes the preferential state. In Fig. 15, I show various phase diagrams obtained with the CP and TF methods for fixed proton fractions at different temperatures. As the temperature increases, the pasta phase shrinks. Here I have mentioned the TM1 parameter set [49], not used before in the present work, but also quite common in the literature. The purpose is only to show that different approximations and different parametrisations result in different internal structures with different transition densities from one phase to another.
But then, what is the influence of the pasta phase on the calculation of the tidal polarisability and if this structure is not well determined, how much its uncertainty contributes to the final calculations? This problem was tackled in [98] and, as the model used in that paper is quite different from the RMF models we use in the present work, we do not include any figures, but it is fair to say that the contribution is indeed minor. In [98], the BPS EOS was used for the outer crust. For the inner crust, two possibilities were considered: the existence of the pasta phase and a simple interpolation between the outer crust and the core. It was observed that, although the explicit inclusion of the pasta phase affected the Love number in a visible way, it almost did not change the tidal polarisabilities, a result that corroborated the findings in [99]. These results can be explained by the fact that, for a fixed compactness, even if the Love number is sensitive to the inner crust structure, the tidal polarisability scales with the fifth power of C and hence, the influence is small. And what about the outer crust? Indeed, in this case, the tidal effects should be even more sensitive. In what follows, I test how much the use of a modern EOS for the outer crust, which we call reliable [83] change the results as compared with the BPS generally used and mentioned below. A modified version of the IUFSU model known as FSUGZ03 [50] was used to plot Figure 16 and we trust the qualitative results would be the same for any other parametrisation. In this Figure, the outer crust is linked directly to the core EOS, as seen on the top. Log-scale is used because the differences cannot be seen in linear scale. Then, the different prescriptions are used to compute the tidal polarisabilities shown on the bottom. Once again, one can see that the influence is very small.
Although we have seen that neither the outer nor the inner crust alter significantly the tidal polarisabilities, they do have an impact, which was quantified in [100,101]. The authors of these works concluded that the impact of the crust EOS is not larger that 2%, but the matching procedure (crust-core) can account for a 5% difference on the determination of the low mass NS radii and up to 8% on the tidal deformability. In another recent work [102], the inner crust was parametrised in terms of a polytropic-like EOS and the sound velocity and canonical star radii were computed. EOS for the inner crust with different sound velocities produced radii with up to 8% difference when the same EOS was used for the core. Albeit the fact that present results show that the inclusion of the pasta phase is not essential when the above discussed macroscopic properties of NSs are computed, it may indeed be important for the thermal [103] , magnetic evolution [104,105] and neutrino diffusion of NS [106,107], processes that take place at different epochs. Hence, being able to handle properly the pasta phase structure is still a matter or concern. The first issue worth discussing is the possible existence of baryons that are more massive than nucleons and carry strangeness in the pasta phase. In [108], it was verified that the Λ hyperons, can indeed be present, although in small amounts, as seen in Fig.17, where the Λ fraction is shown as a function of temperature in phases I (clusters) and II (gas). For the parametrisation used, NL3ωρ, the pasta phase disappears at T=14.45 MeV, the Λs being present for electron fractions ranging from 0.1 to 0.5 and in quantities larger than 10 −11 for T > 7 MeV. The Ξ − can also be found, but in much smaller amounts, of the order of 10 −12 .
The second important point refers to the fact that the CP method just presented and also another commonly used method, the Thomas-Fermi approximation [109], can only provide one specific geometry for each density, temperature and proton (or electron) fraction but it is well known that this picture is very naive. In fact, different geometries can coexist at thermal equilibrium [97,110,111]. The problem with these more sophisticated approaches is that the computational cost is tremendous, making them inadequate to be joined to other expensive computational methods that may be necessary to calculate neutrino opacities and transport properties, for instance. In a recent paper, a prescription with a very low computational cost was presented [112]. In that paper, fluctuations are taken into account in a reasonably simple way by the introduction of a rearrangement term in the free energy density of the cluster. A simple result can be seen in Figure 18, where one can see that different geometries can coexist at a certain temperature for a fixed density. If different proton fractions are considered, the dominat geometry changes as in the CP or TF method, but the other geometries can still be present. The complete formalism has been revised and extended to asymmetric matter and can be found in [113].
V. HYBRID STARS
So far, I have discussed the possibility that hadronic matter exists in the core of a neutron star and that nuclear physics underlies the models that describe it. The idea of a hybrid star, containing a hadronic outer core that has a different composition than the inner core, which could be composed of deconfined quarks was first proposed by Ivanenko and Kurdgelaidze [114] in the late 60's. In their papers, they have even foreseen that a transition to a superconducting phase would be possible. This idea has gained credibility lately. A modelindependent analysis based on the sound velocity in hadronic and quark matter points to the fact that the existence of quark cores inside massive stars should be considered the standard pattern [115]. In this case, one would be dealing with what is known as hybrid star and, from the theoretical point of view, its description requires a sophisticated recipe: a reliable model for the outer hadronic core and another model for the inner quark core. The ideal picture would be a chiral model that could describe both matters as density increases, but those models are still rarely used [116][117][118][119]. Generally, what we find in the literature are Walecka-type models as the ones presented in Section III B or density-dependent models, whose density dependence is introduced on the mesonbaryon couplings as in [120,121] for the hadronic matter and the MIT bag model [122] or the Nambu-Jona-Lasinio (NJL) model [123] for the quark matter. While the MIT bag model is very simplistic, the NJL model is more robust and accounts for the expected chiral symmetry but cannot satisfy the condition of absolutely stable strange matter that will be discussed next. The MIT bag model EOS is simply the EOS calculated for a free Fermi gas in Section II A, where the masses are the ones of the u, d, s quarks, generally taken as m u = m d = 5 MeV and m s varying from around 80 to 150 MeV and the inclusion of a bag constant B of arbitrary value, which is responsible for confining the quarks inside a certain surface. B enters with a negative sign in the pressure equation and consequently a positive one in the energy density equation. The NJL EOS is more complicated and besides accounting for chiral symmetry breaking/restoration, also depends on a cut-off parameter. The derivation of the EOS can be obtained in the original papers [123], in an excellent review article [124] or in one of the papers I have co-authored as [125], for instance and I will refrain from copying the equations here. Contrary to the MIT bag model, the NJL model does not offer the possibility of free parameters. All of them are adjusted to fit the pion mass, its decay constant, the kaon mass and the quark condensates in the vacuum. There are different sets of parameters for describing the SU(2) (only considers u and d quarks) and the SU(3) versions of the model.
When building the EOS to describe hybrid stars, two constructions are comonly made: one with a mixed phase (MP) and another without it, where the hadron and quark phases are in direct contact. In the first case, neutron and electron chemical potentials are continuous throughout the stellar matter, based on the standard thermodynamical rules for phase coexistence known as Gibbs conditions. In the second case, the electron chemical potential suffers a discontinuity and only the neutron chemical potential is continuous. This condition is known as Maxwell construction. The differences between stellar structures obtained with both constructions were discussed in many papers [126][127][128] and I just reproduce the main ideas next.
In the mixed phase, constituted of hadrons and quarks, charge neutrality is not imposed locally but only globally, meaning that quark and hadron phases are not neutral separately. Instead, the system rearranges itself so that where ρ iP c is the charge density of the phase i = H, Q, χ is the volume fraction occupied by the quark phase, and ρ l c is the electric charge density of leptons. The Gibbs conditions for phase coexistence impose that [48]: and The Maxwell construction is much simpler than the case above and it is only necessary to find the transition point where µ HP n = µ QP n and P HP = P QP , and then construct the EoS.
In Fig.19, different EOS are built with both constructions and the respective mass radius curves are also shown. In all cases, the hadronic matter was described with either GM1 [47] or GM3 parametrisations [48] and the quark phase with the two most common parametrisations for the NJL model (HK [129] and RKH [130]). On the top one can see that under the Maxwell construction, the EOS presents a step at fixed pressure and under the Gibbs construction, the EOS is continuous. It is then easy to see that for both constructions, the mass radius curves are indeed very similar and yield almost indistinguishable results for gravitational masses and radii. In these cases, the differences of the hadronic EOSs dominate over the differences of quark EOSs. Hence, the maximum mass is mostly determined by the hadronic part. It is also important to stress that the quark core is not always present in the star even if it the quark matter EOS is included in the EOS. This fact is noticed when one compares the density where the onset of quarks takes place with the star central density. If the star central density is lower than the quark onset, no quark core exists.
A more recent analysis of the dependence of the macroscopic properties of hybrid stars on meson-hyperon coupling constants and on the vector channel added to the NJL model can be seen in [58].
In 2019, the LIGO/Virgo collaboration detected yet another gravitational wave, the GW190814 [131], resulting from the merger of a 23 M black hole and another object with 2.59 +0.08 −0.09 M , which falls in the mass-gap category, i.e., too light to be a black hole and too massive to be a NS. In [117], a chirally invariant model was used to describe hybrid stars with a variety of different vector interactions and this compact object could be explained as a massive rapidly rotating NS. A comprehensive discussion on ultra-heavy NS (masses larger than 2.5 M ) and the possibility that they are hybrid objects can be found in [132].
If the reader is interested in understanding the effects of different quark cores that also include trapped neutrinos at fixed entropies, reference [52] can be consulted.
VI. QUARK STARS
All experiments that can be realised in laboratories show that hadrons are the ground state of the strong interaction. Around 50 years ago, Itoh [133] and Bodmer [134], in separate studies, proposed that under specific circumstances, as the ones existing in the core of neutron stars, strange quark matter (SQM) may be the real ground state. This hypothesis, later on also investigated by Witten, became known as the Bodmer-Witten conjecture and it is theoretically tested with the search of a stability window, defined for different models in such a way that a two-flavour quark matter (2QM) must be unstable (i.e., its energy per baryon has to be larger than 930 MeV, which is the iron binding energy) and SQM (three-flavour quark matter) must be stable, i.e., its energy per baryon must be lower than 930 MeV [134,135]. As shown in the previous section, although the Nambu-Jona-Lasinio (NJL) model [123] can be used to describe the core of a hybrid star [119,125], it cannot be used in the description of absolutely stable SQM as shown in [136][137][138][139]. The most common model, the MIT bag model [122] satisfies the Bodmer-Witten conjecture, but cannot explain massive stars J0348+0432 [70], J1614-2230 [71] and J0740+6620) [8,69], as can be seen in Fig.20, from where one can observe that the maximum attained mass is 1.94 M obtained for a non-massive strange quark.
Hence, we next mention another quark matter model that satisfies de Bodmer-Witten conjecture at the same time that can describe massive stars and canonical stars with small radii, the density dependent quark mass (DDQM) proposed in [141,142] and investigated in [143]. In the DDQM model, the quark masses depend on two arbitrary parameters and are given by where the index I stands for the medium corrections and the baryonic density is written in terms of the quark densities as and ν i is the Fermi momentum of quark i, which reads: and µ * i is the i quark effective chemical potential. The energy density and pressure are respectively given by and where Ω 0 stands for the thermodynamical potential of a free system with particle masses m i and effective chemical potentials µ * i [141]: with g i being the degeneracy factor 6 (3 (color) x 2 (spin)) and the relation between the chemical potentials and their effective counterparts is simply On the left of Fig. 21 the stability window is plotted for a fixed value of C, so that it displays a shape that can be compared with Fig. 20. For other values of the constants, more stability windows are shown in [143]. On the right of Fig. 21, different mass-radius curves are shown and one can see that very massive stars can indeed be obtained. At this point, it is worth mentioning that quark stars are believed to be bare (no crust is supported) and for this reason, the shape of the curves shown in Fig.21 is so different from the ones obtained for hadronic stars and shown in Figs. 11, Fig.12 and for hybrid stars, as seen in Fig.19. There is still another very promising model, an extension of the MIT bag model based on the ideas of the QHD model. In this extended version, the Lagrangian density accounts for the free Fermi gas part plus a vector interaction and a self-interaction mesonic field and reads [144]: where the quark interaction is mediated by the vector channel V µ representing the ω meson, in the same way as in QHD models [34]. The relative quark-vector field interaction is fixed by symmetry group and results in with adequate redefinitions given by and b 4 taken as a free parameter. Using a mean field approximation and solving the Euler-Lagrange equations of motion, the following eigenvalues for the quarks and V 0 field can be obtained: With this new approach, when the self-interaction vector channel is turned off, the stability window increases and a 2.41M quark star that satisfies all astrophysical constraints is obtained. The self-interaction vector channel does not change the stability window, but allow even more flexibility in the calculation of the tidal polarisability and the canonical star radius due to the inclusion of the free parameter b 4 . In this case, a 2.65 M quark star corresponding to a 12.13 km canonical star radius and a tidal polarisability within the expected observed range is obtained along many other results which satisfy many of the presently known astrophysical constraints. Some of the results are displayed in figure 22. After all the discussion on the radii of NS constrained with the help of gravitational wave observation and neutron skin thickness experimental results presented in section IV B and on the uncertainty of these values, I just would like to add one comment: contrary to what is obtained for a family of hadronic stars (maximum mass stars are generally associated with a smaller radii than their canonical star counterparts), a family of quark stars may produce canonical stars with radii that can be approximately the same as the maximum mass star radii, depending on the model used [144] and this feature could accomodate the recent NICER detections for J0030+0451 and J0740+6620 .
This modified MIT bag model has also been used to investigate the finite temperature systems and to obtain the QCD phase diagram in [5] with the help of a temperature dependent bag B(T ), as discussed in the Introduction of the paper. Some of the possible phase diagrams are shown in figure 23.
I have outlined the main aspects concerning the internal structure of quark stars, but the discussion about their bare surface [145,146] is not completely settled [147] and important problems as its high plasma frequency and neutrinosphere are out of the scope of the present work but should not be disregarded. I could not end this review without mentioning magnetars [148,149], a special class of neutron stars with surface magnetic fields three orders of magnitude (reaching up to 10 15 G at the surface) stronger than the ones present in standard neutron stars (10 12 G at the surface). Most of the known magnetars detected so far are isolated objects, i.e., they are not part of a binary system and manifest themselves as either transient X-ray sources, known as soft-γ repeaters or persistent anomalous x-ray pulsars. They are also promising candidates for the recent discovery of fast radio bursts [150]. So far, only about 30 of them have been clearly identified [151] but more information is expected from NICER [7] and ATHENA [152], launching foreseen to take place in 2030. So far, NICER has already pointed to the fact that the beams of radiation emitted by rapidly rotating pulsars may not be as simple as often supposed: the detection of two hot spots in the same hemisphere suggests a magnetic field configuration more complex than perfectly symmetric dipoles [8].
From the theoretical point of view, there is no reason to believe that the structure of the magnetars differs from the ones I have mentioned in this article. Thus, they can also be described as hadronic objets [80,[153][154][155][156][157], as quark stars [137,[156][157][158][159] or as hybrid stars [154,156]. At this point, it is fair to claim that the best approach to calculate macroscopic properties of magnetars is the use of the LORENE code [160], which takes into account Einstein-Maxwell equations and equilibrium solutions self consistently with a density dependent magnetic field. LORENE avoids discussions on anisotropic effects and violation of Maxwell equations as pointed out in [159], for instance. However, at least two important points involving matter subject to strong magnetic fields can be dealt with even without the LORENE code. The first one is the crust core transition density discussed in [155] and [161]. Although the magnetic fields at the surface of magnetars are not stronger than 10 15 G, if the crust is as large as expected (about 10% of the size of the star), at the transition region the magnetic field can reach 10 17 G. The transition density can then be estimated by computing the spinodal sections, both dynamically and thermodynamically. The point where the EOS crosses the spinodal defines the transition density [162]. An interesting aspect is that the spinodals of magnetised matter are no longer smooth curves. Due to the filling of the Landau levels, more than one crossing point is possible [155,161], what introduces an extra uncertainty on the calculation. The second aspect refers to possible oscillations in magnetars caused by the violent dynamics of a merging binary system. One has to bear in mind that so far, all observed magnetars are isolated compact objects, but there is no reason to believe that binary systems do not exist. In this case, the perturbations on the metric can couple to the fluid through the field equations [163,164]. For a comprehensive discussion on the equations involved, please refer to [80]. The gravitational wave frequency of the fundamental mode is expected to be detected in a near future by detectors like the Einstein Telescope. In [80], the effect of strong magnetic fields on the fundamental mode was investigated. From the results presented in that paper, one can clearly see that magnetars bearing masses below 1.8 M present practically the same frequencies. Nevertheless, more massive stars present different frequencies depending on their constitution: nucleonic stars present frequencies lower than their hyperonic counterparts, a feature which may define the internal constitution of magnetars.
The DDQM described in Section VI was also investigated under the effects of strong magnetic fields and the main expressions can be found in [165]. This may be an interesting model for future calculations of the fundamental models.
VIII. FINAL REMARKS
From the existence of a massive ordinary star that lives due to nuclear fusion, to its explosive ending and its aftermath, I have tried to tell the neutron star history. All these stages can be explained thanks to nuclear physics and I have revisited the main aspects and models underlying each one.
I have also tried to emphasise that nuclear models are generally parameter dependent and a plethora of models have been proposed in the last decades, but it is unlikely that the very same models can be used to describe different aspects of nuclear matter and, at the same time, all macroscopic properties of neutron stars. I do not advocate that the models I have chosen to use are the best ones, but the main idea is to show that different models should be used at the discretion of the people who employ them. I have not used density dependent hadronic models as the ones proposed, for instance, in [121,166] to avoid extra theoretical complications, but they are indeed very good options, since they can describe well nuclear matter, finite nuclei and NS properties, as seen in [36,89,90].
As far as detections of gravitational waves are concerned, a window was opened in 2015 and many observations will certainly be disclosed even before I finish writing this paper. Besides the ones already mentioned, I would like to comment on the GW190425 [167], GW200105 and GW200115 [168]. The first one was used in conjunction with a chiral effective field theory to constrain the NS equation of state [169]. The authors obtained a radius equal to 11.75 +0.86 0.81 for a canonical star, also quite small as compared with the ones obtained from the PREX experiment. The other two probably refer to neutron star -black hole mergers, systems that have been conjectured for a long time and will probably contribute to the understanding of NS EOS.
Before concluding, I would like to mention that many aspects regarding either isolated NS or binary systems have not been tackled in this manuscript and, in my opinion, rotation is the most important one. A better understanding of these compact objects depends on many rich features, including thermal and magnetic evolution. Different observation manifestations as pulsars, accreting X-ray binaries, soft γ-repeaters and anomalous X-ray pulsars, also deserve an attentive investigation. Hence, this review is just one step towards the incredible exotic world of neutron stars.
As far as the QCD phase diagram is concerned, many aspects have been extensively studied and are well understood: matter at zero temperature, symmetric nuclear and pure neutron matter, low density matter, including clusterisation and the pasta phase, high density matter and matter in β-equilibrium. Nevertheless, an EOS that covers the complete QCD phase diagram parameter space in (T, µ B ) in a single model is not available yet. Some of the EOS can be found on the CompOSE (CompStar Online Supernovae Equations of State) website [170]. | 19,546.8 | 2021-06-16T00:00:00.000 | [
"Physics"
] |
Risk perceptions of individuals living in single-parent households during the COVID-19 crisis: examining the mediating and moderating role of income
The COVID-19 crisis had severe social and economic impact on the life of most citizens around the globe. Individuals living in single-parent households were particularly at risk, revealing detrimental labour market outcomes and assessments of future perspectives marked by worries. As it has not been investigated yet, in this paper we study, how their perception about the future and their outlook on how the pandemic will affect them is related to their objective economic resources. Against this background, we examine the subjective risk perception of worsening living standards of individuals living in single-parent households compared to other household types, their objective economic situation based on the logarithmised equivalised disposable household incomes and analyse the relationship between those indicators. Using the German SOEP, including the SOEP-CoV survey from 2020, our findings based on regression modelling reveal that individuals living in single-parent households have been worse off during the pandemic, facing high economic insecurity. Path and interaction models support our assumption that the association between those indicators may not be that straightforward, as there are underlying mechanisms–such as mediation and moderation–of income affecting its direction and strength. With respect to our central hypotheses, our empirical findings point toward (1) a mediation effect, by demonstrating that the subjective risk perception of single-parent households can be partly explained by economic conditions. (2) The moderating effect suggests that the concrete position at the income distribution of households matters as well. While at the lower end of the income distribution, single-parent households reveal particularly worse risk perceptions during the pandemic, at the high end of the income spectrum, risk perceptions are similar for all household types. Thus, individuals living in single-parent households do not perceive higher risks of worsening living standards due to their household situation per se, but rather because they are worse off in terms of their economic situation compared to individuals living in other household types.
Introduction
The COVID-19 pandemic had enormous social and economic impact on individuals around the globe.Millions of people were severely affected in terms of their health (Kontoangelos et al., 2020); massively restricted in their personal freedom (e.g., social distancing and lockdowns) (Tisdell, 2020); had to make major changes to their daily routines (Broersma, 2022;Li et al., 2022); remained numerous weeks in furlough with minimal income, or even gradually stumbled into unemployment after dramatically reduced working hours (Schulten and Müller, 2020).In Germany, despite multiple policy interventions intended to protect citizens from infection as well as from economic hardship, severe consequences on the life of most residents could not be forestalled.More than 13.5 million individuals' incomes fell below the poverty line in 2021, which presents an all-time high of poverty rates in Germany; unemployment rates also escalated severely (Schneider et al., 2022).Job and/or income loss during this crisis also bear the risk of increasing socioeconomic stress for individuals.This can lead to impaired risk perception and lower subjective well-being, even resulting in anxiety and/or depression (Fancourt et al., 2020;Ettman et al., 2021).As the pandemic prolonged, risk perceptions, states of mental health and well-being were likewise deteriorating (Entringer and Kröger, 2021;Hiekel and Kühn, 2022;Romero-Gonzalez et al., 2022).Thus, the severe consequences of the COVID-19 crisis can be displayed by a set of objective (e.g., income, unemployment rate, hours employed, number of days of sick leave) and subjective (e.g., risk perception, well-being, life satisfaction) indicators.
Looking at some of these indicators separately, prior research has shown that the recent crisis has hit certain social groups harder than others and that socioeconomic risks are not equally distributed among different household types in Germany (BMAS, 2017;Butterwege, 2021;Hipp and Bünning, 2021;Huebener et al., 2021;Kreyenfeld and Zinn, 2021;Kuhn et al., 2021;Blundell et al., 2022;Li et al., 2022).In a nutshell, those indicators emphasise that individuals living in singleparent households 1 were particularly affected during the pandemic (Dromey et al., 2020;Hertz et al., 2021).They faced the highest poverty rate of all household types (42 percent in 2022) and revealed worrying perceptions about their future (Schneider et al., 2022).This may not come as a surprise since they had to manage additional obstacles such as an increased burden of unpaid housework and home schooling overnight.Single parents were severely affected by the shift 1 We use the term single-parent households to refer to individuals living in households, declaring themselves as single parents.This includes single parents (mothers or fathers) who raise one or more children living in the same household, while not living in the same household with another adult (e.g., their partner, grandparents), or (currently) not having a partner.By using this definition, we do not differentiate between parents who were single when they had their child and those who got separated afterwards or were bereaved (Nieuwenhuis and Maldonado, 2019).By comparison, we refer to couple-parent households to reflect that both adults are living in the same household and are parenting one or more children also living in this household.Singles per definition do not have children and do not share their living environment with another adult.Finally, we define couples without children as two adults living in the same household.of all childcare responsibilities from formal institutions to private households, putting them under enormous stress and only raising more concerns about caregivers' mental health and wellbeing (Li et al., 2022).Where couple-parent households with children at least had greater flexibility arranging their additional tasks and time budgets for balancing work and family issues, single parents did not even have the comforting support of a partner.In fact, single parents had to shoulder it all on their own and were left alone to cope with the impossible in times of increasing uncertainty (O'Reilly, 2020;Carotta et al., 2022).
Interestingly enough, previous research on the situation of singleparent households during the COVID-19 crisis has not sufficiently investigated the relationship between those subjective and objective indicators so far.It is still not entirely clear how to explain their perception about the future and to what extent their outlook is related to their objective economic conditions.In order to close this research gap, we analyse their situation during the pandemic, by determining how new (and worsened) economic realities influence the subjective future risk perception of individuals living in single-parent households.
Against this background, we go beyond previous research as we do not only examine (i) the subjective indicator of individual risk perception of individuals living in single-parent households and (ii) their objective economic situation (based on the logarithm of their equivalised disposable household income), but (iii) also assess the relationship between those indicators.In applying this approach, we focus on the experiences of individuals living in single-parent households in Germany during the pandemic, while comparing them to individuals living in three other household types (singles without children, couple-parent households with children, couples without children).For our analyses, we use data from the German Socio-Economic Panel (GSOEP), including the specific SOEP-CoV survey from 2020, which observed the same individuals before and during the COVID-19 crisis.As our modelling strategy, we apply path and interaction models (Baron and Kenny, 1986;Aichholzer, 2017) in order to disentangle the seemingly obvious relation between household type and risk perception, whilst considering income as mediating and moderating variable.Here, path modelling is particularly suitable to test a mediation relation, because (1) it allows us to look at the relationship of two variables (in our case household type and risk perception) at the same time, next to (2) analysing the changing relation between them once we include another explanatory variable (income).In addition, an interaction model (between household type and income) allows us to test whether and to what extent the income level affects the risk perceptions of different household types.
Our findings reveal that individuals living in single-parent households have been worse off in the past decades and continue to be a special risk group, showing high economic insecurity during the pandemic.Although individuals in different household types seem to reveal unequal risk perceptions at first glance, these effects forfeit explanatory power once we include income into the model.In particular, we find that their economic situation mediates the effect of household types on risk perception during the COVID-19 crisis.Furthermore, the interaction model reveals that the level of income does moderate the risk perception of parents in contrast to non-parents, yet both partnered parents and single parents share similar negative risk perceptions when they earn a low income.Since single-parent households are likely to have a low income, this also largely explains the differences in risk perceptions between coupled parents and single parents.Thus, our findings demonstrate that the weak financial situation prevalent amongst individuals living in single parent households is inherent in their comparably more negative future risk perception.
2 Background information on individuals living in single-parent households in Germany Providing some institutional background information, Germany is categorised as a corporatist welfare state, coinciding with a (modernised) male-breadwinner model/female caregiver model, shaping the distribution of resources and opportunities contingent on employment or family position (Esping-Andersen, 1990;Lewis, 1992;Orloff, 1996;Lohmann and Zagel, 2016).For most of the time after the German reunification, family and social policies favoured traditional couple-parent households through the tax code, health insurance, child care, child benefits and other social security regulations, thus either perpetuating women's dependence on a male breadwinner or disadvantaging single-parent households (Trappe et al., 2015, p. 232).In the course of an expanding service sector, however, the female employment rate increased steadily (even if almost always in part-time work), in turn fostering women's labour market attachment, their educational attainment and progressive gender role norms (Brückner, 2004;Fritsch, 2014;Fritsch et al., 2022).In line with these changing contextual conditions and combined with an ongoing flexibilization of the labour market addressing an overall economic crisis in the 1990's (Verwiebe and Fritsch, 2011;Verwiebe et al., 2013Verwiebe et al., , 2014Verwiebe et al., , 2019;;Teitzer et al., 2014;Fritsch and Verwiebe, 2018;Riederer et al., 2019), Germany is slowly experiencing social policy changes (Streek, 2009;Hinrichs, 2010).This includes familialising policies such as the introduction of an earnings-related and genderneutral parental leave benefit for the duration of 12-14 months, alongside de-familialising policies such as the expansion of childcare provision for children between the ages of one to 3 years, and a legal claim for publicly provided or subsidised childcare for every child over the age of one since 2013 (Seeleib-Kaiser, 2016, p. 225).
During the pandemic crisis, existing social security programs were substantially expanded and provided generous subsidies for German citizens.Especially with the social insurance program Kurzarbeit (short-time), authorities devised a massive 700 billion Euros plan, in order to protect worker's income and prevent masslayoffs; here, the government pays employees at least 60% of their regular pay for the hours not worked; and even 67% for working parents (Bariola andCollins, 2021, p. 1682).On the downside, this program does not include temporary and marginal employment, such as "mini jobs" for example.And indeed, single parents are more often in Kurzarbeit (short-time) and thus have to face above average income loss during the pandemic (BMFSFJ, 2021b, p. 26).Although the German welfare state is intended to promote social protection for vulnerable groups, we observe significant imbalances in terms of guaranteeing achieved living standards-especially for individuals living in single-parent households.
In order to build a bridge between institutional arrangements and economic realities of individuals living in different household types in Germany, we present some descriptive trends of how their shares have developed over the last two decades and portray their economic situation (median incomes, poverty risks, and unemployment rates) in Table 1 (the percentages are displayed for individuals who live in different household types).In Germany around 7.6 percent of the individuals live in single-parent households, and one of five households with children are headed by single parents, which corresponds to 6% of all households.Dependent children are living in around 1.5 million single-parent households, numbers that have stayed constant since 2009, 88% of them headed by females (BMFSFJ, 2021a).
Furthermore, Table 1 reveals that economic risks are not equally distributed among individuals living in different household types and they are gaining relevance over the past decades in Germany (BMAS, 2017;Boehle, 2019;Schneider et al., 2022).This increase in inequality between different household types can be (at least partially) attributed to the massive labour market reforms (Hartz legislation).Overall, we observe that of all household types, single parents and their children are most often affected by socioeconomic risks, which only have become more pronounced in the past decades (Kraus, 2014).Within the last decades, we observe (1) a general tendency of decreasing unemployment rates, followed by ( 2) by increased poverty rates, which are (3) especially elevated for individuals living in singleparent households.With regard to the income development, it is apparent, that the median monthly household income of the total population has risen significantly in the last decade.However, this is not the case for individuals living in single-parents households.Rather, the income gap of individuals living in single-parent households has grown compared to the total German population. 2t is important to notice that in times of the COVID-19 pandemic, individuals living in single-parent households were worse off once again.Next to individuals living in households with three or more children (32 percent), single-parent households face the highest poverty rate of all household types in Germany in 2022 (42 percent) (Schneider et al., 2022).Federal intervention programs were likely to fizzle out due to high inflation rates and especially support households with proportionally higher incomes. 3Noticeable financial relief increases with the amount of income while the poorest again only received support insufficiently.Thus, the pandemic-followed by historically high levels of inflation-has widened the gap between poorer and richer households in Germany.
3 Individuals living in single-parent households, subjective risk perceptions, and income: prior and present research
Subjective indicator: individual risk perception of worsening living standards
As the subjective indicator we use the individual risk perception of worsening living standards.The concept of risk perception is complex and scholars from varying disciplines approach it differently, accounting for diverse ways in which people perceive and process risks they face in the social context of day-to-day life (Zinn, 2006;Soiné et al., 2021).One common denominator is the distinction between reality and possibility, where an undesirable state of reality 4 may occur as a result of human activities or natural events-such as the COVID-19 pandemic-and may (not) lead to consequences that affect aspects of what individuals value (Renn and Rohrmann, 2000, p. 13).Within this process, individuals receive signals (such as lockdowns and a threatening labour market crisis), as well as information about possible future outcomes (e.g., job and income loss) and then tend to form respective opinions and attitudes toward the impact.Thus, risk perception can be defined as individual's evaluation of possible outcomes they are or will be exposed to Taylor-Gooby and Zinn (2006) and Lidskog and Sundqvist (2013).
With respect to prior research focusing on the COVID-19 pandemic, it has been shown, that due to higher health risks, confinement-related adjustments in daily routines, a reduction of social contacts outside the household, additional screen-time and fewer opportunities for physical (outdoor) activities, risk perceptions are deteriorating (Prime et al., 2020;Möhring et al., 2021).Amongst other things, this applies to growing socioeconomic insecurities (e.g., because auf Kurzarbeit (short-term), layoffs or income loss) as well as, in turn, worsening individuals' personal assessment about their future living standards.All parents (Hipp and Bünning, 2021;Li et al., 2022), but in particular single parents were challenged, since they had to manage the double burden of paid employment and additional care work at the same time (Bariola and Collins, 2021).In line with this, Calvano et al. (2022) and Racine et al. (2021) suggest that managing child care obligations, employment assignments and complying with the confinement measures was one main contributing factor for the decline in parents' mental health.Against this background, we assume that single-parent households show worse subjective risk perceptions, compared to other household types during the COVID-19 pandemic, since they are disadvantaged by not having a partner to rely on emotionally or economically in times of crisis (Hypothesis 1).
Furthermore, prior research reveals that next to the household type other individual characteristics have an impact on risk assessments during the COVID-19 crisis, such as age, education, migration background, or employment status.For example, Wanberg et al. (2020) displays that highly educated individuals experience a greater increase in depressive symptoms and a greater decrease in life satisfaction from before to during COVID-19 in comparison to those with lower education.Kivi et al. (2021) reveal that although senior adults aged 65-71 perceived high societal risks related to the pandemic, the majority was neither particularly worried about their financial situation nor showed pronounced declines in their overall well-being.Finally, there is research emphasising the disproportionately harsh impact on unprivileged populations such as migrants.These populations are often more exposed to infections, but less protected, while at the same time being at higher risk of suffering from poor living and working conditions, and limited access to healthcare, all of which is challenging to their mental health (Garrido et al., 2023).Bearing those results in 4 Although referring to desirable risks which individuals aspire to reach, rather than relating to the danger of unwanted events, is per definition plausible as well (Machlis and Rosa, 1990), but not subject to this paper.The median income is based on the equivalised disposable household income for each individual living in the particular household type.Income is defined as the total income of a household, after tax and other deductions, that is available for spending or saving, divided by the number of household members converted into equalised adults; household members are equalised or made equivalent by weighting each according to their age, using the so-called modified OECD equivalence scale (Eurostat, 2021).Risk of poverty is defined as the share of individuals with an equivalised disposable income (after social transfer) below the at-risk-of-poverty threshold, which is set at 60% of the national median equivalised disposable income after social transfers.Unemployment rate is defined as the share of individuals aged 16-65 not employed during the reference week (Eurostat, 2021).
Objective indicator: equivalised income
As the objective indicator we consider equivalised earnings derived from the disposable household income of each individual's household.Here, single-parent households comprise a vulnerable group on the labour market, facing above average financial hardship (Gornick and Meyers, 2003;Wu and Eamon, 2011;Maldonado and Nieuwenhuis, 2015;Nieuwenhuis and Maldonado, 2018).Per definition, they not only lack a second parent but also a second (potential) earner in the household.Furthermore, their income is a reflection of disadvantaged labour market positions due to avoidance of jobs, which require long working hours or overtime hours, and instead choosing jobs which offer flexible working arrangements but come with lower earnings (Casey and Maldonado, 2012).Thus, single parents face a double burden as they are likely to have a deficit in both money and time, with less money to pay for professional childcare and fewer hours during the day to work and care for their children (Nieuwenhuis and Maldonado, 2018, p. 172).
The pandemic added additional fuel to this already tense situation (Cook and Grimshaw, 2021).First, single parents in Germany have an above-average employment rate within the service industry (e.g., gastronomy, trading sector), which usually offers flexible working arrangements necessary for balancing the work-family conflict.For example, in 2020, more than 17% of single-parent households were employed in the trading industry compared to 12% of couple-parent households and 9% of singles without children (GSOEP 2020/21; own calculation).However, during the pandemic large parts of the service sector were shut down for many months, either forcing employees to work in Kurzarbeit (short-time) and reduced wages or even facing layoffs.Second, the pandemic drastically changed daily working routines and the way in which work was done.Here, working (remotely) in paid employment (from home), combined with an additional burden of unpaid care work, was difficult or even impossible for single parents, hence lowering their labour productivity.In this light, we expect that single-parent households continue to be worse off and earn less than other household types during the pandemic (Hypotheses 2).
Toward an understanding of the link between household type, risk perception, and income
The central aim of this paper is to investigate the relationship between household type, risk perception, and income.However, there are two ways to look at this relationship, each embedded in another strand of existing research.On the one side, we find a growing body of research concentrating on single-parent households, especially on single mothers, and their well-being or life satisfaction (Branowska-Rataj et al., 2014;Ifcher and Zarghamee, 2014;Pollmann-Schult, 2018).This life satisfaction penalty for single mothers is commonly attributed to elevated emotional and financial stress, high levels of role overload, time pressure, and strain that accompany long-term single parenting (Nelson et al., 2013;Pollmann-Schult, 2018).These studies find that although single mothers are substantially less happy than individuals in other household types, their happiness increased in absolute and relative terms over the past few decades (Herbst, 2012); here Ifcher andZarghamee (2014, p. 1234) suggest some "possible explanations for the observed trends: changes to social welfare programs, increased labor force participation, compositional shifts in single motherhood, and reduced stigma." Within this strand of research, objective indicators (such as income for example) are either used as control variables, or to explain differences within the group of single-parent households.
On the other hand, there is research which has established a link between objective indicators (e.g., material goods and resources like income or wealth) and subjective indicators (e.g., risk perception, wellbeing, life satisfaction or happiness) (Cummins, 2000;Lever, 2004;Cho, 2018;Riederer et al., 2021;Fritsch et al., 2023).This line of research indicates that the material conditions of life are related to and constitute a reliable predictor of the individual assessments of one's life (Burchell, 2011;Clark et al., 2013;Van der Meer, 2014).However, findings on the concrete direction of this relationship remain controversial.In short, some scholars sustain a strong positive relationship, where rich people are happier with their lives, and this relationship is more pronounced, the richer the individuals are (Esterlin, 2001;Lever, 2004).Others question this relationship, affirming that a significant part of the variance of one's subjective assessment is not directly explained by economic variables, but rather by other psychological and physiological variablesthemselves contributing a significant influence (Fuentes and Rojas, 2001;Diener and Biswas-Diener, 2002).
In order to contribute to the current state of research, in the present paper, we argue that the relation between risk perception (subjective indicator), income (objective indicator) and household type, is anything but straightforward.We analyse this seemingly obvious relationship by dismantling the underlaying mechanisms step by step.From an analytical perspective two main mechanisms are plausible, which could influence the effect of household type on risk perception (see Figure 1).First, a mediating effect, where an independent variable influences a dependent variable through a thirdthe mediating-variable which is related to both the independent and the dependent variable (Baron and Kenny, 1986;Cho, 2018).With respect to our research focus, this would mean that the difference in risk perception of single-parent households can be explained through a third indicator, namely income.This mediating effect will reveal, in the path model, once we look at the relationship of household type, risk perception and income at the same time, that income (partially) accounts for the link between household type and risk perception.Considering that single-parent households are more likely to face financial hardship and are above average confronted with unstable labour market conditions during the COVID-19 pandemic, it seems plausible that the economic component contributes some explanatory power for the different level of risk perception of single-parent households in comparison to other household types (Hypothesis 3a).Second, we expect to observe a moderating effect, where the third variable alters the direction or strength of the relationship between an independent variable and a dependent variable (Baron and Kenny, 1986).In our case, this would mean that the level of income affects the relationship of household type and risk perception, revealing different levels of risk perception of household types across the earnings distribution.Considering the concrete position at the income level is particularly important since financial conditions may change massively during a crisis and have shown to be a substantial predictor for risk perceptions in times of uncertainty (Burns et al., 2012) 2011), the potential consequences of low(er) incomes may weigh somewhat stronger for this special risk group.Against this background, we assume comparatively less negative risk perceptions regardless of the household type at the upper end of the income distribution, while at the lower end we expect increased negative risk perceptions amongst parents and especially amongst single parents who lack the support of a partner (Hypothesis 3b).
4 Data, methods and variables
Data
As our empirical basis we use the harmonized data from the sub-survey of the German Socio-Economic Panel, the SOEP-CoV sample, 5 which contains details on specific household circumstances during the pandemic, including objective information on the household economic conditions, as well as subjective assessments of the current and future situation.The initial sample consist of 8,133 individuals; once we consider valid information on our main variables of interest (risk perception, household type, income, and controls) our final sample contains information on 6,065 respondents (2,502 men and 3,563 women). 6 5 The sub-survey was dedicated to monitor the pandemic situation.A total of 12,000 households were asked to participate in the SOEP-CoV study.The first wave of the survey started on April 1, 2020, and ended on June 28, 2020.Individuals from a total of 6,694 households were surveyed.
6 Compared to the original sample, we excluded about 761 respondents not living in one of the four household types we are interested in and 32 respondents because of their age.In addition, 1,275 respondents do have missing values for one of the other variables.
We restrict our sample to 2020 since important information on households-including household type and individual incomesare not available for 2021.
Analytical strategy and variables
For our analytical strategy, we use a three-step procedure.In the first two steps, we are interested in how individuals living in singleparent households assess their situation during the pandemic whilst examining the subjective indicator of risk perception and the objective indicator of income.Throughout our modelling strategy, we compare individuals living in single-parent households to individuals living in three other household types [(1) singles without children, (2) couple-parent households with children, and (3) couples without children]. 7For measuring the subjective indicator of individual risk perception respondents are asked (a) how likely they think that their living standards will diminish due to the pandemic (from 0 to 100%) or (b) if it already happened, enter 1, which we translate into 100% likelihood, since they already see the pandemic diminish their living standards. 8As the objective indicator, 7 We use single-parent households as reference category to compare this group with all other household types in the regression models.
8 For sensitivity analyses, we additionally calculated two models with (a) the metric variable assessing the likelihood that the living standard will diminish due to the pandemic excluding those with already lower living standard; and (b) the binary variable that the living standard already diminished or not.For (a) the results show the very same patterns of mediation and moderation like the models presented in section 5.For (b) the patterns of mediation are still the same, however, the interaction effect of household type and income is not significant for this model.Disentangling the link between household type, risk perception and income.Source: own illustration. 10.3389/fsoc.2023.1265302 Frontiers in Sociology 07 frontiersin.orgwe use the logarithmised equivalised household income 9 of individuals.In the first two steps we calculate linear regressions and present unstandardized coefficients for the subjective and objective indicator (Table 2). 10 In the next step we estimate a path model to uncover a possible mediation effect of income intervening in the risk perception of different household types (Aichholzer, 2017, p. 51;Baron and Kenny, 1986) and an interaction model between income level and 9 The equivalised disposable income is defined as the total disposable income of a household, divided by the equivalised number of household members; household members are weighting each according to their age, using the modified OECD equivalence scale (Eurostat, 2021).In order to reflect differences in a household's size and composition, the total disposable household income is divided by the number of 'equivalent' individuals (1.0 to the first adult; 0.5 to the second and each subsequent person aged 14 and over; 0.3 to each child aged under 14) (Eurostat, 2021).
10 In order to substantiate our findings, we calculated a number of sensitivity analyses.For the models in Table 2, we additionally calculated linear regressions without control variables, which are presented in the supplementary material Table A1.
household type to test whether there is a moderation effect of income, meaning that the effect of household type differs across different income levels (results are displayed in Figures 2, 3).According to our analytical strategy, we are interested in how earnings mediate and moderate the effect of household type on economic risk perception during the COVID-19 crisis.By using logarithmised incomes, we take into account that an increase in income has a stronger effect on risk perception in lower income groups than in higher income groups.As differences in the compositions of the household groups might be present in relation to other variables might affect risk perceptions directly, control variables we include are gender as a dummy variable (0 = men, 1 = women), age as a metric variable, migrant background as a dummy variable (0 = no migration background, 1 = direct or indirect migration background 11 ), level of education (low, mid:
Results
5.1 How do individuals living in single-parent households make it through the COVID-19 pandemic and how do they assess their future?
To evaluate how individuals living in different household types muddle through during the COVID-19 crisis, and whether singleparent households are particularly at risk concerning their future, we first contrast individuals' present economic situation as well as their subjective evaluation of their prospects.Table 2 displays differences in risk perceptions (subjective indicator) and income (objective indicator) between individuals living in different household types, based on linear regression modelling after controlling for sociodemographic variables.A set of relevant findings result from these models: With respect to the subjective indicator of risk 12 For sensitivity analyses, we additionally calculated our models with variables including the age of the children (e.g., "kids in school" and "kids in preschool") and employment security, accounting for (1) more time-consuming parenting work and possibly more worries about the development of their children during the pandemic and (2) occupation and type of contract.The results show that parents were in general more worried and being in a partnership does not seem to moderate this risk perception.With respect to the age of children, there does seem to be an independent effect of having young children (beneath age 6) on making the risk perception of individuals more negative.When we add these variables to the models, they explain part of the differences between household types.However, we did not include the age of the children into our final models since we already account for children in the definition of the household types.When we additionally consider employment security the main results remain stable (see Tables A3 and A4 in the Supplementary material).2, column 1), we observe that couples without kids and individuals living in couple-parent households assess their situation less negatively during the pandemic, compared to individuals living in single-parent households and single households (Hypothesis 1). 13Thus, our findings confirm prior research addressing household type as well as marital status as specifically important when it comes to detrimental consequences during the COVID-19 crisis (Reichelt et al., 2020;Bariola and Collins, 2021;Hiekel and Kühn, 2022).Moreover, looking at the control variables in Model 1, we can conclude that future prospects are rated less negatively with increasing age (Kivi et al., 2021) and worse if respondents have direct or indirect migration background (Garrido et al., 2023).Furthermore, the individual evaluation of future living standards is less negative the better educated and worse if individuals are currently unemployed or working part-time.
perceptions (Table
With respect to the objective indicator (Table 2, column 2) we notice that individuals living in single-parent households have to face a worse situation compared to individuals living in all other household types (Hypothesis 2).The unstandardized coefficients indicate that the gap in disposable household income is highest between individuals living in single-parent households and couples without children; but couples with children and singles are financially better off as well.Concerning our control variables Model 2 reveals patterns commonly known for Germany and other corporatist welfare states (Esping-Andersen, 1990;Orloff, 1996;Teitzer et al., 2014): We observe that the financial situation improves with increasing age, and for highly educated individuals.However, it deteriorates for individuals with migrant background, in flexible employment relationships-such as part-time jobs-, as well for unemployed or economically inactive individuals.
Against this background and in line with prior research (Hipp and Bünning, 2021;Li et al., 2022), our findings affirm our first and second assumption.We observe that individuals living in single-parent 13 Since the dependent variable measures worsening living standards, a negative sign translates into a less negative assessment.2, column 3) we account for the effect of income on subjective risk perception, which shows a negative association.This means, that individuals with a lower household income estimate a higher probability of a worsened living standards due to the COVID-19 crisis.Looking at the bigger picture, our findings suggest that having a low income, which might in turn be related to unstable and precarious labour market situations, is likely to reduce the probability of positive feelings, including exerting environmental control, and in projecting oneself into a brighter future (Cummins, 2000, p. 138).
5.2 How does income affect the subjective risk perception of individuals living in single-parent households during the pandemic?
In Figure 2 we present the results of our analyses displaying the link of household type, risk perception and income.Here, we are interested in whether or not individuals living in single-parent households evaluate their situation during the pandemic more negatively because they are single-parents, or rather because they are financially worse off.Here, we used path modelling in order to account for the relationship between the subjective indicator of risk perception, the objective indicator of earnings and household type.In the upper part of Figure 2, we again display that couples without kids and couple-parent households assess their situation during the pandemic less negatively compared to single-parent households and single households.In the next step, we include income into the relationship between household type and risk perception as a mediating variable; our findings reveal that the direct effect of household type on risk perception is in part not statistically significant anymore.Put differently, the direct effect of household type on subjective risk perception partly disappears (Table 3).
In more detail, path modelling reveals statistically significant differences in the average income of individuals living in different household types-singles, couples with children and especially coupleparent households earn more compared to single-parent households.Furthermore, the significant indirect effects of household type on risk perception via income indicate that income is an important mediator for a family's evaluation of their future standard of living (Hypothesis 3a).
With respect to our last hypothesis (3b), we also observe a statistically significant moderating effect of income on the risk perception by household type.Individuals living in childless households with lower income show significantly less negative risk perceptions than households with children (Figure 3 and Table A2 in Supplementary material).Moreover, while income does not play a role in the risk perception of single and couple households without children, for couple households with children as well as individuals living in single-parent households a lower income increases their risk perception.This increase in risk perception is significantly stronger for individuals living in single-parent households than for individuals living in households without children.We find no significant differences, however, between individuals living in single-parent households and individuals living in couple-parent households.The interaction of household type and income reveals that income moderates the perceptions of those who live with children, regardless of whether they have a partner or not.Thus, we can see that in fact households with children that do have a high income also do not suffer from a more negative risk perception in comparison to other household types (Hypothesis 3b, only partially confirmed -no differences between single and couples parents). 14 14 To substantiate our findings, we calculated a number of sensitivity analyses.
We included individual health status and source of information on COVID-19 (e.g., watching the news, reading newspapers) into the path model in order to evaluate, whether for example high stress levels of individuals living in singleparent households or a poor health status is influencing the relationship between household type, income and risk perception.These sensitivity analyses reveal that neither do individuals living in single-parent households show a worse health status, nor is the individual health status related to the risk perception.We further find that individuals living in single-parent households use different information sources than individuals living in other household types.However, even if we include the source of information into our model the initial relationship between household type, income and risk perception does not change.
Against this background, and in line with other research, our findings show that a significant part of the variance of one's subjective assessment can be directly explained by economic conditions (e.g., income or wealth) (Esterlin, 2001;Lever, 2004;Burchell, 2011;Clark et al., 2013;Van der Meer, 2014).Thus, we conclude that the effect of household type on risk perception is mediated and moderated via the household income.All in all, we conclude that single-parent households are not worse off per se during COVID-19 pandemic.However, based on the SOEP-CoV data for 2020 for Germany, our results reveal that the risk perception of individuals living in singleparent households is worse on average because of their financially vulnerability.
Discussion
The COVID-19 pandemic had severe consequences on the lives of millions of individuals around the globe.However, some have been hit harder than others: For sure, individuals living in single-parent households account for a vulnerable group, especially and heavily at risk of facing financial distress and emotional hardship.In the paper at hand, we put the situation of individuals living in single-parent households during the pandemic in Germany at centre-stage.By focusing on the relationship of subjective and objective measures of financial and emotional struggles we show how they are intertwined.We started by displaying a historical perspective on the economic situation of individuals living in single-parents households, whilst comparing them to individuals living in other household types.This descriptive time series highlights the consistently exposed position of individuals living in single-parent households over two decades and points toward a recent widening of pre-existing social trenches among different societal groups.After setting the scene for single-parent households' circumstances of life in the years before the crisis began, we applied a three-step analytical procedure in order to disentangle the relationship of household type, risk perception and incomes during the pandemic.
Based on the SOEP-CoV data for 2020 for Germany, our findings once again underline the strong financial vulnerability of individuals living in single-parent households during the COVID-19 pandemic and highlight that they are most vulnerable to worsen their perception of future living standards.Although this first set of findings might not come as a surprise, it is nonetheless a relevant finding for evidence-based social policy decisions-especially if we consider that around 6% of the households are single-parent households in Germany.Thus, our study is in line with other research, pointing toward the unequal effects of the pandemic, particularly affecting those who were already in precarious situations (Wachtler et al., 2020;Kuhn et al., 2021).We add to the body of literature addressing the pronounced increase in inequality and growing economic risks for individuals living in different household types (Huebener et al., 2021).In this respect, we specifically refer to Schäfermayer et al. (2022), who likewise showed that single parents worried more than couple-parents in partnerships largely due to their bleak socio-economic conditions.Furthermore, our second set of findings is new and contributes to the current literature by clarifying the entangled relationship between household type, objective indicators (such as material goods, income or wealth) and subjective indicators (such as risk perception, well-being or happiness) (Cummins, 2000;Lever, 2004;Clark et al., 2013;Cho, 2018).We used path and interaction models to show that earnings mediate and moderate the effect of household type on economic risk perception during the COVID-19 crisis.These findings indicate that individuals living in single-parent households do not perceive higher risks of worsening living standards due to their household situation per se, but rather because they are worse off in their economic situation compared to individuals living in other household types.In our view, this is a relevant finding.Although individuals living in single-parent households are in need of more support and are worse off during the pandemic in Germany, our results could nevertheless serve as tiny ray of hope.Governmental support programs are not set out to change the household structure.However, they are very well able and all the more urged to improve the currently poor income situation of this vulnerable group.
Finally, our study is limited since we were not able to consider the full spectrum of objective and subjective risks faced by individuals living in single-parent households in the past 2 years.Next to the individual assessment of worsening living standards in the present/ future and income, there are plenty of other indicators which can be used to describe the situation during the pandemic.Furthermore, we did not analyse the evaluation of present situation of single-parent households, but rather their prospects.Finally, longitudinal analysis is needed to further disentangle the layered relationship of subjective and objective indicators and to uncover the causal effect of the pandemic on the different household types.
2000-2020; own calculations for individuals aged 16-65, weighted results; percentages displayed for individuals living in different household types (single-parent households, singles without children, couple-parent households, couples without children).
11
Being born in another country than Germany indicates, by definition, a direct migration background, while respondents born in Germany may have either no or an indirect migration background.Respondents whose parents had no migration background were assigned the code "no migration background." households have detrimental future perspectives and are worse off with respect to their economic position compared to individuals living in other household types.Finally, in Model 3 (Table FIGURE 3Moderation effect of income (Hypotheses 3b) (incl.controls).Source: CoV-Sample 2020; own calculations.As subjective indicator we use individual risk perception of worsening living standards; as objective indicator we use logarithmised equivalised disposable household income (axis labels in Euros).
TABLE 1
Economic situation of individuals living in different household types in Germany (2010-2020).
. Since single-parent cannot balance financial hardship or income loss with the help of a second adult earner in the household (Eibach and Moch,
TABLE 2
Linear regression modelling (unstandardized coefficients, incl.controls).The dependent variable in Model 1 is the subjective indicator of individual risk perception of worsening living standards; the dependent variable in Model 2 is the objective indicator of logarithmised equivalised disposable household income; the dependent variable in Model 3 is individual risk perception of worsening living standards. | 9,933 | 2023-11-30T00:00:00.000 | [
"Economics",
"Sociology"
] |
Explanation of the X(4260) and X(4360) as Molecular States
We study the X(4260) and X(4360) solving Faddeev Equation under the Fixed Center Approximation. We find a state of I = 1 with mass around 4320 MeV and a width bout 25 MeV for the case of ρ meson scattering from X(3700)(DD̄) and 4256 MeV and a width about 30 MeV of D̄ scattering from D1(2420)(ρD). The results obtained in present work are in good agreement with experimental results.
Introduction
The question of "What the hadron is made of?" is a permanent question more than 50 years. This question was answered by Murray Gell-Man and George Zweig introducing the quark model for the first time. In this model hadrons are made of qq (meson) or qqq (baryons). However all the hadrons cannot be explained within the quark model and some complex structure such as glueballs, hybrids and molecules are needed.
Quantum Chromodynamics (QCD) is the theory of strong interactions which describe the nature and internal structure of hadrons. However, because of confinement problem of QCD in the low energy region, the perturbation theory does not work. Hence one needs nonperturbative methods such as Chiral Perturbation Theory [1][2][3][4], lattice QCD [5,6] and the QCD sum rules [7,8] to investigate the hadrons in the low and medium energy region.
The Chiral Perturbation Theory is one of the most powerful method to deal with hadrons, at low energy region. To extended the Chiral Perturbation Theory at medium energy region unitary extensions of chiral perturbation theory was introduced [9][10][11].
In order to investigate the three body systems one needs to solve the Faddeev Equations. The Fixed Center Approximation (FCA) was formulated to calculate the three body systems since the Faddeev Equations is quite long and complicated to deal with the three body systems. In this method a pair of particles bound together which is called a cluster and a third particle scatters from that cluster. The method has proved to be rather reliable for cases like light (scattering particle)-heavy (cluster) systems. Hence the cluster is not modified by the third particle and one can safely use the FCA to Faddeev Equations to calculate the three or many body systems. The limits on this model has been done in the work of [12,13] and the authors have explained φ(2170) meson andKNN system properly. The ∆ 5/2 + (2000) puzzle was investigated using the FCA to Faddeev Equations [14]. In this work the authors gave a plausible explanation of the ∆ 5/2 + (2000) puzzle. The three body NKK scattering amplitude was calculated by using the FCA to Faddeev Equations, taking theK(N) as a scattering particle and KN(KK) as a cluster [15]. A peak appeared in the modulus squared of the three body scattering amplitude with a mass at around 1920 MeV with spin-parity J p = 1/2 + . The authors suggested that the NKK three body systems is the N * (1920)(1/2 + ) state.
More recently this method is used for the charm and bottom mesons with total spin-3 successfully. The ρD * D * three body systems has been studied within the FCA to Faddeev Equations [16] and they made predictions on three body states for spin J = 3 case. Similarly, ρB * B * systems with total spin J = 3 has been investigated by using the Faddeev Equations under the FCA [17]. In this work, BB * system with J=2 forms a cluster and ρ meson scattering from that cluster. The two-body ρB * and ρB * scattering have been calculated in the Local Hidden Gauge Theory [18]. In [17] the authors found a I(J PC ) = 1(3 −− ) state with a mass at around 10987 ± 40 MeV and width 40 ± 15 MeV. Besides, ρKK and ηKK three body systems was investigated within FCA to Faddeev Equation [19,20]. In these works ρ(1700) and η(1475) mesons have been explained as ρKK and ηKK molecular states respectively.
In recent years, cc mesons has been of great interest to both theorists and experimentalists. For instance, the internal structure of the X(3872) [21], X(4140) [22], X(4260) [23][24][25] and X(4360) [26][27][28] mesons are not clarified using the quark model. These mesons have been investigated theoretically and experimentally and supported the existence of new types of hadronic states.
In the present paper, we study ρDD three body system solving the Faddeev Equations under the FCA. In order to calculate the amplitude of the ρDD three body system, we take DD and ρD as clusters and ρ andD mesons scattering from these clusters respectively. In the case of ρ scattering from X(3700), dynamically generated states of DD, we have only one state with total isospin I=1. But for the case ofD scattering from D 1 (2420), obtained dynamically ρD and its coupled channels, we have two states with I=0 and I=1 for the total three body system.
Formalism for ρDD Three Body System
We study ρDD three body system by using the FCA to Faddeev Equation. In this method we are taking two particles as a cluster and a third particle scattering from that cluster. In the case of the ρDD three body system, there is two options for the cluster, one is the DD and the other one is the ρD. The ρD was studied [30,31] in the open and hidden charm sector and DD in [32,33] within the framework of the chiral unitary approach. In order to calculate the three body scattering amplitude T by solving Faddeev Equations, one needs to calculate two partition functions T 1 and T 2 . The diagrammatic sketches are shown in Fig.1. T 1 (T 2 ) sums all diagrams of the series of Fig.1 which begin with the interaction of particle 3 with particle 1(2) of the cluster.
Then T can read a summation of the two partition functions T 1 and T 2 , Because we follow the normalization of Mandl and Shaw [34], we must determine the field normalization. We find for the single scattering, where k, k (k cls k cls ) refer to the momentum of initial, final scattering particle (cls for the cluster), ω(ω ) is the on-shell energy initial(final) particle and V is the volume of the box to normalize the external fields to unity.
The double-scattering diagram is given by where F cls (q) is the form factor of the cluster that we shall discuss below. The full S-matrix for scattering of particle 3 with the cluster will be given by As we can see the field normalization factors that appear in the amplitude of the different terms are different. However, if we combine Eqs. (2),(3),(4),(5) and approximate ω i = m i where i corresponds to particle 1-3, then we can introduce suitable factors in the elementary amplitudes, One can sum the partition functions T 1 and T 2 and obtain, where the function G 0 is the propagator of the particle 3 inside the cluster and given by The function F cls (q) in the above equation and also in Eq. (4) is the form factor of the cluster and given by with the normalization factor, Channels πD * Dρ πD * 1 1 Dρ 1 1 In present work we calculate ρ − DD andD − ρD three body scattering amplitude. Let us start from ρ − DD scattering. Here DD is the cluster and ρ is orbiting around that cluster. To investigate this we need the two body ρD and ρD scattering amplitude which is done in [30,31]. We are going to follow the same procedure. There are eight coupled channels πD * , Dρ, KD * s , D s K * ,ηD * , Dω, η c D * , DJ/ψ in I = 1/2, and two coupled channels, πD * and Dρ, in the I = 3/2 case.
We solve the Bethe-Salpeter equation to calculate the two body scattering amplitude with coupled channels unitary approach: where − → ( − → ) is the polarization of the incoming(outgoing) vectors mesons and V is the potentials and given byṼ The coefficients C I in an isospin bases are given in Table 1 and Table 2 for the I = 1/2 and the I = 3/2 respectively. In Table 1, γ = (m l /m H ) 2 , m l and m H are scales in the order of magnitude of the light and heavy vector mesons masses, respectively.
In the Bethe-Salpeter equations, G is given as below for the l channel, where α i is the subtraction constant, µ is a cutoff scale and M l and m l are masses of the vector and pseudoscalar mesons in the l channel, respectively. The other option in the three body ρDD scattering isD − ρD. In this case ρD is the cluster andD is orbiting around this cluster. Hence, we need two body DD and ρD scattering amplitude with coupled channels. There are six coupled channels DD, KK, ππ, ηη, η c η, D sDs and five coupled channels DD, KK, ππ, πη, η c π in the I = 0 and I = 1 case respectively. Next we solve the Bethe-Salpeter equation, where G is the loop function of pseudoscalar-pseudoscalar mesons which is given in Eq. (13) changing the vector meson masses (M l ) with the pseudoscalar meson masses (m l ) and removing the factor 1 + p 2 /3M 2 .
Results and Discussion
We calculate the modulus squared of the scattering amplitude of ρDD within the FCA to the Faddeev Equations. In Eq. (12), as it appears the meson decay constant that we take f π = 93 MeV for the light mesons, f D = 1.77 f π and f D s = 2.24 f π for the heavy mesons. As we stated before, in the loop function Eq. (13) there is the loop parameter µ. We set this parameter to µ = 1500 MeV both ρ − (DD) andD − (ρD) three body scatterings. In the same formula there is also subtraction constant (α i ) in the loop function which is only free parameter of the theory. To obtain D 1 (2420), the bound state of ρD with coupled channels, we use α i = −1.55 as in [31] and also X(3700), the bound state of DD with coupled channels with I = 0, we take α i = −1.3 as in Refs. [32,33]. In Fig. 2 we depict |T 2 | for ρ − (DD) X(3700) with total isospin I = 1. As we can see in this figure there is a clear peak at 4320 MeV with a width about 25 MeV. This state could correspond to the X(4360) with quantum numbers I G (J PC) =? ? (1 −− ) [35].
The modulus squared of theD(ρD) D 1 (2420) three body scattering amplitude with total isospin I=1 is shown Fig 3. There is a peak around 4256 MeV with about 25 − 30 MeV. This state could be associated with X(4260) with the quantum number I G (J PC) =? ? (1 −− ) with a width of 120 MeV [35]. Our numerical result is in good agreement for the mass but not for the width. In [29], the cross section for the process e + e − → ππ + Jψ is measured at center of mass energies from 3770 to 4600 MeV. They have observed two resonance states (see Fig.1 of Ref. [29]). The first one has a mass m = 4222.0±3.1(stat)±1.4(syst) MeV/c 2 and a width 44.1±4.3(stat)±2.0(syst) MeV, while the second one has a mass 4320.0 ± 10.4(stat) ± 7.0(syst) MeV/c 2 and a width of 101.4 +25.3 −19.7 ± 10.2 MeV. The first resonance near 4222 MeV/c 2 is associated to the X(4260). Hence newly BESIII result for the width of X(4260) is in good agreement with our result. In conclusion the results from BESIII Collaboration are in good agreement with our results. | 2,795.8 | 2017-01-01T00:00:00.000 | [
"Physics"
] |
Squid-Inspired Tandem Repeat Proteins: Functional Fibers and Films
Production of repetitive polypeptides that comprise one or more tandem copies of a single unit with distinct amorphous and ordered regions have been an interest for the last couple of decades. Their molecular structure provides a rich architecture that can micro-phase-separate to form periodic nanostructures (e.g., lamellar and cylindrical repeating phases) with enhanced physicochemical properties via directed or natural evolution that often exceed those of conventional synthetic polymers. Here, we review programmable design, structure, and properties of functional fibers and films from squid-inspired tandem repeat proteins, with applications in soft photonics and advanced textiles among others.
INTRODUCTION
Many globular and fibrous proteins have repetitions in their sequences or structures. However, a clear relationship between these repeats and their contribution to the physical properties in materials remains elusive. Exquisite knowledge of structure-property relationships in proteins will allow the design of materials with programmable properties that have novel functionalities. The scientific progress in this field is growing rapidly as we understand the effects of long-range order (i.e., the frequency and form of repetition) on macromolecular complexity. Here, we summarize recent studies on a specific class of tandem repeat proteins inspired by squid ring teeth as a model material system by combining expertise in nanoscale materials science, molecular biology, and protein physics.
Proteins based materials are composed of large biomolecules consisting of long chains of amino acids that fold and hierarchically assemble into complex and well-defined structures (Bechtle et al., 2010;Hu et al., 2012). The amino acid sequence of proteins can be precisely tuned since a defined sequence is genetically encoded in the DNA. This allows absolute control over stereochemistry, sequence, and chain length. Proteins are heteropolymers, which have exact molecular weight and assemble into complex hierarchical structures (defined by the sequence), whereas conventional homopolymers mainly form random coil conformations, and have statistical distributions of molecular weights and sequences. The precise control of the primary amino acid sequence regulates the assembly into the hierarchical structures, and ultimately governs the resulting physical, chemical, and biological properties of the material (e.g., mechanics, stability, activity, etc.) (Mann and Jensen, 2003;Jenkins et al., 2008). Additionally, proteins are naturally biocompatible with cell-interactive properties and tailored biodegradability, which makes them a material of interest for biomedical applications.
Naturally occurring proteins can be directly extracted from the native organisms. Due to lack of abundance or programmability requirements, recombinant expression in a variety of hosts has been the choice for the production of proteins. Over the past couple of decades, researchers have explored a wide range of expression systems for the high-yield production of proteins such as bacteria (Lewis, 2006;Xia et al., 2010;Heidebrecht and Scheibel, 2013), yeast (Fahnestock and Bedzyk, 1997;Cereghino et al., 2002), plants (Scheller et al., 2001), mammalian cell lines (Lazaris et al., 2002), and transgenic organisms (Tomita et al., 2003). Genetically modified Escherichia coli (E. coli) is the most established suitable host for industrial-scale production due to commercially available of expression vectors and well-understood genetics (Schmidt, 2004;Terpe, 2006;Heidebrecht and Scheibel, 2013). In addition, recombinant expression of engineered artificial genes allows for the biosynthesis of proteins with specified combinations of the 20 natural amino acids and a variety of unnatural amino acids (>100), expanding the possibilities of protein design (Link et al., 2003;Johnson et al., 2010).
REPETITIVE STRUCTURAL AND FIBROUS PROTEINS
Nature has evolved many functional materials across the animal and plant kingdom with hierarchical structures across the mesoscale and nanoscale that are built from protein building blocks. Many of the protein-based biological building blocks converged into a same family of structures despite evolving separately. Figure 1 summarizes the major structural elements found in repetitive protein polymers, namely coiled-coils, βsheets, and β-turns/spirals, which are briefly reviewed below.
Helical Coiled-Coil Proteins
Coiled-coils are bundles of α-helices that are twisted into a superhelix, and are usually found in nature in extracellular matrix proteins (Lupas et al., 1991;Lupas, 1996;Kohn et al., 1997). αhelix structures (first predicted by Pauling et al., 1951) consist of a helical arrangement of the protein backbone, typically with 3.6 amino acid residues per turn of the helix. Each α-helix is stabilized by hydrogen bonding between the backbone amino and carbonyl groups and those in the next turn of the helix, leaving the amino acid side chains in the outer shell of the helix (Voet and Voet, 2011). Coiled-coil structures are abundant in naturally occurring proteins such as collagen and keratin.
Keratin, on the other hand, forms helical filaments that can be found in epithelial and epidermal appendages such as hair, nails, horns, hooves, wool, and skin (Rouse and Van Dyke, 2010). Due to its high sulfur content (i.e., disulfide bonds crosslink the coils), keratin is highly insoluble and mechanically strong, contributing to waterproofing and strengthening of hair and epidermal tissues (Wang et al., 2016). α-keratins have a repeating hepta-peptide, α-[X 1 X 2 X 3 X 4 X 5 X 6 X 7 ] n , sequence that form right handed α-helices dimers (Wang et al., 2016). Within the repeat unit, the first, fourth, fifth, and seventh positions are located at the hydrophobic interface between two α-helices, while the second, third, and sixth positions are exposed to the outside environment. The first and fourth amino acids of the heptapeptide are non-polar (usually occupied by leucine, hence the name "leucine zippers") (Landschulz et al., 1988), and they form the hydrophobic plane along each helix and dominate the inter-helical hydrophobic interactions (Wang et al., 2016). The hydrophobic planes align between helices to form dimers, which are further stabilized by hydrogen bonding and crosslinking of cysteine residues via disulfide bonds (Fraser et al., 1976;Rouse and Van Dyke, 2010;Wang et al., 2016). Common heptapeptide units such as EVSALEK, KVSALKE, EIAALEK, KIAALKE, VAALEKE, and VAALKEK have been used as supramolecular cross-linkers in keratin-inspired coiledcoil protein-based materials (Wang et al., 2016). The hierarchical assembly of coiled-coil domains has been explored in the development of biomedical hydrogels. Since the aggregation of coils is driven by hydrophobic inter-helical interactions, a variety of stimuli can disrupt the association and trigger stimuli-responsive behaviors: temperature, ionic strength, pH, and denaturing buffers (Petka, 1998;Xu et al., 2005). In addition, the mechanical properties and association kinetics can be tailored by adjusting the amino acid composition of the heptapeptides (different side chains protruding from the helix) . Control of association and dissociation of coiled domains has led to shear thinning and self-healing protein-materials, which are used as injectable biomedical hydrogels (Ifkovits and Burdick, 2007;Wong et al., 2009;Olsen et al., 2010).
β-Turn/β-Spiral Elastic Proteins
Most elastic proteins are intrinsically disordered but contain a high fraction of β-turns and polyproline structures Shewry, 2000, 2002;Shewry et al., 2003;Roberts et al., 2015). β-turns are small secondary structures involving four amino acids that form intramolecular hydrogen bonding (Muiznieks and Keeley, 2010;Voet and Voet, 2011). Elastin, which is found in the extracellular matrix and connective tissue (especially in human skin), is composed of water-soluble monomers that aggregate into non-soluble constructs. It has a common hydrophobic domain VPGVG that exhibits a lower critical solution temperature (LCST). Above this temperature, the hydrophobic domains interact and aggregate into β-turn structures separating from the soluble phase (Urry and Parker, 2003). Additionally, elastin has lysine residues that, after posttranslational modification into allysine, chemically crosslink the hydrophobic domains yielding non-soluble stretchable elastin (Pinnell and Martin, 1968;Yeo et al., 2011). The ability to control and modify specific amino acid residues FIGURE 1 | Molecular architecture and repetitive sequences of fibrous protein polymers: (i) coiled-coils (e.g., collagen and keratin), (ii) β-turns/spirals (e.g. elastin and resilin), and (iii) β-sheets (e.g., silks and squid ring teeth).
along the backbone of elastin provides programmability of hydrophobicity and aggregation kinetics, yielding thermoresponsive elastic materials. Hence, elastin-like proteins (ELPs) that mostly derive from the VPGVG repeat have been used in drug delivery of pharmaceuticals, tissue engineering, biosensing, and protein purification (Simnick et al., 2007;Chow et al., 2008;Qi and Chilkoti, 2014).
Resilin is another elastic protein with high content of βturn and β-spiral structures. In nature, it is found in the wing hinge, jumping pads, and vocal cords of some insects (Kim et al., 2007;Qin et al., 2012). Their high frequency functions require very elastic and resilient materials (e.g., up to 95% resilience) (Qin et al., 2012). Resilin proteins have three main components that function cooperatively as an energy storage/release mechanism: (i) exon I, waterlubricated elastic domain, (ii) exon II, cross-linked to chitosan frame, and (iii) exon III, energy-storing component (Qin et al., 2012). Resilin has GGRPSDSYGAPGGGN hydrophilic repeats of glycine and proline providing chain flexibility, which are stabilized via dityrosine cross-linking (Tamburro et al., 2010;Qin et al., 2012). Resilin has been expressed recombinantly, and dityrosine cross-linking has been achieved through enzymatic chemistry and photo-cross-linking (Elvin et al., 2005). Synthetic resilin-like proteins were used in tissue engineering as degradable scaffolds with cell-binding domains (Li et al., 2011).
Flagelliform silk, which is the connecting lines of a spider web absorbing the energy of impacting prey, is an elastic protein that has high content of β-turns and β-spirals (Hayashi and Lewis, 1998). Ninety percent of flagelliform silk is composed of GPGGX motifs (common β-turn motif) that can be cross-linked via disulfide bonds through incorporating cysteine residues (Heim et al., 2010). β-Sheet-Structured Proteins β-sheet structures are formed by laterally-connected strands of peptides with hydrogen bonding interaction between the backbone carbonyl oxygen and the amino hydrogen atoms, and provide stability and mechanical strength through strong intermolecular interactions. Multiple β-strands are arranged into an extensive hydrogen-bonding network with their neighboring strands, forming crystal-like domains in the protein matrix. Silk is the most extensively studied β-sheet-structured fibrous protein. Spun by a variety of insects (including 45,000 different kinds of spiders), it serves as predatory and protective material, with tensile strength (i.e., ∼700 MPa and ∼1 GPa for Bombyx mori silkworm and Araneus diadematus spider silks respectively) and toughness (i.e., approaching 160 MJ m −3 depending on the silk type) surpassing those of highend synthetic polymers such as Kevlar (Altman et al., 2003;Vendrely and Scheibel, 2007;Hardy et al., 2008). Silkworm silk fibroin consists of heavy and light chain, which are bound through disulfide bridges and glycoproteins. The heavy chain consists of GAGAGS hydrophobic motifs that associate into stiff pleated β-sheets, while the hydrophilic light chain provides flexibility (van Hest and Tirrell, 2001;Kundu et al., 2013). Spider silk is composed of several types of proteins such as dragline silk (i.e., main frame of spider webs) and flagelliform silk (i.e., elastic connecting silk, which is rich in β-turns and spirals). Dragline silk, spun by the major ampullate gland, contains polyalanine and GA repeats that form pleated βsheets, and helical and turn domains that provide elasticity (Fu et al., 2009;Hardy and Scheibel, 2010). The hydrophobic interactions in the polyalanine domains drive the formation of β-sheets, and govern the semicrystalline morphology and mechanical properties of the material (Hayashi et al., 1999;Keten et al., 2010;Cetinkaya et al., 2011). Silkworm silk is obtained directly from silkworm cocoons, and it has been used historically in textiles and paper since 3000 B.C., and in the two last decades for biomedical applications such as wound dressings, drug delivery, tissue repair, biophotonics Kaplan, 2008, 2010;Pritchard and Kaplan, 2011;Kundu et al., 2013). Spider silk based repetitive proteins are also recombinantly expressed (Spiess et al., 2010) due to the fact that spiders are difficult to farm and direct expression of the full protein is difficult due to its large size (Xia et al., 2010).
Curli proteins are β-sheet-rich proteins that are found in amyloid fibers and in E. coli and Salmonella biofilms (Knowles and Buehler, 2011;Evans and Chapman, 2014). Amyloid fibers have recently received significant research efforts in order to understand their aggregation mechanism and their potential role in neurodegenerative diseases such as Huntington's, Parkinson's, and Alzheimer's diseases (Prusiner and Hsiao, 1994;Lednev et al., 2006). The core of amyloid fibers has S(X) 5 QXGXGNXA(X) 3 Q repeating motifs that aggregate into cross-β-sheet structures (i.e., β-sheet-turn-β-sheet) (Evans and Chapman, 2014). Synthetic curli proteins were used for the development of functional biofilms with site-specific binding, abiotic, and adhesive properties (Nguyen et al., 2014;Botyanszki et al., 2015). Frontiers in Chemistry | www.frontiersin.org A recently discovered structural protein from squid ring teeth presents opportunities for developing multifunctional films and coatings for adhesives, wound dressing, electronic devices, sensing, smart repairable textiles, abrasion-resistant microfibers, and other applications. We review them in the next section in detail.
SQUID RING TEETH BASED FIBERS AND FILMS
SRT are predatory appendages located inside the suction cups of squid species, used to strongly grasp prey (Williams, 1910). These teeth are composed of a naturally occurring protein complex (Nixon and Dilly, 1977) with mechanical properties in the range of 4-8 GPa (Miserez et al., 2009), and have recently gained attention in the biomimetics field due to their interesting structure and properties (Pena-Francesch et al., 2018a). SRT proteins can be extracted directly from squid suction cups, or can be also biosynthetically produced using heterologous expression in bacteria after genome sequencing (Figure 2A) (Pena-Francesch et al., 2014b, 2018a.
Design and Synthesis
Biosynthetic expression of SRT proteins presents several advantages over direct extraction from the natural source: , and (v) incorporation of functional polypeptide modules by de novo design of amino acid sequences . Direct extraction of natural SRT protein from squid tentacles is limited to availability and cost of natural sources. Global capture production in the major fisheries over the last decade is 2.2 annual million ton approximately (including all major squid species for human consumption) (Arkhipkin et al., 2015). One can make a rough estimation of the overall cost by considering a 0.5 kg average squid (for example, Loligo vulgaris) that can produce 100 mg of SRT (Roper et al., 1984). If SRT were extracted from all captured squids, this would yield approximately 220 ton of SRT annually. With an efficient and low cost system for extracting SRT (without contaminating the food processing so it could be later sold for human consumption), we could estimate a minimum extraction cost of $1 per squid and $10 per gram of SRT. If an efficient and low cost system for extracting SRT without damaging the rest of the animal were to be designed, the production cost could be approximated to a minimum of a $1 per squid ($0.7/squid and $0.3/squid for current collection and handling price, assuming that the whole squid could be sold for human consumption after the process without any additional cost). This would give an estimated minimum production cost of $10 per gram of SRT by means of direct extraction from the animal. Compared to the production cost of a high end polymer ($10/kg) and to the large production volume in the polymer industry (300 million tons produced per year globally) (PlasticsEurope, 2018), the volume and production costs of SRT by direct extraction are several orders of magnitude inferior and more costly. Therefore, large-scale production is necessary for economically feasible and sustainable protein-based bioplastic production for engineering and medical applications. Genetically modified Escherichia coli (E. coli) bacteria is the most established suitable host for industrial-scale protein expression due to the availability of expression vectors and well-understood genetics (Schmidt, 2004;Terpe, 2006;Heidebrecht and Scheibel, 2013). However, two major challenges remain for the production of high molecular weight repetitive proteins: the aggregation of proteins in inclusion bodies (limiting the yield), and an expensive infrastructure for scale-up (Landschulz et al., 1988). Currently, we are producing synthetic SRT protein in 80L fermenters with yields of ∼1 g/L, purity of >90%, and an estimated minimum cost of ∼$100/kg. We note that higher protein production yields (>10 g/L) and lower cost ∼$10/kg could be achieved by optimizing the expression process (Edlund et al., 2016).
SRT proteins have a segmented amino acid sequence with alternating crystalline and amorphous regions (i.e., reminiscent of block copolymers) (Sariola et al., 2015). The amorphous regions include flexible chains rich in glycine and tyrosine amino acids, while the crystalline regions (β-sheet nanocrystals) are formed by Ala-rich segments stabilized by hydrogen bonding, separated by proline residues (Guerette et al., 2013). Tandem repetition observed in SRT proteins results in a network morphology, where β-sheets act as physical cross-linkers, and provide mechanical strength for the polymeric material (Pena-Francesch et al., 2018c). In order to fully replicate the chemistry, structure, and properties of natural SRT proteins, a new design strategy for the expression of SRT-inspired polypeptides with precise control of the sequence, segment length, and molecular weight is required. Recombinant DNA technology has been successfully used in the synthesis of tandem repeats of naturally occurring peptides (Kempe et al., 1985;Lee et al., 2002;Rao et al., 2005;Hou et al., 2007;Wang and Cai, 2007). However, current methods for DNA polymerization have major limitations: (i) they require multiple sequential steps, (ii) they cannot be run in parallel, and (iii) they do not offer precise tunable control over a range of molecular weights (Amiram et al., 2011). The synthesis of high molecular weight repetitive sequences is complicated due to genetic instability (Meyer and Chilkoti, 2002;Tang and Chilkoti, 2016), and researchers often opt for protein cross-linking from a tandem repeat monomer (which introduces defects in the protein structure such as cyclic chains) (Dimarco and Heilshorn, 2012;Li et al., 2015;Yang et al., 2017). Overcoming these limitations, rolling circle amplification (RCA) offers a one-step method to synthetize repetitive proteins from a DNA monomer with precise control over the number of repeats (Amiram et al., 2011). Recently, our team used protected digestion rolling cirle amplification (PD-RCA) to synthetize a library of squid ring teeth-tandem repeat (SRT-TR) proteins with controlled number of repeat units (Figure 2B), which is summarized here . A DNA sequence encoding for a SRT-inspired "monomer" was constructed based on consensus sequences derived by inspection of the native SRT proteins of several squid species: Loligo vulgaris, Loligo pealei, Todarodes pacificus, Euprymna scolopes, Dosidicus gigas, Sepioteuthis lessoniana, and Sepia esculenta (Guerette et al., 2013(Guerette et al., , 2014Jung et al., 2016). A representative sequence consisted of a crystal-forming segment of PAAASVSTVHHP, and a disordered segment of STGTLSYGYGGLYGGLYGGLGYG was selected to create tandem repeat proteins inspired by squid ring teeth proteins. The "monomer" DNA construct was digested and circularized (i). The circularized "monomer" DNA is used as template as polymerase rolls around it, forming random RCA products (linear oligomers) (ii). The RCA products are digested, yielding a library of TR products comprised of an integer number of repeats of the TR "monomer" gene (iii). The TR products are separated by size via electrophoresis, and specific TR DNA oligomers can be selected by direct extraction from the electrophoresis matrix (iv). The selected DNA oligomers are then ligated to an expression vector to create an expression library for TR protein synthesis (v). Hence, this method can generate protein libraries comprising TR polypeptides (with the same building block sequence) with a specified number of repeat units.
Physicochemical Properties
The physical and chemical properties of SRT proteins are governed by: (i) the amino acid composition, (ii) the secondary structure content (e.g., random coils, α-helices, β-sheets, etc.), and (iii) the overall network morphology. For example, SRT proteins contain 11% histidine amino acids (pKa 6.0) which regulate the protein charge as function of pH (i.e., positive at low pH, neutral at pH 7, and negative at high pH) (Pena-Francesch et al., 2014a). Furthermore, histidine residues contribute to proton conductivity in SRT proteins, as recently demonstrated in self-healing highly proton conductive protein films (Pena-Francesch et al., 2018b). The secondary structure of SRT film also has a strong impact on the material properties. Ordered domains like β-sheet structures provide mechanical strength (e.g., SRT and silk fibroin are β-sheet-rich structural proteins with modulus in the GPa range), while disordered domains provide elasticity and flexibility to the material (e.g., similar to disordered resilin in the wing tendon of insects) (Cheng et al., 2010;Guerette et al., 2013;Yarger et al., 2018). The secondary structure content does not only influence the mechanical properties of SRT-based materials, but also their thermal , conducting (Pena-Francesch et al., 2018b), and optical properties (Yilmaz et al., 2016(Yilmaz et al., , 2017. The morphology of disordered domains also plays an important role in defining the bulk properties of protein-based SRT materials. From this perspective, SRT is considered as a network protein gel. The disordered amorphous strands can adopt different arrangements including tie-chain conformations (i.e., connecting two neighboring βsheet nanocrystals) or defective conformations such as dangling ends and loops (i.e., considered as topological defects). Therefore, by tuning the number of tandem repeats in SRT film, it can exhibit network morphologies ranging from a perfect network (rich in connecting tie-chains) to a defective network (rich in topological defects) (Pena-Francesch et al., 2018b,c). Effective strands contribute to stress bearing and transport throughout the bulk material, and their density should be maximized in order to improve the material properties. In SRT and other tandem repetition proteins, the effective strand density scales with reciprocal molecular weight, and consequently the mechanical and transport properties (thermal conductivity, proton conductivity) can be optimized by adjusting the molecular weight and tandem repetition (Figure 3) ; -Francesch et al., 2018b,c;Tomko et al., 2018). Therefore, SRT proteins offer programmable properties through the fine control of the amino acid sequence, nanostructure, and network morphology, which can be all encoded in the DNA sequence of SRT-inspired synthetic polypeptides (Pena-Francesch et al., 2014b, 2018aJung et al., 2016;Tomko et al., 2018).
Fabrication and Processing
Due to the reversible and non-covalent nature of the crosslinking mechanism, SRT proteins are available for fabrication and processing methods that are common in the polymer industry. Solution-based processing of SRT, for example, consists in disrupting the hydrogen bonding in the β-sheet structures, and solubilizing the protein for posterior solvent casting (Figure 4a). Acidic/basic aqueous solutions, salts, surfactants, and organic solvents are typically used to accelerate the disruption of βsheets and increase the protein solubility (solvent residues can be easily washed away from the final product after evaporation) (Pena-Francesch et al., 2018a). On the other hand, thermoplastic processing of SRT proteins is also possible by heating the protein material above its glass transition temperature (Figures 4b,c; Pena-Francesch et al., 2014b). The glass transition temperature of SRT can be tailored to the desired processing conditions by optimization of the nanostructure and use of plasticizers, opening up processing capabilities traditionally restricted to synthetic materials (extrusion, injection, lamination, etc.) (Pena-Francesch et al., 2018a). Using solution-and thermal-based methods, SRT proteins have been processed into numerous complex materials at the nano-, micro-, and macroscale. Transparent and flexible free-standing SRT films (Figure 4d) were fabricated by drop casting (Pena-Francesch et al., 2014b, 2018aJung et al., 2016), and used as substrate, membrane, or support material for multiple applications including but not limited to bioadhesive pads (Pena-Francesch et al., 2014a), fully biodegradable sensors (Yilmaz et al., 2016(Yilmaz et al., , 2017, and stretchable proton conductors (Pena-Francesch et al., 2018b). SRT proteins were processed into complex 3D geometries (Figure 4e) by combining solution-and thermal-based techniques (e.g., micro-/nanomolding, nanowetting) (Guerette et al., 2013;Pena-Francesch et al., 2014b, 2018aYilmaz et al., 2017). The processing versatility of SRT proteins has allowed the design and fabrication of bioinspired devices and materials, such as insectinspired wings for flapping wing micro air vehicles (FWMAVs) (Figure 4f). Insects have superior flight maneuverability in close quarters than other flying animals mainly due to the material and structural properties of their wings (Ansari et al., 2006). Insect wings are generally composed of a stiff chitinbased venation structure embedded within a protein membrane, which provide mechanical support and flexibility to the wing (Combes, 2010). A protein-based artificial wing inspired in the hawkmoth Manduca sexta was fabricated using SRT proteins (Michaels et al., 2015), demonstrating the potential of SRT proteins in replicating natural systems with biological materials while maintaining the physical and chemical properties of the proteins. Arrays of high-quality optical cavities (such as microresonators for photonic devices and biosensors) have been integrated in flexible protein films via soft lithography and protein molding techniques (Figure 4g; Yilmaz et al., 2016Yilmaz et al., , 2017. SRT nanostructured films, such as nanograss, have been fabricated using template-based nanowetting and capillary micromolding, producing high aspect ratio nanofiber arrays (Figure 4h) that replicate textured surfaces found in nature (lotus leaf, gecko's footpad, butterfly wings, etc.) (Guerette et al., 2013;Pena-Francesch et al., 2014b). SRT-based free-standing thin films have been explored as separation membranes. Thin membranes to remove molecular contaminants in water treatment processes have gained recent attention in the research community due to the growing global problem of water pollution (Shannon et al., 2008;Vandezande et al., 2008). A diversity of new materials and fabrication techniques have been investigated to develop efficient membranes, including nanomaterials biopolymers (cellulose, silk, amyloids) (Bolisetty and Mezzenga, 2016;Ling et al., 2016Ling et al., , 2017Zhang et al., 2016). However, developing thin, mechanically strong membranes with tunable barrier properties and good separation performance remains a challenge. SRTmaterials hold promise in this field due to their mechanical strength, flexibility, tunable nanostructure, and self-healing properties (Pena-Francesch et al., 2014b;Sariola et al., 2015). SRT membranes show a good performance under low flux conditions, with 100% rejection of Rhodamine B dye (Figure 4i; Barbu, 2016).
SRT BASED FILMS IN TEXTILE APPLICATIONS
Since the dawn of civilization, natural fibers (e.g., wool, cotton, sisal, ramie, silk) were used in textiles. However, due to increased demand and cost issues, synthetic fibers made of polyester, nylon, and others replaced natural alternatives. Recently bio-derived or biosynthetically produced fibers received significant interest due to sustainability and environmental reasons. Although the environmental and health regulations in textile industry have been a driving force for the sustainability movement, novel properties discovered in biosynthetic fibers are also increasing the momentum of this initiative. In this respect, SRT proteins hold great promise to provide a broad range of solutions for the textile industry because of its programmable properties, biodegradability, and easy processing, such as self-healing recyclable fabrics, natural sewing free adhesive, smart garments for health monitoring, and new strategies for the reduction of environmental pollution and health impact.
Abrasion-Resistant Coatings for Microfibers
Microplastics (small plastic particles <5 mm in size) are environmental pollutants that are found in freshwater (Dris et al., 2015;Eerkes-Medrano et al., 2015), marine (Cole et al., 2011;Galloway and Lewis, 2016;Gago et al., 2018), and terrestrial environments (Rillig, 2012). Once released in the environment, microplastics are ingested by local organisms (Watts et al., 2015(Watts et al., , 2016Sussarellu et al., 2016), resulting in the intake of toxic chemicals that have a negative impact on marine life and can enter the human water and food supply (Mathalon and Hill, 2014; Yang et al., 2015;Koelmans et al., 2016;Wardrop et al., 2016). Microplastics are originated from primary sources such as microbeads in cosmetics or secondary sources such as the breakdown of larger plastic debris. Synthetic microfibers are generated from washing cycles of common garments (e.g., submillimeter size of polyester, acrylic, and nylon fibers), and are then discharged to sewers or surface waters (Hartline et al., 2016). Wastewater treatment plants cannot completely filter microfibers due to their small size, and their efficient removal from effluents represents a major technological challenge in the protection of the environment (Murphy et al., 2016). Recent research has focused not only in improving filtration efficiency and removal of microplastic pollutants from the environment, but also in preventing their generation and release in the first place. With this problem in mind, SRT protein fibers and films have been explored as a potential solution for minimizing microplastic pollution. In Figure 5, a microfiber cloth (87% polyester, 13% polyamide) is coated with SRT protein film, and the microfiber resistance to mechanical damage (i.e., abrasion) is tested. The protein coating was examined by Fourier transform infrared spectroscopy (FTIR), revealing a successful homogeneous coating as shown in Figure 5a. The measured spectrum is consistent with those of previously reported polyester/polyamide fibers (Marjo et al., 2017). Electron microscopy showed the microstructure of the coated and non-coated microfiber cloths are significantly different in their morphologies after wear and tear test. Non-coated cloths showed bundles of microfibers homogeneously distributed over the cloth surface (Figure 5b). However, after the abrasion test, the bundles are frayed and damaged, and individual microfibers are broken and released (Figure 5c). SRT-coated microfibers are also arranged in bundles (Figure 5d), similarly to noncoated fibers. However, SRT-coated fibers do not break after the abrasion test and are not detached from the cloth (Figure 5e). Interestingly, the microfibers align in the direction of the force applied during the abrasion test. These findings suggest that SRT coatings provide mechanical stability to microfibers, could potentially prevent release of microfibers to the environment after mechanical abrasion.
Self-Healing SRT Films
Smart textiles that are capable of autonomous self-healing represent an increasingly important class of advanced materials for substrates prone to damage, such as biomedical implants or garments tailored for protection against chemical and biological warfare agents (Lee et al., 2003;Singh et al., 2004). Because of their biocompatibility and self-healing properties, SRT films are good candidates for developing such advanced textiles. A broad variety of textiles can be easily coated with SRT proteins by dip coating, including (Leberfinger et al., 2018). (d) Multilayer biomolecule encapsulation (enzymes), providing built-in detection and protection against hazardous agents. Self-healing and repairable (e) fabrics and (f) single fiber maintain the activity of encapsulated biomolecules. Reproduced with permission (Gaddes et al., 2016). Copyright 2016 American Chemical Society.
woven, non-woven, and single fibers (Figure 6a). SRT proteins homogeneously coat the fibers with controllable thickness by adjusting the coating process (i.e., solvent, protein concentration, viscosity, and drying) as shown in Figures 6b,c. Moreover, SRT coatings allow for multilayer biomolecule encapsulation such as enzymes, enabling smart textile applications in biosensing, drug delivery, and chemical/biological warfare protection by enzymatic neutralization. Urease enzymes were used as model enzyme in these studies and were successfully encapsulated in SRT-coated textiles using layer-by-layer deposition, providing built-in multifunctionality (Figure 6d; Gaddes et al., 2016). Stable enzyme-doped SRT films were reproducibly deposited on textile substrates to form a composite that resists dry cracking, repairs macroscopic textile tears in the presence of water, and maintains urease enzyme activity (Figure 6e). Multilayer enzyme/SRT self-healing coatings can be applied not only to textiles but also to single fibers and threads (Figure 6f), which can be later woven into multifunctional smart garments with multiple advanced fibers.
SRT PROTEINS FOR SOFT PHOTONICS
Multifunctionality in nanostructured SRT films is not limited to passive surfaces, but also includes the incorporation of active sensing capabilities. Photonic devices are typically manufactured with conventional hard materials (silica, silicon, silicon nitride, glass, and quartz) by using standard lithography techniques (Armani et al., 2003). However, these materials are not suitable for applications that require soft, flexible, biocompatible, and biodegradable photonic devices and structures such as in vivo biosensing and biodetection. Optical wave-guiding capabilities of flexible protein-based fibers were demonstrated by coupling light from silica fiber taper to SRT fibers (Figure 7a), opening up new functionalities of SRT as soft photonics platforms. All-SRT photonic platforms have been fabricated by integrating whispering-gallery-mode (WGM) microresonators in flexible protein films (Figure 7b). SRT proteins proved an excellent soft material for WGM biophotonic platforms, with quality factors as high as 10 5 , and two orders of magnitude larger thermooptic coefficient than silica (Figure 7c; Yilmaz et al., 2017). Furthermore, the resonance wavelength and quality factor of SRT WGM flexible resonators remained unaffected when the substrate film was bent, making SRT-based microresonators an attractive platform for biologically integrated sensing (Figure 7d). To exploit these promising optical properties, we designed and fabricated SRT-based soft photonic devices. We fabricated add-drop filters (optical communication architectures) by coupling two separate fiber-taper waveguides to protein WGM resonators (Figure 7e; Yilmaz et al., 2017). Non-resonant light passes through the input waveguide to the transmission port. The filter performed with an efficiency of 51% (Figure 7f), which can be increased by improving the waveguide-resonator coupling and decreasing scattering losses. Furthermore, photonic on/off switches were fabricated from SRT protein (Figure 7g; Yilmaz et al., 2017). The transmission of a signal field through a waveguide-coupled microresonator was switched between on and off states by a control field via the thermal response of SRT proteins (hence, exploiting the strong thermo-optic coefficient of the proteinaceous material) (Figure 7h). All-SRT switches achieved an isolation of 41dB at a control field power of 1.44 µW (circulating power 0.129 mW (Figure 7i). Compared to an all-silica switch (25 dB isolation, 16.43 µW control power, 219.76 mW circulating power), SRT-based switches are 14x more energy efficient than their "hard" equivalents (due to their strong thermo-optic coefficient and negative thermal expansion). Therefore, protein-based soft, flexible, biodegradable photonic FIGURE 8 | Protein-based 2d-layered nanocomposites. (a) Inkjet printing self-assembly of (b) nanostructured, (c) flexible MXene-SRT protein electrodes and their (d) electrical response to environmental humidity. Reproduced with permission . Copyright 2018 Wiley. (e) Vacuum assisted self-assembly (VASA) of (f) graphene oxide (GO) and SRT proteins for the fabrication of (g) programmable bimorph thermal actuators with (h) high efficiency. Reproduced with permission (Vural et al., 2017). Copyright 2017 Elsevier. devices are attractive for low power consumption applications such as biosensing.
TANDEM REPEAT PROTEIN BASED 2D LAYERED NANOCOMPOSITES
Tandem repeat proteins play an important role for creating composite structures in nature such as nacre. Similarly, these proteins are recently used for dynamic assembly of 2d layered structures as shown in Figure 8 . For example, nanocomposite films of SRT proteins with 2d-layered MXene structures were demonstrated in stimuli responsive flexible electronic films via inkjet printing selfassembly (Figure 8a; Vural et al., 2018). MXenes are conductive materials that have the general formula of M n+1 X n T n (M is an early transition metal, X is carbon or nitrogen, T x stands for surface functional groups [-F, -O, -OH] and n = 1-3). Their high electrical conductivity and electromagnetic interference shielding efficiency (EMI SE) can be harnessed as printed electrodes. Tandem repeat proteins inspired by SRT played a significant role as promising binders between MXene 2d layers via hydrogen bonding with surface termination groups of Mxenes (-F, -O, -OH) as well as a stabilizer for printable conductive inks. Inkjet printed electrodes of SRT-MXene exhibits superior electrical conductivity values as high as 1080 S/cm on flexible polyethylene terephthalate (PET) substrate, which is meaningfully higher than other two-dimensional materials such as graphene (250 S/cm) and reduced graphene oxide (340 S/cm). These electrodes demonstrated stimuli responsive (e.g., humidity) metal-insulator transitions through percolation of conductive layer. The electrodes of Ti 3 C 2 T x -SRT exhibit on/off respond to humidity change which is desirable for humidity sensors. Moreover, electromagnetic interference (EMI) shielding ability of printed electrodes was also demonstrated. Printed electrodes with protein concentration of 0.95 mg/ml inks show EMI SE values as high as 50 dB for an electrode thickness of 1.35 µm between 8 and 12 GHz at ambient humidity (60% RH). As another example, vacuum assisted self-assembly (VASA) of highly ordered 2D composites based on graphene oxide (GO) attracted interest in applications that require high mechanical strength as well as increased thermal conductivity (Figure 8b; Vural et al., 2017). A wide spectrum of tandem repeat proteins has been utilized to fabricate GO-protein composites such as elastin-like protein, nacre-like gelatin, silk fibroin and SRT. Additionally, by manipulating the interlayer distance of GO 2d layered composites, bimorph thermal actuators have been fabricated combining the high thermal conductivity of GO (300 W/mK) and the high thermal expansion coefficient of SRT proteins (−95 × 10 −6 K −1 ). GO sheets are responsible for homogenous heat dispersion, whereas the tandem repeat proteins trigger thermal expansion. Compared to regular GO actuators, protein-GO 2d layered composites showed 1800x higher performance enhancement of thermal actuation. In summary, assembly and control of 2d-layered/protein composites could find applications in next-generation, programmable, flexible, optically and electrically superior, energy efficient and mechanically strong materials and devices such as 2d heterostructures for topological electronics, mottronics, photonics, and spintronics including but not limited to: engineering spectrum of physical properties such as direct bandgap, strong spinorbit coupling, optical non-linearities, and photoconductance, extraordinary electronic and optical properties such as thin-film photodetectors, logic memory devices, transistors, photovoltaics, and supercapacitors.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
FUNDING
MD and AP-F were partially supported by the Army Research Office (grant no. W911NF-16-1-0019 and W911NF-18-1-026), Air Force Office of Scientific Research (grant no. FA9550-18-1-0235), and the Materials Research Institute of Pennsylvania State University as well as Lloyd and Dorothy Foehr Huck Endowment in Biomimetic Materials. | 8,446.2 | 2019-02-21T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Problems and outlook for marketable wheat grain production
In recent years, Russia has made considerable advances in the production and export of grain. Grain production, primarily wheat, is the driver of the country’s economic growth. At the same time, wheat grain has a multipurpose use, it is a valuable multifunctional primary product for obtaining a wide range of production with high added value, consumed by many economic sectors. A paradoxical situation has developed: Russia is the world’s largest exporter of grain – primary product, and is acutely dependent on imported products of its processing. Therefore, the outlook for the wheat grain production development is associated with the expansion of production of wheat grain processing products, with its advanced processing and export of products with high added value. The development of advanced grain processing will increase the profitability as of agribusiness, the processing industry in general, as of exporters.
Introduction
In recent years, Russia has made impressive progress in the production and export of grain. For the fifth year in a row, our country becomes a world leader in exports. Wheat grain is the basis commodity of world trade, the key agricultural crop in Russia, mainly exported abroad. The wheat planted area in 2020 amounted to 36.8% of the total planting acreage and 61.4% of the total acreage under cereal and pulse crops, the share of wheat in grain exports is about 80%. From 1990 to 2020 grain exports significantly increased -24.4 times, and from 2015 to 2020 wheat grain exports almost doubled. However, wheat grain is not only the basis commodity of world trade and Russian export, at the same time it is a valuable primary product, from which a wide range of production with high added value is obtained. «Nowadays trading in just grain itself is equal to trading in oil. It is necessary to develop more economically profitable processing industry.»emphasized the Deputy Chairman of the Government of the Russian Federation A.V. Gordeev at the XX International Grain Round in June 2019.
The purpose of the study is to identify problems and determine the development outlook for marketable wheat grain production and use.
Materials and methods
The grain complex of Russia is a complex system for research. Therefore, it is advantageous to use a systemsynergetic approach. It proposes to study the grain complex in dynamics along the entire value chain (from grain production to export).
Results and Discussion
The main indicators, characterizing the development of the grain complex of Russia, are presented in the table. In general, it has a positive trend. From 1990 to 2020, the gross harvest of grain in general and of wheat increased by 14.3% and 73.2%, respectively. The grain complex developed most intensively from 2015 to 2020. During this period, the compound feed industry increased by 24.2%, the wheat gluten productionby 34.2%, and the export of wheat grain increased by 80.7%. At that, the share of wheat in grain exports is about 80% in recent years.
Wheat grain is a primary product for multipurpose use, but in recent years, an emphasis is placed on the development of grain export at the expense of its processing and the development of its domestic consumption. So, from 1990 to 2020, the volume of grain processed into flour, groats, compound feed and other products reduced by almost 57%, which is 40.9 million tonnes, that is, by an amount almost equal to the volume of grain exports. At that, the compound feed industry is growing, but the demand for it is not so high, as the cattle population in 2019 reached its historical low of 18.1 million heads (in the most difficult year of the 2 nd World War -1942, the cattle population was 18.8 million heads). From 2015 to 2019, the production of wheat and wheatrye flour decreased by 5.5%. During the same period, flour exports had a small volume despite an increase of 17.3%, especially in comparison with the Turkish export of flour, to a considerable extent produced from Russian grain. [3] Gross harvest, mln t/%: -cereal, pulse cropstotal, incl. Summarizing the results of analysis of the Russian grain complex development we can single out its global problem, which is the fact that it produces and exports products with low added value instead of processing these products and selling them with high added value. [4,5] Wheat grain is a primary product for multipurpose use and is widely used in the national economy due to its nutritional value and rich chemical composition. Wheat grain contains proteins, fats, carbohydrates. Wheat contains up to 20% of protein, about 60% of carbohydrates, 1.5% of fats, essential oil, hemicellulose, fiber, starch, pectin, glucose, fructose, lactose, maltose, raffinose, vitamins -Е, F, В1, В2, В6, С, РР, carotin, niacin, choline, biotin, folacin. There are also such macroand microelements as potassium, calcium, silicon, magnesium, sodium, sulphur, phosphorus, chlorine, aluminium, boron, vanadium, iron, iodine, cobalt, manganese, copper, molybdenum, nickel, tin, selenium, silver, strontium, titanium, chromium, zinc, zirconium in wheat. Wheat contains 3.4% of essential amino acids.
Wheat is used in many areas and industries, from the food sector to pharmacy and cosmetology. The tremendous value of wheat is determined by the fact that the products obtained from it serve as the main food products of the population. The food products obtained from it, in comparison with others the most demanded and cheap ones, are the products of mass and everyday consumption.
First of all, it is the main bakery crop. Baker's flour of 6 grades is obtained from wheat grainthey are extra grade, top grade, fine wheat flour, first grade, second grade, wholemeal flour, which are traditionally used for the production of bread. Wheat bread is distinguished by its palatability and is superior to bread made from other cereals flour in its nutritional value. Wheat used for flour production should contain a sufficient amount of protein (10-20%). For the production of high-quality bread such important indicators of wheat grain baking capacity as gluten quantity (at least 23-25%) and quality (at least I good quality group -43.0-77.0 units of IDK) should be taken into account. The demand for wheat for the baker's flour production in the Russian Federation is 15 million tonnes per year. [6] In addition, wheat flour is traditionally used for the confectionery, pasta, culinary products and flour national goods production. At present, the range of wheat flour products has expanded, for the production of which flour with certain properties that differ from the ones of the baker's flour is required.
For the pasta production, durum wheat is used, but also there high vitreous bread wheat (Triticum aestivum) can be used. Pasta made from Triticum aestivum is also popular in Russia and finds a market. Thus, for example, in Italy, which is the world trendsetter in the production of durum wheat and pasta, the latter can also be produced from Triticum aestivum, but such a product cannot be called "pasta".
Instant porridges are made from specially processed wheat flour, and high-protein breakfast flakesfrom wheat gluten.
Along with the main product of the milling industry, by-products are obtainedbran, wheat germ. Wheat bran is used as an additive in the production of bakery and flour confectionery goods, as well as like a biologically active additive to human food, which has recently gained high demand. Wheat bran is, first of all, an excellent source of fiber, a large amount of dietary fiber, which is very useful and important for digestion. Wheat germ has a high digestibility and biological value. It is the certain leader among natural sources of vitamin E and B vitamins and has excellent organoleptic characteristics (odorless, with a sweetish flavor, golden-yellow). [7] Several types of groats are produced from wheat grains: semolina, wheat groats (Poltavskaya, "Artek"), burghul, couscous; they are used for cooking porridges, garnishes and also baked puddings, cheese pancakes, puddings, mousses, pilau, added to soups and sauces.
The use of grain wheat is not limited to the food sector. Wheat is widely used for fodder purposes and the compound feed production. According to experts, the share of wheat used for those purposes is up to 50 %. Wheat grain used for fodder purposes and the compound feed production should contain a sufficient amount of soluble protein and a reduced content of gluten (insoluble protein).
Wheat grain is the basic primary product in the distillery industry, from which beer, alcohol and vodka are produced.
Wheat is a crop with extraordinary useful properties, the application of which has opened up many possibilities for pharmacy and cosmetology. For cosmetic purposes, wheat germ oil is used, as it contains a rich mineralantioxidant complex. Wheat bran and crushed grain are included in many eco-scrubs and peels for sensitive and problem skin.
Earlier, monosodium glutamate enhancing the taste of food was obtained from wheat protein, now this substance is obtained mainly by chemical means. Considering that at present the population's demand for food products containing natural plant components is increasing, it is necessary to bring back the production of monosodium glutamate from wheat protein.
Products obtained from wheat grain have various consumer properties. For their production, wheat with different grain quality indicators is needed, therefore, the effective and rational use of wheat grain can be ensured by combining all stages of production and processing into a single manufacturing chain, in which the requirements for the final product should be established at the grain growing stage.
To this end, the All-Russian Scientific and Research Institute for Grain and Products of its Processing (VNIIZ) develops target standards for grain and its processed products. Currently, interstate standards are in force -GOST 26574-2017 «Wheat bakery flour. Specifications», which establishes the requirements for wheat flour for baking, and GOST 34702-2020 «Wheat bakery. Specifications», which determines the requirements for bakery wheat. VNIIZ conducts scientific researches on the development of requirements for grain and flour for its intended purpose. The lack of requirements for flour for the intended purpose leads to the lack of relevant requirements for grain, which has already led to the absence of confectionery wheat (soft white wheat) growing. [8,9] Based on the multifunctionality of wheat grain as a primary product, as well as on the understanding that the added value from its processing should remain on the territory of Russia, the economic efficiency of not only traditional processing, but, moreover, advanced processing of wheat grain becomes obvious.
As a result of advanced processing of wheat grain, a wide range of products with high added value can be obtained which are native and modified starches, gluten, glucose-fructose syrups (HFSS), starch syrups, organic acids, food-grade alcohol, biofuel and other products. At further processing of the obtained starch, a wide range of various derivatives and biobased products is produced, the application of which is practically unlimitedfrom the food industry to the petrochemical one. [10] A paradoxical situation has developed: Russia is the world's largest exporter of grain, and is acutely dependent on imported products of its processing (considerable amounts of amino acids and vitamins are almost 100% imported from abroad). The necessity for advanced processing of grain is determined by the «Long-term strategy for the development of the grain complex of the Russian Federation until 2035».
The Strategy is based on the system approach to increasing wheat grain production and improving its quality. In terms of the strategic outlook, the system of measures involves [6]: stimulation of the production of Triticum durum and Triticum aestivum with high quality indicators; the promotion of positions of Russian wheat on the world market will be greatly determined by its quality. This will require a rational combination of using natural competitive advantages, innovative and investment factors, in particular, the creation of specialized zones for the production of high-quality wheat varieties and the formation of grain clusters in the regions; increasing the yield and quality of grain by maintaining a healthy phytosanitary environment; providing with agricultural machinery in order to optimize the timing of agritechnological activities (by 2035, 1 tractor will be charged for 156 ha of arable land and 1 combine harvesterfor 278 ha of cultivated area); increasing domestic consumption through the processing of wheat grain, including the creation of advanced grain processing capacities. The Strategy proposes to channel investments in the amount of 150 billion rubles for these purposes.
Conclusion
In general, the studies conducted allow us to draw the following conclusions: 1.
Wheat grain is a very valuable primary product, from which a wide range of products with high added value is obtained.
2.
There are certain problems in the development of the grain complex of Russia, including wheat. In recent years, an emphasis is placed on the development of grain export at the expense of its processing and the development of its domestic consumption. Therefore, there is an acute dependence on imports of certain wheat processing products. 3.
The outlook for the wheat grain production is associated with the expansion of not only traditional, but also advanced processing of wheat grain. | 3,010 | 2021-01-01T00:00:00.000 | [
"Economics",
"Agricultural And Food Sciences"
] |
Physical and Antimicrobial Properties of Hydroxypropyl Starch Bio-plastics Incorporated with Nyamplung (Calophyllum inophyllum) Cake Extract as an Eco-Friendly Food Packaging
Nyamplung (Calophyllum inophyllum) cake as a by-product of nyamplung oil production is still limited. This research aimed to evaluate characteristics of antimicrobial bio-plastic made from hydroxypropyl starch as a basic ingredient and Nyamplung cake extract as additive. Nyamplung cake extract addition affected bio-plastic mechanical property by reduction of tensile strength but improved physical properties by reduction of vapor and oxygen permeability, water solubility, and increased elongation. This was probably due to the extract serve as natural crosslinking. Fourier Transform Infrared Spectroscopy analysis showed no difference in five bio-plastic samples, which probably caused by low concentration of extract. Thermogravimetry analysis showed the highest weight reduction in control of 95.824% and the lowest on Ext2% of 84.471%. Morphology analysis showed agglomeration of the extract on sample surface due to uneven ingredient distribution in mixture. Bio-plastic was more sensitive against gram positive bacteria than gram negative with their respective largest inhibition zone of 30 mm (Staphylococcus aureus) and 23 mm (Escherichia coli). This was probably due to the content of the extract serve as natural crosslinking and antibacterial agent.
Introduction
The main function of packaging is to isolate food from the environment to minimize food defects during distribution by reducing air pressure, humidity, gas, odor, and mechanical strength. Packaging avoids the exposure of food to degradation agents and also prevents food from microorganism contamination [1,2]. In the last few decades, a large number of antimicrobial food packaging products have been developed, which able to control microbial growth and effectively extend food shelf life up to 2 weeks or more.
Antimicrobial agents in packaging materials are used to provide security assurance, to extend shelf life and food quality. Antimicrobial packaging can inhibit food decay and suppress pathogenic microbes in foods [2,3]. Biodegradable materials derived from renewable materials, such as those from polysaccharides, proteins, and lipids, have much more attention due to their potential to replace conventional plastics [4,5].
Nowadays, there is an increase in consumers demand on environmentally friendly food packaging from natural sources without preservatives, while food processing industries also want to increase shelf life and product safety. Starch is one of the common ingredients used to produce environmentally friendly packaging. It consisting of the straight molecule (amylose) that form a gel when heated and branched molecules (amylopectin) [6].
In this study, hydroxypropyl starch (HPS) was used as the main material to overcome the weakness of native starch because the hydroxyl group, which is sensitive to air, is the main obstacle for the application of starchbased food packaging materials [7]. HPS (C13H56O22) is a derivative starch or modified starch. Hydroxypropylation can be done through modification using propylene oxide (C3H6O). HPS is chemically modified through the conversion of hydroxyl group of glucose monomer into -O-(2-hydroxypropyl) group. HPS has a higher solubility compared to native starch [8].
High production cost is one of the causes for the low number of antimicrobial packaging materials in the market. Nyamplung cake as an ingredient for antimicrobial packaging material is expected to be a solution to this problem because it is obtained as a byproduct after removal of oil from seeds [9]. Nowadays, the utilization of nyamplung cake is still limited. It is used as organic fertilizer or animal feed. This study aimed to evaluate the potential and its feasibility overview of nyamplung cake extract as an additional ingredient to produce antimicrobial packaging, in which HPS was as the main ingredient.
Materials
Nyamplung cake was obtained from biodiesel industry in Purworejo, Central Java, Indonesia. Escherichia coli (gram-negative bacteria), Staphylococcus aureus (grampositive bacteria) and Aspergillus niger (fungi) were obtained from Biotechnology Center of University of Gsdjah Mada, Yogyakarta. HPS was obtained from Haihang Industry Co., LTD, Jinan City, Shandong, Province, China., glycerol and ethanol were obtained from Eco-green Oleochemicals, Indonesia and distilled water.
Nyamplung Seed Cake Extraction
Nyamplung seed cake extract was prepared by adding 20 g of 120 mesh nyamplung seed cake flour to 120 ml 96% ethanol. Mixture was incubated at 80 ˚C for 1 h. The mixture was then filtered using Whatman paper no. 1. Filtrate was then dried in oven dryer at 50 ˚C for 24 h [10].
Mechanical Properties Analysis
Mechanical properties test, such as tensile strength (TS), elongation at break (EI), and modulus of elasticity (MOE), was carried out using Brookfield USA analyzer texture. The five bio-plastics were cut into 30 x 5 mm.
Bio-plastics were held parallel with an initial grip separation of 15 mm, and pull apart at a head speed of 25 mm/min. Tensile strength was calculated by dividing the maximum force.
Water Vapor and Oxygen Permeability
Water vapor permeability (WVP) of bioplastic was determined using a modified ASTME 96 procedure. Permeation cell (glass acrylic), which contained water and bioplastic, was closed tightly (0% RH and 0 Pa partial vapor pressure). The closed cell was placed in a control room at 70% RH and maintained at 25 ˚C (2300 Pa, partial vapor pressure). After 20-24 h, when water vapor transmission reaches stationary, cell weight changes were recorded over a 4-day period to calculate WVP [12]. : pressure difference under the film and outside, Pa Analysis of oxygen transfer and other gases through packaging materials can be done as follows [13].
Water Solubility
Water solubility analysis was carried out by soaking dry bio-plastics in 50 ml distilled water, then put in the flash on shaking incubator at 25 ˚C for 24 h. Bio-plastics were then taken and dried again (at 105 ˚C for 24 h) to determine the weight of the dry material, which dissolved in the water. The mass of water-soluble was calculated by subtracting the initial dry mass with the mass of the insoluble dry matter and expressed as a percentage of the initial dry matter content [14].
Antimicrobial Analysis
Antimicrobial analysis of bio-plastics was carried out using agar diffusion method according to the optimized method described by Pranoto [15]. About 0.1 ml suspension of 10% sample was spread into sodium agar medium and on malt extract agar media for bacteria and fungal strain, respectively. Well pits with ± 6 mm diameter in the media were used to put ± 6 mm the test samples and incubated at 40 ˚C for 1-2 days, while fungi strain was incubated at 30 ˚C for 2-3 days.
Thermogravimetric Analysis (TGA)
TGA analysis of bioplastic was carried out using TGA Analyzer-60. Approximately 5-10 mg samples were heated from room temperature up to 600 ˚C at heating rate of 10 ˚C/s, measured under high purity nitrogen atmosphere at 100 ml/s flow rate [17].
Scanning Electron Microscopy
Bio-plastics microstructure analysis was carried out by SEM. The sample was stored in a desiccator containing P2O5 for two weeks to ensure no water in the sample. To observe the cross-section, bio-plastics were frozen with N2 and cry-o-fractured liquids [12]. All samples were mounted on bronze stubs and sputter coated with gold layers before imaging. Surface micrographs of bio-plastic and fracture surface were obtained using electron scanning microscopy (SEM; JSM-6510LA) 10-300.000x magnification with a resolution of 1-10nm.
Result and Discussion
HPS was used as the main material to produce bio-plastic due to its ability to increase solubility, reduce gelatinization temperature, retrogradation, and crystallinity, while it improved mechanical properties compared to native starch [8].
Mechanical Properties of Bio-plastic
The results show that tensile strength (TS) was inversely proportional to the elongation at break (EI). The addition of extract concentration resulted in a decrease in TS and an increase in EI (Figure 1). TS and EI of bio-plastic must withstand to the normal level of stress during product distribution and food handling, while they also maintain its integrity and barrier properties [10]. High TS level is generally needed, but the deformation value must also be adjusted to the intended application. All the tensile strength of bio-plastics HPS decreased significantly (ρ<0.05) in the addition of nyamplung extract. It indicated that Nyamplung cake extract addition affected bio-plastic mechanical properties. This may be due to the incompatibility of nyamplung cake extract and HPS biopolymers [10] or the effect of glycerol concentration. The optimum concentration of plasticizer and ingredients is needed to obtain good mechanical and physical properties of bio-plastic for packaging function [18]. It is suggested that nyamplung cake extract contributed to the reduction of intermolecular structural carbon interaction in HPS bio-plastics, which induced the development of heterogeneous bio-plastics structure, led to discontinuity and TS reduction. Besides, Nyamplung cake extract can easily interact with the HPS chain and inhibits the bonds among HPS molecules [10]. Excessive hydroxypropyl groups can cause a sharp decrease in TS of films [19].
The properties of polymer can be modified by using several crosslinking methods. The chemical crosslinking can be carried out by chemical cross-linker addition to the polymer. Besides, there is also natural cross-linker, which can be derived from plant extract in the form of tannin compounds. Tannin is a polyphenol compound that contains several OH groups that can be used as natural cross-linking [20]. Figure 2 shows that sample without Nyamplung cake extract as control had the highest water vapor transfer rate through the packaging. WVP among five samples decreased with an average of 1.6116 x 10 -11 . Water vapor transport can be increased by vapor molecules in the air [7]. Internal factors that influence WVP level are material thickness and type, coating of packaging, and bonding of polymeric materials. The presence of additional additives in bio-plastics may also reduce bio-plastics water sensitivity by protecting the exposure part of water sensitive OH groups [7]. Besides, the main function of bio-plastic is to block oxygen and moisture transfer from the surrounding atmosphere or between two heterogeneous food products [10]. Therefore, water vapor permeability must be as low as possible. Figure 2 shows that the highest OVP was in control sample (9.13 x 10 -5 %/mm 2 .S). It indicated that treatment without nyamplung cake extract addition had high permeability and oxygen transfer compared to others. Oxygen inhibition is one of the important properties that must be considered in starch-based packaging because it can reduce food quality. Gas transportation through the film is very dependent on diffusivity. The pores type of nanocrystals affect gas molecules diffusion and it can increase molecular migration, and thus consequently it limits permeability [21]. This indicated that the addition of nyamplung cake extract can reduce water solubility thus making bio-plastics more resistant to the relative humidity of the atmosphere. This is also possible because of the presence of phenolic compounds found in the extract [10,14]. Phenol, tannin, and flavonoid were detected as prominent secondary metabolites [22]. Therefore the addition of nyamplung cake extract, in general, can reduce WVP and OVP.
Water Solubility
The addition of Nyamplung cake extract reduced water solubility and it caused bio-plastics more resistant to the relative humidity of the atmosphere ( fig. 3). Extract addition (0.5%) reduced water solubility about 1.59%. This is due to the presence of phenolic compounds found in the extract [10,14].
Water solubility is one of the important properties of bio-plastics for applications in food protection. In general, high water solubility will cause low water resistance. Solubility in water is directly related to the structural properties of proteins and the presence of nonprotein component in bio-plastics, such as phenolic compounds [23,14]
FT-IR Analysis of Bio-plastic
FT-IR spectroscopy was used to explain molecular interactions between HPS, glycerol, and Nyamplung cake extract in bio-plastics. FT-IR spectrum profiles of five samples had very identical patterns, except the absorbance intensity of peaks. Figure 4 shows FT-IR spectrums of five samples (control, bio-plastics made using 0.5%, 1%, 1.5% and 2% extract) compared to bioplastics with 15% extract. Fig. 4. The Different in FT-IR spectrum between five bio-plastics samples and bio-plastics with a contribution of 15% extract, found on peak 6, and band 1706.14 that show amide group.
5 Thermogravimetric Analysis
Thermal analysis was carried out to determine thermal stability of HPS-extract of Nyamplung cake bio-plastic. The first stage of degradation occurred at a temperature range of 30 -200 ˚C for all bio-plastic samples based on water and volatile compound loss before the initial decomposition temperature [25,26] the chemical decomposition of bio-plastic components. The third stage occurred at 320 -600 ˚C range based on oxidative degradation of carbon residue formed. Carbon material in the third stage is not decomposed in the presence of nitrogen, but the complete oxidation and 100% loss of mass occur in the presence of oxygen [26]. About 18-27% of the remaining degradation of samples at a temperature of 650 ˚C was calcium-rich residue and impurity, as well as oil [27]. 84.471 The addition of extract did not have a significant effect on thermal behavior (Table 1). However, the addition of extract increased the onset and end set of degradation temperature compared to bio-plastics control and reduced mass loss.
Morphology of Bio-plastic
Morphology of HPS bio-plastics, which was added with Nyamplung cake extract as an antimicrobial agent, was identified by scanning electron microscope (SEM). HPS bio-plastic without extract showed a more compact texture than that of Nyamplung cake extract addition ( Figure 5). The presence of Nyamplung cake extract can also be observed in bio-plastic, as some of it appeared as agglomeration on the surface of the sample, which was likely due to the uneven mixing of materials during bioplastic preparation.
Antimicrobial Analysis of Bio-plastic
Antimicrobial analysis on gram positive (Staphylococcus aureus), gram negative bacteria (Escherichia coli) and fungi (Aspergillus niger) were conducted using agar diffusion method. This method was based on the measurement of clear zones, or growth inhibition zones, which was caused by bio-plastic after direct contact with bacteria culture [28,29]. In this study, higher Nyamplung cake extract concentration generally increased inhibitory power of bio-plastic on gram-positive (S. aureus) and gram-negative bacteria (E. coli), but not to fungi (A. niger). Bio-plastic without Nyamplung cake extract as control also showed antimicrobial activity both against gram-positive (S. aureus) and gram-negative (E. coli). Inhibition zone against gram-negative bacteria (E. coli) was smaller than gram positive (S. aureus). Differences in sensitivity may also be related to the structure and function of cell walls of these microbes [10].
Conclusion
The addition of Nyamplung cake extract caused a decrease in Tensile Strength, Water Vapor Permeability, and Oxygen Vapor Permeability and it increased Elongation at Break. The results showed that addition of Nyamplung cake extract to HPS bio-plastics had good potential in food packaging because of their antibacterial properties, and further research is needed to improve mechanical properties and improve bioplastics morphology (FTIR and Scanning Electron Microscopy Analysis). However, if it is added as an antimicrobial agent in the manufacture of HPS-based bio-plastics it does not have inhibitory power against fungi (A. niger) but effective for inhibiting gram positive bacteria (S. aureus) and gram-negative bacteria (E. coli). This is both due to the addition of small extract concentration and bacteria are more sensitive than fungi. Besides, addition of nyamplung cake extract on bio-plastics product also plays a role for creating a good environment because it can increase the value-added of nyamplung waste and make bio-plastics that are easily degraded so that is environmentally friendly. | 3,515.4 | 2019-01-01T00:00:00.000 | [
"Environmental Science",
"Materials Science"
] |
Mouse Genome Database (MGD) 2019
Abstract The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the community model organism genetic and genome resource for the laboratory mouse. MGD is the authoritative source for biological reference data sets related to mouse genes, gene functions, phenotypes, and mouse models of human disease. MGD is the primary outlet for official gene, allele and mouse strain nomenclature based on the guidelines set by the International Committee on Standardized Nomenclature for Mice. In this report we describe significant enhancements to MGD, including two new graphical user interfaces: (i) the Multi Genome Viewer for exploring the genomes of multiple mouse strains and (ii) the Phenotype-Gene Expression matrix which was developed in collaboration with the Gene Expression Database (GXD) and allows researchers to compare gene expression and phenotype annotations for mouse genes. Other recent improvements include enhanced efficiency of our literature curation processes and the incorporation of Transcriptional Start Site (TSS) annotations from RIKEN’s FANTOM 5 initiative.
INTRODUCTION
The Mouse Genome Database (MGD) is the community model organism knowledgebase for the laboratory mouse. MGD contains comprehensive information about mouse gene function, genotype-to-phenotype annotations, and mouse models of human disease (1). The mission of the MGD is to advance the use of the laboratory mouse as a model system for investigating the genetic and genomic basis of human health and disease. MGD maintains a comprehensive catalog of mouse genes and genome features connected to genomic sequence data and biological annotations. Annotations include (i) molecular function, biological process and cellular location of genes using terms and relations of the Gene Ontology (GO) (see Gene Ontology Consortium, 2), (ii) mutations, variants and human disease models using terms from the Mammalian Phenotype On-tology (MP) and Disease Ontology (DO) and (iii) official nomenclature and identifiers for mouse gene names, symbols, alleles and strains ( Table 1). The rigorous application of nomenclature and annotation standards in MGD ensures that the information in the resource is curated consistently to support robust and comprehensive data retrieval for sets of genes that share biological properties and data mining for knowledge discovery.
MGD is a core resource within the Mouse Genome Informatics (MGI) consortium (http://www.informatics.jax.org). Other database resources that are coordinated within the MGI consortium include the Gene Expression Database (GXD) (3), the Mouse Tumor Biology Database (MTB) (4), the Gene Ontology project (GO) (5), MouseMine (6), the International Mouse Strain Resource (IMSR) (7) and the CrePortal database of recombinase expressing mice (8). Data included in all resources hosted at the MGI website are obtained through a combination of expert curation of the biomedical literature and automated or semi-automatic processing of data sets downloaded from more than fifty other data resources. A summary of the current content of MGD is summarized in Table 2.
In this report we describe significant enhancements to MGD, including two new graphical user interfaces: (i) the Multiple Genome Viewer for exploring the genomes of multiple mouse strains and (ii) the Phenotype/Gene Expression matrix which allows users to compare gene expression and phenotype annotations for mouse genes. Other improvements include improvements to literature curation processes, and the incorporation of TSS annotations from RIKEN's FANTOM 5 initiative (9).
Multiple genome viewer
The recent release of assembled and annotated genomes for 16 inbred mouse strains (https://www.biorxiv.org/content/ early/2018/02/12/235838) and two wild-derived strains (CAROLI/EiJ and PAHARI/EiJ) (10) represent major milestones in mouse genetics and comparative genomics. MGD's Multiple Genome Viewer (MGV; http://www. informatics.jax.org/mgv) was developed specifically to en- able researchers to explore and compare chromosomal regions and synteny blocks between the C57BL/6J reference genome and the 18 other available mouse genomes ( Figure 1). MGV shows corresponding regions of the user-selected genomes as horizontal stripes and the equivalent features in each genome via vertical connectors ( Figure 1). The navigation of the genomes is synchronized as a user scrolls in 5 or 3 directions. Researchers can generate custom sets of genes and other genome features to be displayed in MGV by entering genome coordinates, function, phenotype, disease and/or pathway terms. The genome feature annotations for the C57BL/6J genome displayed in MGV are taken from MGI's Unified Mouse Genome Feature Catalog that integrates the genome feature annotations from Gencode, NCBI and miRBase into a single, non-redundant set (11). Currently, only the C57BL/6J assembly and annotations are 'reference quality', thus there are some gaps in the annotations which limit the ability of a user to identify equivalent genome features across all of the available genomes. As additional sequence data are generated, improvements will be made to the quality of all the assemblies and their corresponding genome feature predictions.
Gene model structure details and sequences for all 19 annotated mouse genomes are also accessible from MGI's MouseMine (http://www.mousemine.org) through its user interface and web services (MouseMine web services back the Multiple Genome Viewer). Using MouseMine, researchers may search for genes in specific strains and retrieve relevant data including transcripts, exons in a GFF file, and CDSs in FASTA format. On a MouseMine gene page--or when viewing a list of genes--several new query templates are automatically run and provide easy navigation and retrieval of the structural components of gene models (e.g. exons, introns) across user-selected strains. These new templates provide access from a gene to its strainspecific genomic sequences, transcripts, CDSs, or exons. An Export button, located above a report, will allow a user to download results in several formats: tab or commaseparated file, FASTA or GFF3.
Phenotype/Gene expression comparison matrix
In collaboration with the Gene Expression Database (GXD), we deployed a new interface that allow users to compare gene expression and phenotype data for a given gene (see also Smith CM et al.,12). The new Phenotype/Gene Expression Comparison Matrix, accessible from the Expression and Mutations, Alleles and Phenotype section of MGD's gene detail pages, visually juxtaposes information about tissues where a gene is normally expressed against tissues where mutations in that gene cause abnormal phenotypes (Figure 2). Using this new data display tool researchers may explore the molecular mechanisms of disease by answering such questions as 'What tissues affected by a gene mutation also show expression of that gene?' or 'What tissues affected by a gene mutation do not express that gene?'.
Literature triage process improvements
While the numbers of publications indexed in PubMed that mention mice continues to grow (∼72 000 papers added in 2017), the subset of papers relevant to MGD (i.e. those focused on genetics and genomics of the laboratory mouse) has remained relatively stable. We curate ∼12 000 of these papers each year, primarily from a core set of 160 journals. A major challenge for MGD curators is how to identify the relevant subset of papers from a large corpus of biomedical literature. Manuscripts, while peer-reviewed, are often published without annotations using relevant bio-ontologies; authors often do not adhere to existing gene, allele or mouse strain nomenclature standards for example. As a consequence, the identification of publications that are actually relevant to MGD's mission requires a substantial investment of time for manual review.
To improve the scalability of our literature curation efforts, we have streamlined our literature selection processes and implemented software infrastructure to support au-D804 Nucleic Acids Research, 2019, Vol. 47, Database issue tomation for these processes. We now store the full text of papers extracted from PDFs downloaded from publishers and assess the relevance of papers using keyword searches. Downloading papers from PLoS journals is performed automatically and takes advantage of the PLoS API's full text search capabilities. Full text searches improve the identification of relevant papers for MGD because important keywords such as mouse and murine are often not mentioned in article titles and abstracts. In the eleven months following the implementation of the improved literature selection processes (aka, literature triage), individual curator efficiency in identifying papers relevant to mouse phenotypic alleles has increased by 83% as measured by number of relevant papers identified by an individual per unit time. For our user communities, the increased efficiency in literature curation means a shorter time between publication and accessibility of the phenotype and disease annotations from MGD.
To build training sets that can be used for future automatic efforts and to support research in natural language processing and machine learning, we also now store full text of papers that are deemed not relevant to MGD in addition to the relevant papers.
Transcriptional Start Site (TSS) genome features
As part of MGD's efforts to represent experimentally supported regulatory regions in the mouse genome, over 164 000 Transcriptional Start Sites (TSS) identified by investigators at the RIKEN Institute using Cap Analysis Gene Expression (CAGE) sequencing (9) were loaded into MGD. TSS sites are particularly informative for delineating the structure of promoter regions of genes; many genes have more than one promoter region that controls the expression of alternative transcript forms. Over 22 000 of the TSS sites identified by the RIKEN data are associated with annotated mouse genes. From a gene detail page, users can see the annotated TSS sorted by distance from the gene's 5 end.
IMPLEMENTATION AND PUBLIC ACCESS
The production database for MGD is a highly normalized relational database hosted on a PostgreSQL server behind a firewall. The production database is designed and optimized for data integration and incremental updating and is not directly accessible by the public. The public web interface is backed by a combination of a highly denormalized databases (also in PostgreSQL) and Solr/Lucene indexes, designed for high performance query and display in a readonly environment. The front-end data stores are refreshed from the production database once a week. The separation of public and production architectures provides a large measure of flexibility in project planning, as either side can (and often does) change without affecting the other.
MGD broadcasts data in a variety of ways to support basic research communities, clinical researchers and advanced users interested in programmatic or bulk access. MGD provides free public web access to data from http://www. informatics.jax.org. The web interface provides a simple 'Quick Search', available from all web pages in the system and is the most used entry point for users. The Quick Search may be used to search for genes and genome features, alleles and ontology or vocabulary terms. Multi-parameter query forms for a number of data types are provided to support searches based on specific user-driven constraints, Genes and Markers; Phenotypes, Alleles and Diseases; SNPs; and References. Data may be retrieved from most results pages by downloading text or Excel files, or forwarding results to Batch Query or MouseMine analysis tools (see below).
MGD offers batch querying interfaces for data retrieval for users wishing to retrieve data in bulk. The Batch Query tool (http://www.informatics.jax.org/batch) (13) is used for retrieving bulk data about lists of genome features. Feature identifiers can be typed in or uploaded from a file. Gene IDs from MGI, NCBI GENE, Ensembl, UniProt and other resources can be used. Users can choose the information set they wish to retrieve, such as genome location GO annotations, list of mutant alleles, MP annotations, RefSNP IDs and Disease Ontology (DO) terms. Results are returned as a web display or in tab delimited text or Excel format. Results may also be forwarded to MouseMine (see below).
MGD data access is available through MouseMine (http: //www.mousemine.org), an instance of InterMine that offers flexible querying, templates, iterative querying of results and linking to other model organism InterMine instances. MouseMine access is also available via a RESTful API, with client libraries in Perl, Python, Ruby, Java and JavaScript. MouseMine contains many data sets from MGD, including genes and genome features, alleles, strains and annotations to GO, MP and DO.
MGD provides a large set of regularly updated database reports from http://www.informatics.jax.org/downloads/. Direct SQL access to a read-only copy of the database is also offered. Those interested in SQL access should contact MGI user support for an account. MGI User Support is also available to assist users in generating customized reports on request.
Interactive graphical interfaces for browsing mouse genome annotations is supported through our instance of JBrowse (http://jbrowse.informatics.jax.org/), a JavaScriptbased interactive genome browser with multiple features for navigation and track selection (14).
MGD is one of the founding members of the Alliance of Genome Resources, a new data resource integration effort among the major model organism (MOD) database groups and the Gene Ontology Consortium (GOC). The other founding members of the Alliance are FlyBase, WormBase, Saccharomyces Genome Database (SGD), Rat Genome Database (RGD) and the Zebrafish Information Network (ZFIN). The Alliance is standardizing access to common data types from different model organisms to better support comparative biology investigations for biomedical researchers (15). Genetic and genomic data for the laboratory mouse that are curated by MGD are available from the public web portal for the Alliance (http://www.alliancegenome. org). Data types accessible from the Alliance web site currently include gene names and symbols, genome locations, orthology, function annotations, and disease associations. New data types (e.g. gene expression, interactions, etc.) are being added to the site regularly. The Alliance serves as one of the designated Data Stewards for the NIH Data Commons Pilot Project, providing access to model organism data and annotations via APIs to promote the development of the next generation of cloud-based data access and analysis platforms in genome biology.
FUTURE DIRECTIONS
In addition to continuing the essential core functions of MGD, three major enhancements are planned for this resource over the next year. First, following the decision of NCBI's dbSNP to no longer include variation data from model organisms, MGD will implement data loads for mouse SNP data from the European Variation Archive (EVA; https://www.ebi.ac.uk/eva/). Second, we will implement new user interfaces focused on delivering diverse data about individual mouse strains. Although we have provided a strain accession ID service and descriptions of strain characteristics from the classic Festing's inbred strain lists resource for many years (http://www.informatics.jax.org/ inbred strains/mouse/STRAINS.shtml), the new strain detail pages will provide access to detailed information about strain-specific mutations, phenotype and disease model information, published references, and links to multiple external resources such as Mouse Phenome Database (MPD) (16), the International Mouse Strain Resource (IMSR) (7) and MGD's new Multiple Genome Viewer. Third, the Multiple Genome Viewer will be extended to support display of the intron/exon structure of protein coding genes to allow the comparison of gene structure across strains.
OUTREACH
User Support staff are available for on-site help and training on the use of MGD and other MGI data resources. MGD provides off-site workshop/tutorial programs (roadshows) that include lectures, demos and hands-on tutorials and can be customized to the research interests of the audience. To inquire about hosting an MGD roadshow, email<EMAIL_ADDRESS>On-line training materials for MGD and other MGI data resources are available as FAQs and ondemand help documents.
Members of the User Support team can be contacted via email, web requests, phone or fax.
• World wide web: http://www.informatics.jax.org/ mgihome/support/mgi inbox.shtml • Facebook: https://www.facebook.com/mgi.informatics • Twitter: https://twitter.com/mgi mouse and https: //twitter.com/hmdc mgi • Email access<EMAIL_ADDRESS>• Telephone access: +1 207 288 6445 • Fax access: +1 207 288 6830 MGI-LIST (http://www.informatics.jax.org/mgihome/ lists/lists.shtml) is a forum for topics in mouse genetics and MGI news updates. It is a moderated and active email-based bulletin board for the scientific community supported by the MGD User Support group. MGI-LIST has over 1800 subscribers. A second list service, MGI-TECHNICAL-LIST, is a forum for technical information about accessing MGI data for software developers and bioinformaticians, for using the APIs and for making web links to MGI pages.
CITING MGD
For a general citation of the MGI resource, researchers should cite this article. In addition, the following citation format is suggested when referring to datasets specific to the MGD component of MGI: mouse genome database (MGD), MGI, The Jackson Laboratory, Bar Harbor, Maine (URL: http://www.informatics.jax.org). Type in date (month, year) when you retrieved the data cited. | 3,661 | 2018-11-08T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Time-division multiplexing of Mbit/s data-packets within Gbit/s data sequences through nonlinear temporal focusing
. In this work, we report on an all-optical, real-time, nonlinear temporal compression technique based on a counter-propagating degenerate four-wave mixing interaction in birefringent optical fibres. As a proof-of-concept, we demonstrate the extreme temporal focusing and interleaving of a 10-Mbit/s data packet into a 10-Gbit/s data sequence, with record temporal compression factors ranging from 3 to 4 orders of magnitude and including non-trivial on-demand time-reversal capabilities. Our approach is scalable to different photonic platforms and offers great promise for ultrafast arbitrary optical waveform generation and related applications, while enabling the compression of THz-bandwidth optical signals from low-cost, low-bandwidth optical waveform generators.
The capability of compressing the timescale of optical waveforms beyond the bandwidth limitations of conventional optoelectronic technologies is an important functionality targeted in numerous applications for which cost-effective generation of ultrafast optical waveforms is required.To this aim, several approaches have been reported for the temporal compression of optical data packets and mostly rely on optical buffering combined to optical delay lines [1][2].Basically, these previous demonstrations are mainly based on a time-interleaving process requiring ultra-short pulses and do not affect the signal or pattern duration.Consequently, the initial pulse width must be selected carefully to match with the output compressed repetition-rate.Furthermore, these techniques are not compatible with the temporal compression of arbitrary optical waveforms.To overcome this difficulty, one peculiar approach consists in turning the time-lens technique backwards into a temporal focusing telescope.Using this phenomenon, Foster and co-workers have demonstrated a 27-compression factor for ns-waveforms and 10-Gbit/s data packets [3].However, the focusing capabilities of time-lens apparatus are fundamentally restricted by practical limitations of the focal length and lens aberrations due to high-order dispersion effects [3].In this contribution, inspired by the original idea of A. Starodumov described in ref. [4], we report on a temporal compression technique exploiting a four-wave-mixing (FWM) interaction occurring between counter-propagating signals within a polarization maintaining optical fibre (PMF) [5][6].We achieve ultrahigh temporal compression factors ranging from 3 to 4 orders of magnitude and provide a proof-of-principle demonstration by compressing and multiplexing a 10-Mbit/s data packet within a 10-Gbit/s data sequence by means of a ×4350-temporal reduction.
The basic principle is illustrated in Fig. 1a.From one side of a PMF, the signal to be compressed (red) is injected with a polarization state aligned at 45° with respect to the slow and fast birefringent axes.From the opposite end, a short readout pulse (black), propagates along the slow polarization axis.When the readout pulse collides with the incoming signal, a FWM interaction leads to the emergence of a new signal (blue), copropagating and orthogonally polarized with respect to the readout pulse.Due to birefringence, this new signal propagates at a different speed and progressively walks away from the readout pulse.The overall process repeats as the readout pulse sweeps along subsequent parts of the incident signal, thus generating an ultrafast replica of the slow input waveform.Injecting the readout pulse along the fast birefringent axis instead of the slow one will result in a time-reversal operation on top of the compression process.The compression factor M is mostly defined by the rate at which the generated signal walks away from the readout pulse, such that = 2 |∆| ⁄ , where n and Δn are respectively the group index and index difference between both axes of the PMF.Hence, standard PMFs lead to compression factors ranging from 10 3 to 10 4 .
Figure 1b displays the experimental setup, which mainly consists in a 103-m-long PMF (Δn=6.67×10−4 ), surrounded by 2 optical circulators.The 10-Mbit/s data packet to be compressed is first generated from a continuous-wave (CW) laser centred at 1551-nm and carved by an intensity modulator (IM).An Erbium-doped fibre amplifier (EDFA) as well as a polarization controller (PC) are then used to amplify the 10-Mbit/s signal to 12 W peak power, whilst aligning its polarization state at 45° with respect to the PMF axes.At the opposite side of the PMF, the readout pulse consists of 6-ps pulses generated from a mode-locked laser, whose repetition rate is adjusted to 2 MHz so that a single pulse can propagate in the PMF at any given time.A second EDFA boosts the peak power to 3 W and a polarizing beam splitter (PBS) is used select the axis of the PMF (fast or slow) along which the readout pulses are injected.In addition to the readout pulse, a 10-Gbit/s data sequence is combined along the orthogonal birefringent axis and is specifically designed with a 200-ps vacant time-slot to hold the temporally compressed replica of the 10-Mbit/s data packet.Finally, at the output of the system, the readout pulse and the resulting 10-Gbit/s data sequence are separated by a second PBS before detection using a 63-GHz real-time oscilloscope.For comparison, the autocorrelation trace of the readout pulse is also reported in black.In panel (b), we have reported the 10-Mbit/s data packet to be compressed, which extends along a sub-µs temporal window.Panel (c) first reports the output 10-Gbit/s signal when the polarization of the readout pulse is aligned along the slow birefringent axis of the PMF.This result clearly shows that the 10-Gbit/s data sequence holds a compressed replica of the counter-propagating 10-Mbit/s data packet, encapsulated inside the 200-ps time-slot.The compression factor was found to be M = 4350, in excellent agreement with the theoretical prediction of 4378.These experimental measurements have been also found in good agreement with numerical predictions (dashed lines in Fig. 2c-d) based on four coupled nonlinear Schrödinger equations.Finally, Fig. 2d illustrates the case for which the readout pulse now propagates along the fast birefringent axis of the PMF, whereas the 10-Gbit/s data sequence is swapped along the slow one.The compressed replica is now timereversed, whereas the rest of the 10-Gbit/s data sequence remains remarkably preserved.Though expected, it is remarkable that this simple swapping of polarization axes can lead to such a non-trivial functionality.In conclusion, we have reported a nonlinear temporal compression technique based on a counter-propagating FWM process occurring in birefringent optical fibres.Thanks to this phenomenon, we have demonstrated the extreme time-division multiplexing of a 10-Mbit/s data packet into a 10-Gbit/s data sequence with a record compression factor of ×4350 as well as time-reversal capabilities.The present method is fully scalable to different photonic platforms, including birefringent fibers and integrated photonics, which could offer scaling factors from 10 2 to 10 5 and support the generation of THzbandwidth optical signals from low-cost, low-bandwidth waveform sources.
Figure 2
Figure 2 summarizes our experimental results.Panel (a) depicts the initial 10-Gbit/s data sequence (yellow) featuring a vacant 200-ps time-slot intended for timeinterleaving of the compressed 10-Mbit/s data packet.For comparison, the autocorrelation trace of the readout pulse is also reported in black.In panel (b), we have reported the 10-Mbit/s data packet to be compressed, which extends along a sub-µs temporal window.Panel (c) first reports the output 10-Gbit/s signal when the polarization of the readout pulse is aligned along the slow birefringent axis of the PMF.This result clearly shows that the 10-Gbit/s data sequence holds a compressed replica of the counter-propagating 10-Mbit/s data packet, encapsulated inside the 200-ps time-slot.The compression factor was found to be M = 4350, in excellent agreement with the theoretical prediction of 4378.These experimental measurements have been also found in good agreement with numerical predictions (dashed lines in Fig.2c-d) based on four coupled nonlinear Schrödinger equations.Finally, Fig.2dillustrates the case for which the readout pulse now propagates along the fast birefringent axis of the PMF, whereas the 10-Gbit/s data sequence is swapped along the slow one.The compressed replica is now timereversed, whereas the rest of the 10-Gbit/s data sequence remains remarkably preserved.Though expected, it is remarkable that this simple swapping of polarization axes can lead to such a non-trivial functionality.
Fig. 2 .
Fig. 2. (a) Initial 10-Gbit/s data sequence (yellow) and input readout pulse (black).(b) Incident counter-propagating 10-Mbit/s data packet.(c) Intensity profile (blue) of the output 10-Gbit/s data sequence when the polarization of the readout pulse is aligned on the slow axis of the PMF, demonstrating the interleaving of a ×4350 compressed replica of the 10-Mbit/s data packet.Dark dashed-lines correspond to numerical simulations.(d) Same as (c) when the polarization states of both the readout pulse and the 10-Gbit/s sequence are swapped, demonstrating time-reversal capabilities on top of temporal compression. | 1,913.8 | 2023-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Examining the molecular mechanisms contributing to the success of an invasive species across different ecosystems
Abstract Invasive species provide an opportune system to investigate how populations respond to new environments. Baby's breath (Gypsophila paniculata) was introduced to North America in the 1800s and has since spread throughout the United States and western Canada. We used an RNA‐seq approach to explore how molecular processes contribute to the success of invasive populations with similar genetic backgrounds across distinct habitats. Transcription profiles were constructed from seedlings collected from a sand dune ecosystem in Petoskey, MI (PSMI), and a sagebrush ecosystem in Chelan, WA (CHWA). We assessed differential gene expression and identified SNPs within differentially expressed genes. We identified 1,146 differentially expressed transcripts across all sampled tissues between the two populations. GO processes enriched in PSMI were associated with nutrient starvation, while enriched processes in CHWA were associated with abiotic stress. Only 7.4% of the differentially expressed transcripts contained SNPs differing in allele frequencies of at least 0.5 between populations. Common garden studies found the two populations differed in germination rate and seedling emergence success. Our results suggest the success of G. paniculata in these two environments is likely due to plasticity in specific molecular processes responding to different environmental conditions, although some genetic divergence may be contributing to these differences.
| INTRODUC TI ON
The ability of invasive species to invade, adapt, and thrive in novel ecosystems has long been a focus of ecological research. Coined the "paradox of invasions," examining how invasive populations respond to novel environmental stressors after an assumed reduction in population size during introduction has become an entire field of scientific inquiry (Dlugosch, Anderson, Braasch, Cang, & Gillette, 2015;Sax & Brown, 2000;Sork, 2018). However, this paradox has been called into question as research shows that while many invasive populations may undergo a reduction in demographic and/or effective population size after an invasion event, this is not always linked with a subsequent reduction in genetic diversity (Dlugosch et al., 2015;Frankham, 2005). Additionally, differences between the total genetic diversity of a population and the adaptive variation of a population can be large (Leinonen, O'Hara, Cano, & Merilä, 2008;McKay & Latta, 2002). For these reasons, using total genetic diversity as a measure of invasive potential can be complex and potentially misleading. Instead, a better approach may be to examine how invasive species functionally respond to novel environments and assess how specific molecular processes may be contributing to invasive success (Kawecki & Ebert, 2004;Lande, 2015;Sork, 2018).
Local adaptive evolution and phenotypic plasticity represent two strategies for coping with novel environmental stressors, although they are not mutually exclusive (Kawecki & Ebert, 2004;Lande, 2015). Phenotypic plasticity can be adaptive, maladaptive, or neutral, and can occur independently or in conjunction with shifts in allele frequencies that also alter mean trait values (Ghalambor, McKay, Carroll, & Reznick, 2007;Van Kleunen & Fischer, 2005).
When phenotypic plasticity is adaptive, the population's trait value moves closer to the new environment's optimum. This can allow populations to persist through the sudden application of strong directional selection that often accompanies an introduction, particularly a founder event, without the more time-consuming process of having to wait for fortuitous mutations to arise (Conover & Schultz, 1995;Ghalambor et al., 2007;López-Maury, Marguerat, & Bähler, 2008;Van Tienderen, 1997). Over time, if there are population distributional changes in allele frequencies associated with fitness, then the invasive population will have on average a phenotype that is more fit in its current range than it would be in other environments, including the native range. Regardless of the mechanism, these shifts in fitness-related traits are the difference between persistence and perishing for an introduced population (Joshi, 2001;Kawecki & Ebert, 2004;Richards, Bossdorf, Muth, Gurevitch, & Pigliucci, 2006).
In the study of invasive species, the ability to examine molecular processes associated with phenotypically plastic responses (e.g., through environmentally driven gene expression differences) and those indicative of local adaptive evolution (e.g., through changes in allele frequencies) is often limited by the relative lack of background genetic data available, particularly for nonmodel species (Ekblom & Galindo, 2011). Examining these two processes can be further complicated when traditional methods used to assess local adaptation, such as reciprocal translocation experiments, bring up ethical concerns since moving invasive populations to new locations may increase their potential spread (Bunting & Coleman, 2014). This concern may be especially true for highly prolific invasive species. However, with the development of technologies such as RNA-seq, which allows for the assembly of transcriptomes de novo, gene expression and sequence data have become more widely available for nonmodel systems (Ekblom & Galindo, 2011;Sork, 2018;Wang, Gerstein, & Snyder, 2009). RNA-seq-derived gene expression data can be used to answer questions related to how different environments influence changes in gene expression, which can help address how plastic these responses may be (Des Marais, Hernandez, & Juenger, 2013;Lande, 2015;Via & Lande, 2006). In addition, because RNA-seq also produces sequence data, we can assess allele frequency differences for genes that are differentially expressed, which can give initial insight into population divergence and potential processes driving local adaptive evolution (Costa, Angelini, De Feis, & Ciccodicola, 2010). Thus, the combination of expression and sequence data produced from RNA-seq methods can allow researchers to estimate the prevalence of plasticity in response to novel environmental stressors and begin to address questions about how invasive species adapt to their introduced environments (Lande, 2015;Sork, 2018).
In this study, we take advantage of RNA-seq technology to examine changes in different molecular processes that may allow invasive populations with similar genetic backgrounds to establish across different ecosystems. The system we are using to explore this question is invasive populations of baby's breath (Gypsophila paniculata L.; Caryophyllaceae), which inhabits different regions of the continental United States and Canada. Gypsophila paniculata is a perennial forb native to Eurasia. It is thought to be a long-lived herbaceous perennial (at least 7 years, C. G. Partridge, personal observation), although the full life span has not been assessed, and flowers are not produced until the second or third year of growing (Darwent & Coupland, 1966). As is characteristic of most members of the genus Gypsophila, it thrives in environments with dry, well-draining, calcareous soils with warm summers and cool winters (Barkoudah, 1962). However, it has one of the largest geographic distributions of the genus, stretching from eastern Europe to North China (Barkoudah, 1962;CABI, 2015). Originally introduced into North America in the late 1800s for use in the floral industry (Darwent, 1975;Darwent & Coupland, 1966), G. paniculata quickly spread and can now be found growing in diverse ecosystems across North America, often outcompeting and crowding out the native species (Baskett, Emery, & Rudgers, 2011). While relatively little is known about the history of invasive baby's breath populations in the United States, a recent population genetic analysis using 14 microsatellite markers identified at least two distinct population clusters, with one of these clusters including populations that span from the upper portion of Michigan's lower peninsula to the eastern side of the Cascade Mountains (Lamar & Partridge, 2019). The environments that these populations occur in range from quartz-sand dunes in Michigan to disturbed roadsides in Minnesota, prairies in North Dakota, and sagebrush steppes in eastern Washington.
While these populations may share a similar genetic background, understanding how they are responding to different environments will help shed light on how this invasive is able to thrive across distinct habitats.
For this study, we examined differential gene expression and identified single nucleotide polymorphisms (SNPs) within differentially expressed genes from two G. paniculata populations within the same genetic cluster that inhabit divergent ecosystems: (1) the coastal sand dunes in Petoskey, Michigan, and (2) sagebrush steppe regions around Chelan, WA. These two habitats were chosen because they represent ecologically distinct ecosystems, with divergent environmental characteristics (see results). In addition, we conducted a common garden growth trial to examine differences in germination rates, seedling emergence success, and above-and belowground tissue allocation between these two populations. We predict that the populations will differ in gene expression patterns and that those differences will be reflective of the environment in which they inhabit. Given that baby's breath established in these environments approximately 100 years ago (Lamar & Partridge, 2019), we also predict that this should be enough time to see divergence in allele frequencies for genes that are important to these distinct habitats. This will allow us to identify potential targets of local adaptive evolution for future testing. Finally, we hypothesize that different environmental conditions (i.e., growing degree day, precipitation, and nutrient availability (see Section 3)) between these two habitats have likely led to differences in growth responses. Therefore, we predict that these populations will differ in certain phenotypic traits, such as germination rate, seedling emergence success, and above-and belowground tissue allocation, when grown in a common garden environment. Thus, the overall goal of this work was to examine how G. paniculata populations that have shared genetic backgrounds but differ in their invaded habitats (i.e., sand dunes in Petoskey, Michigan, and sagebrush steppe in Chelan, Washington) are responding to these different environments and to explore how different molecular processes are contributing to their success as an invasive species.
| Study site characterization
Petoskey, Michigan (PSMI), is -located along Lake Michigan's primary-successional quartz-sand dune system. Vegetation is sparse and is chiefly comprised of Ammophila breviligulata (dune grass), Silene vulgaris (bladder campion), Juniperus horizontalis (creeping juniper), J. communis (common juniper), and Cirsium pitcheri (Pitcher's thistle; Figure 1a,b). Chelan, Washington (CHWA), is a disturbed habitat situated on slopes surrounding Lake Chelan and dominated by sagebrush (Artemisia spp.; Figure 1a,c). Average climate data for these two locations were collected from stations operated by the National Oceanic and Atmospheric Organization (NOAA) in Petoskey, MI, and Entiat, WA (near Chelan, WA), and are summarized in Table 1.
| Soil analysis
In June 2018, we collected soil samples from PSMI (45.4037°N 84.9121°W) and CHWA (47.7421°N 120.2177°W; Figure 1a-c). In PSMI, we collected soil from 10 cm, 50 cm, and 1 m, while in CHWA, we collected soil from 10, 25, and 50 cm depths. Sampling locations differed in collection depths due to soil characteristics in CHWA that made deeper collection impossible (large boulders, hard soil). At both locations, we collected two sets of soil samples from all depths.
We stored samples in airtight plastic bags and maintained them at 4°C until analysis.
We sent soil samples collected from all depths at PSMI and CHWA to A&L Great Lakes Laboratories (Fort Wayne, IN) for nutrient analysis.
Samples were tested for organic matter (%), phosphorus (P), potassium (K), magnesium (Mg), calcium (Ca), soil pH, total nitrogen (N), cation exchange capacity (CEC), and percent cation saturation of K, Mg, and Ca. At the laboratories, samples were dried overnight at 40°C before being crushed and filtered through a 2-mm sieve. The following methods were then used for each analysis: organic matter content (loss on ignition at 360°C), pH (pH meter), phosphorus, potassium, magnesium, and calcium content (Mehlich III extraction and inductively coupled plasma mass spectrometry). Total nitrogen was determined using the Dumas method (thermal conductance). Results of nutrient testing were analyzed using a principal component analysis (PCA) in the statistical program R v3.6.0 (R Development Core Team, 2017).
We then dissected seedlings into three tissue types (root, stem, and leaf), placed tissue in RNAlater™ (Thermo Fisher Scientific), and flash-froze them in an ethanol and dry ice bath. Samples were kept on dry ice for transport and maintained at −80°C until RNA extractions were performed.
We extracted total RNA from frozen tissue using a standard TRIzol ® (Thermo Fisher Scientific) extraction protocol (https:// F I G U R E 1 (a) Map identifying sample locations for Gypsophila paniculata populations used in this study. (b) Petoskey, Michigan (PSMI), study site, and (c) Chelan, Washington (CHWA), study site assets.therm ofish er.com/TFS-Asset s/LSG/manua ls/trizol_reage nt.pdf). We resuspended the extracted RNA pellet in DNase/RNasefree water. The samples were then treated with DNase to remove any residual DNA using a DNA-Free Kit (Invitrogen). We assessed RNA quality with a Bioanalyzer 2100 (Agilent Technologies) and NanoDrop™ 2000 (Thermo Fisher Scientific). RNA integrity number (RIN) values for individuals used in this study ranged from 6.1 to 8.3.
However, because both chloroplast and mitochondrial rRNA can artificially deflate RIN values in plant leaf tissue, we deemed these values to be sufficient for further analysis based upon visualization of the 18S and 28S fragment peaks (Babu & Gassmann, 2016). This resulted in high-quality total RNA from 10 PSMI leaf, 10 PSMI stem, 10 PSMI root, 10 CHWA leaf, 9 CHWA stem, and 10 CHWA root samples. Finally, we submitted the total RNA samples to the Van Andel Research Institute for cDNA library construction and sequencing.
| cDNA library construction and sequencing
Prior to sequencing, all samples were treated with a Ribo-Zero rRNA Removal Kit (Illumina). cDNA libraries were constructed using the Collibri Stranded Library Prep Kit (Thermo Fisher Scientific) before being sequenced on a NovaSeq 6000 (Illumina) using S1 and S2 flow cells. Sequencing was performed using a 2 × 100 bp paired-end read format and produced approximately 60 million reads per sample, with 94% of reads having a Q-score > 30 (Table S1).
| Differential expression
To quantify transcript expression, reads were mapped back to the assembly using bowtie and quantified using the RSEM method as implemented in Trinity. Counts were generated for genes and TA B L E 1 Location and climate data for sampling sites, taken from National Oceanic and Atmospheric Organization (NOAA) weather stations in Petoskey, MI, and Entiat, WA (near Chelan, WA) To be considered significantly differentially expressed, transcripts needed to have an adjusted p-value (BH method [Benjamini & Hochberg, 1995]) below 0.05 and a log2 fold change greater than 2.
For transcripts that were differentially expressed, we identified Gene Ontology (GO) biological processes that were either over-or underrepresented using the PANTHER classification system v14.
| Single nucleotide polymorphism variant calling
We used the HaplotypeCaller tool from GATK4 to identify potential single nucleotide polymorphisms (SNPs) that were present in transcripts that were differentially expressed between populations (Depristo, 2011;McKenna, 2010). The bowtie mapped files were used to jointly genotype all 59 samples simultaneously with a minimum base quality and mapping quality of 30. Variant data were visualized using the vcfR package v1.8.0 (Knaus & Grünwald, 2017).
We identified variants associated with nonsynonymous SNPs, synonymous SNPs, 5ʹ and 3ʹ UTR SNPs, 5ʹ and 3ʹ UTR indels, frameshift and in-frame indels, premature or changes in stop codons and changes in start codons, and calculated population diversity estimates for all SNP types. The effect prediction was done using custom scripts (which can be found in the Dryad repository) and the TransDecoder predicted annotation in conjunction with the base change. We set a hard filter for the SNPs so that only those with QD scores > 2, MQ scores > 50, SOR scores < 3, and read postrank sums between −5 and 3 passed. We then calculated the allele frequencies for each SNP within PSMI and CHWA.
For the subsequent evaluation, we focused on SNPs that had potential functional effects (i.e., they were not listed as "synonymous" or "unclassified"), were in transcripts differentially expressed between PSMI and CHWA across all three tissues, and that exhibited differences in SNP allele frequencies between the populations by at least 0.5. We used the R package metacoder v0.3.3 (Foster et al., 2017) to visualize the GO biological process hierarchies associated with transcripts containing these SNPs.
| Common garden trials
Finally, to examine whether environmental differences between these two locations has led to different growth responses, we conducted common garden trials to examine differences in germination rate (functionally defined as radicle emergence (Baskin & Baskin, 2001)), seedling emergence success (defined as successful cotyledon emergence from the soil), and the ratio of above-and belowground tissue allocation between the populations.
| Germination trial
On 11 August 2018, we returned to our sample sites in CHWA and PSMI and collected seeds from 20 plants per location. This date was chosen because it was previously determined that this collection time can yield over 90% seed germination for G. paniculata collected from Empire, MI (Rice, Martínez-Oquendo, & McNair, 2019). To collect seeds, we manually broke seed pods off and placed them inside paper envelopes in bags half-filled with silica beads. We stored bags in the dark at 20 to 23°C until the germination trial began one month later.
We counted one hundred seeds from twenty plants per population and placed them in a petri dish lined with wet filter paper (n = 2,000 seeds per population). We established a control dish using 100 seeds from the "Early Snowball" commercial cultivar (G.
| Growth trials
To examine population differences on seedling emergence success and above-and belowground tissue allocation, we planted 6 seeds collected from 20 individual plants per population (n = 120 per population, n = 240 total). All seeds were planted on the same day to a standardized depth of 5 mm in a sand/potting soil mixture.
Greenhouse conditions were set at 7:17-hr dark:light photoperiod.
Relative humidity and temperature settings during the day were 55% and 21°C, while nighttime conditions were 60% and 15.5°C. Each day we watered plants until the soil appeared fully wet and we randomized plant position to prevent bias in temperature, light, or water regime. At the end of the seven-week trial period, we carefully removed plants from the soil and measured the length of tissue above and below the caudex using a caliper.
To compare the proportion of seedlings that successfully emerged between the populations, we ran a two-sided proportion test in the R statistical program v3.6.0 (R Development Core Team, 2017). We analyzed differences in the ratio of above-and belowground tissue between populations for seedlings that successfully emerged and examined the presence or absence of family effects using a completely randomized design with subsampling ANOVA in SAS v9.4 (SAS Institute Inc., 2013).
| Habitat characterization
Climate data collected from NOAA monitoring stations revealed differences in mean temperature, precipitation, and growing degree day (GDD) between our two sampling locations. CHWA had a 3°C
| Differential gene expression
Across all three tissue types, there were 1,146 transcripts that were differentially expressed between the PSMI and CHWA populations ( Figure 3a, Table S3), with the majority of the differences in expression being driven by sampling location and tissue type (Figure 3b).
Root tissue contained the highest number of differentially expressed transcripts between the two populations (8,135 transcripts, Table S4), followed by leaf tissue (5,666 transcripts, Table S5) and stem tissue (5,374 transcripts; Figure 3a, Table S6).
| Enriched GO processes in CHWA
GO biological processes that were enriched with transcripts displaying higher expression in CHWA relative to PSMI across all three tissue types were primarily associated with different stress responses (
| Enriched GO processes in PSMI
For the PSMI population, GO terms that were enriched with transcripts that showed significantly higher expression across all three tissues were associated with nutrient response, development, and transcriptome processes (Table 2) were specifically enriched in CHWA root tissue (Table S7). For the PSMI population, GO terms specifically associated with root tissue included cellular response to nitrogen starvation (GO:0006995), nitrate assimilation (GO:0042128), and organophosphate metabolic processes (GO:0019637; Table S8).
| Stem tissue
There were 5,374 differentially expressed transcripts in stem tissue collected from CHWA and PSMI (Figure 3a) and systemic acquired resistance (GO:0009627; Table S10).
| Leaf tissue
Of the 5,666 transcripts that were differentially expressed between leaf tissues from CHWA and PSMI (Figure 3a PSMI. Some of the enriched GO terms that were specific to leaf tissue from the CHWA population included fatty acid beta-oxidation (GO:0006635) and positive regulation of salicylic acid-mediated signaling pathway (GO:0080151; Table S11). The enriched GO terms that were specific to PSMI leaf tissue included vitamin biosynthetic process (GO:0009110), long-day photoperiodism, flowering (GO:0048574), and response to UV-A (GO:0070141; Table S12).
| Comparison of gene expression and SNP GO biological processes
Of the transcripts that were differentially expressed between CHWA and PSMI across all three tissues, 85 (7.4%) of those transcripts contained potentially functional SNPs, which displayed allele frequencies that differed between the two populations by at least 0.5 (Table S13). Enrichment analysis did not identify any GO processes that were statistically enriched for these 85 transcripts, although GO biological terms associated with these transcripts can be viewed in Figure 4b.
| Germination trial
Results of a log-rank test comparing time-to-germination curves for each locality indicated strong statistical differences between seeds collected from PSMI and CHWA, with seeds from CHWA germinating more quickly (p < 2.0 × 10 -16 ; Figure 5). While there was a difference in germination curves, both localities reached 90% germination by the end of the germination trial. Log-rank tests looking at homogeneity within groups found strong statistical support for variation among time-to-germination curves for seeds from different parent plants for both populations (both p < 2.0 × 10 -16 ), suggesting potential family effects.
| Growth trial
A two-sided proportion test indicated a significant difference in the total number of seedlings that emerged between seeds collected from CHWA and PSMI, with CHWA seedlings emerging more often than PSMI (p < 0.0002; Figure 6a). When excluding plants that did not emerge, ANOVA results indicated no significant difference in the ratio of above-and belowground tissue allocation between populations (p = 0.61; Figure 6b). However, there were significant family effects in above-and belowground tissue allocation (p = 0.03; Figure S1).
| D ISCUSS I ON
The primary drivers that allow invasive species to adapt to novel environments over relatively short periods of evolutionary time is a process not yet fully understood. To better understand these mech- (Lamar & Partridge, 2019). Using RNA-seq data (which gives orders of magnitude more informative data than microsatellites), we found that there were a number of transcripts differentially expressed between these populations and that many of these genes were involved in processes directly related to their different environments, particularly those associated with abiotic stress response in CHWA and nutrient starvation in PSMI. Of the genes that were differentially expressed across all three tissues, only 7.4% contained potential SNPs that differed in frequency by at least 0.5 between the populations. In addition, while we identified differences in germination rates and seedling emergence success between the two populations in a common garden experiment, we did not observe differences in above-and belowground tissue allocation as we initially predicted. From these data, we suggest that the success of invasive G. paniculata across these distinct ecosystems is likely the result of plasticity in molecular processes responding to these different environmental conditions, although some genetic divergence over the past 100 years may also be contributing to these differences.
F I G U R E 4
Heat trees displaying (a) GO biological processes that are enriched with transcripts with significant differential expression between each population, and (b) GO biological processes that are represented by transcripts differentially expressed between the two populations and contain SNPs that differ in allele frequency by at least 0.5. The size of each node is representative of the number of transcripts assigned to each GO term. The color of each branch represents increased expression, with green displaying higher expression in Petoskey, Michigan (PSMI), and brown displaying higher expression in Chelan, Washington (CHWA)
| Stress response in CHWA
The sagebrush ecosystem of the eastern Cascade Mountains is characterized by a semi-arid, temperate environment with a drought-resistant plant community (Miller et al., 2011). The environmental data obtained from our sampling regions suggest that the CHWA population experiences less precipitation and higher temperatures than G. paniculata growing in PSMI. As such, many of the enriched GO processes with higher expression in the CHWA population were related to a suite of stress responses indicative of abiotic stress. Some of these included response to abscisic acid (ABA), response to reactive oxygen species, response to heat, response to salt stress, response to water deprivation, and response to topologically incorrect folded proteins (Table 2, Figure 4a).
During abiotic stress, many of these processes interact with one another to help maintain cellular homeostasis (Shinozaki & Yamaguchi-Shinozaki, 2000;Tuteja, 2007). In our data, transcripts that were associated with protein folding GO processes mainly corresponded to heat-shock proteins (Hsps). While Hsps are most notably involved in protein stability during heat stress, they can also respond when plants experience osmotic, cold, or oxidative stress (Boston, Viitanen, & Vierling, 1996;Vierling, 1991;Wang, Vinocur, Shoseyov, & Altman, 2004;Waters, Lee, & Vierling, 1996). Hsps can also interact with ABA, often considered a "plant stress hormone" because it can be induced by multiple abiotic stressors (Mahajan & Tuteja, 2005;Swamy & Smith, 1999). Arabidopsis mutants that are deficient in ABA do less well under drought or osmotic stress conditions than those with sufficient ABA (Tuteja, 2007). Under heat and drought stress, increased production of ABA can lead to higher levels of hydrogen peroxide and result in oxidative stress.
But, this effect can be mediated as increased oxidative stress triggers synthesis of Hsp70, which upregulates antioxidant enzymes that control reactive oxygen species and protects against oxidative injury (Fauconneau, Petegnief, Sanfeliu, Piriou, & Planas, 2002;Hu et al., 2010). Thus, the enrichment of genes involved in these interacting processes suggests CHWA populations are under higher levels of abiotic stress, particularly heat and drought stress, compared with PSMI populations, and these data provide insight into the molecular response to these stressors.
When examining leaf, root, and stem tissue from CHWA seedlings separately, additional GO processes related to stress responses were observed. "Response to salicylic acid" was enriched in both the leaf and root tissue. Salicylic acid (SA) is a phytohormone that is involved in immunity and defense response to pathogens (Dempsey, Shah, & Klessig, 1999;Vlot, Dempsey, & Klessig, 2009). It also plays an important role in a plant's response to abiotic stress, including metal, salinity, ozone, UV-B radiation, temperature, and drought stress (Khan, Fatma, Per, Anjum, & Khan, 2015). For example, in Mitragyna speciose, the application of SA led to increased expression of chaperone proteins F I G U R E 5 Germination curves for Gypsophila paniculata seeds collected from Petoskey, Michigan (PSMI, n = 2,000), and Chelan, Washington (CHWA, n = 2,000), on 11 August 2018 and incubated for 12 days. Burpee commercial cultivar seeds (n = 100) known to have germination success in excess of 90% were used for an experimental control F I G U R E 6 Results of a common garden growth trial of Gypsophila paniculata plants conducted for seven weeks (n = 120 per population).
(a) Seedling emergence per sampling location, (b) ratio of aboveground: belowground tissue allocation per sampling location. Location codes: Chelan, Washington (CHWA); Petoskey, Michigan (PSMI). and heat-shock proteins when plants were in drought conditions (Jumali, Said, Ismail, & Zainal, 2011;Khan et al., 2015). As previously stated, the arid environment of the sagebrush ecosystem is likely to result in higher drought stress, and increased expression of genes associated with SA pathways may be an additional mediating factor allowing invasive G. paniculata to thrive in this system.
While a number of genes involved in abiotic stress response
showed higher expression in CHWA, the majority of these genes did not have SNPs with divergent allele frequencies between the two populations, suggesting that some of this response is likely due to plasticity. However, a few genes involved in different stress responses and chaperon-mediated protein folding processes did have SNPs that differed in allele frequency by at least 0.5 or greater. One of the genes involved in oxidative stress was caffeoylshikimate esterase (CSE). CSE is an important enzyme in the synthesis of lignin, a major component of the cell wall (Vanholme, 2013). Plants with mutations in the CSE gene display increased sensitivity to hydrogen peroxide and oxidative stress, which were enriched in our GO analysis (Gao, Li, Xiao, & Chye, 2010). In addition, another transcript that displayed divergent allele frequencies was peptidyl-prolyl cis-trans isomerase (FKBP62), which is involved in chaperone-mediate protein folding. FKBP62 interacts with the heat-shock protein 90 (HSP90.1) complex to positively regulate thermotolerance in Arabidopsis (Meiri & Breiman, 2009).
Expression of this gene is induced in Arabidopsis during heat stress, and those that overexpress this gene show higher survival at temperatures above 45°C after a 37°C acclimation period (Meiri & Breiman, 2009). This increased heat tolerance could be helpful in the warmer, arid climate of CHWA. Differences in allele frequencies between PSMI and CHWA associated with these genes suggest that there could be local adaptive evolution occurring due to different selection pressures associated with abiotic stress.
However, additional work needs to be conducted to more thoroughly examine these distinct SNPs and fully assess their relationship to population divergence and local adaptive evolution.
| Nutrient starvation in PSMI
The G. paniculata population in PSMI is located in the coastal sand dunes of northwest Michigan. This area is a primary-successional dune habitat where G. paniculata grows in the foredune region.
The sand dune environment can present strong selection pressure on plants in the form of sand burial, limited soil moisture, and lack of nutrients (Maun, 1994). One of the main limiting factors for seedling success in dune systems is nutrient deficiency, especially nitrogen, phosphorus, and potassium (Hawke & Maun, 1988;Willis & Yemm, 1961). Our soil analysis shows that PSMI soil contained low concentrations of organic matter, total nitrogen, phosphorus, and potassium, suggesting this is a very nutrient-limited environment. In conjunction with these environmental differences, the GO enrichment analysis showed that "regulation of response to nutrient levels" and "cellular response to phosphate starvation" were both significantly enriched in PSMI in all three tissues compared with CHWA. In addition, there were a number of processes associated with nitrate regulation (nitrate assimilation and nitrogen cycle metabolic process) specifically enriched in the root tissue from PSMI. Some of the differentially expressed genes associated with these processes included phospholipase D zeta 2 (PLPZ2), transcription factor HRS1 (HRS1), and SPX domain-containing protein 3 (SPX3). In Arabidopsis thaliana, PLPZ2 can aid in phosphate recycling and has been shown to be upregulated during phosphate starvation (Misson, 2005). Additionally, SPX3 helps regulate phosphate homeostasis (Secco et al., 2012;Shi et al., 2014), while HRS1 is a major regulator of both nitrogen and phosphate starvation (Kiba, 2018). The increased expression of these genes may help G.
paniculata survive in PSMI, where the limited levels of nitrate and phosphorus in the soil make this ecosystem a challenge for many plant species. However, these specific genes did not display SNPs that differed in frequency between our populations, suggesting that expression differences related to nutrient deprivation are environmentally driven, potentially epigenetically maintained, and/ or are regulated by nontranscribed regions, and these differences exist in response to the low nitrogen and phosphorus environment experienced in the dune system.
When examining PSMI GO processes enriched with differentially expressed genes that contain SNPs differing in frequency between the two populations, the only nutrient-associated process was "phosphorus metabolic processes." The gene involved in this process was CDP-diacylglycerol-glycerol-3-phosphate 3-phosphatidyltransferase 1 (PGPS1), which is involved in phosphatidylglycerol (PG) biosynthesis (Müller & Frentzen, 2001). While this gene itself has not directly been associated with nutrient homeostasis, PG can be used as a phosphate reserve during phosphate starvation, and rapidly decreases in cells when phosphate is limited (Jouhet, Maréchal, Bligny, Joyard, & Block, 2003;Nakamura, 2013). Thus, it is possible that the increase in PGPS1 may be needed to maintain PG levels under these nutrient-limited environments. However, further analysis needs to be performed to determine whether the SNPs identified alter the function of this gene.
| Circadian rhythm expression in PSMI
There were also a number of enriched GO processes in PSMI related to different timing processes, including circadian rhythm and flowering-associated photoperiod. These two processes can be linked, with the circadian clock mechanisms that drive 24-hr cycles also significantly influencing plant phenology (Salmela, McMinn, Guadagno, Ewers, & Weinig, 2018). Ideally, circadian cycles should be optimized to match environmental parameters (West & Bechtold, 2015;Yerushalmi & Green, 2009), and a disruption in circadian rhythm cycles can result in decreased fitness (Green, Tingay, Wang, & Tobin, 2002;Michael et al., 2003). Given differences in both latitude and growing degree days between PSMI and CHWA, we would expect there to be differences in phenology between the populations, and this was evident during our collecting period. Even though we collected from both populations within one week of each other, and we tried to sample from both locations at the same time of day, some mature plants in CHWA were already budding, while mature plants in PSMI were still in the growth stage of their yearly life cycle. For most of the transcripts involved in these processes, there was not a corresponding SNP between the populations, suggesting these differences may be environmentally driven. However, a transcript associated with early flowering 3 protein (ELF3) displayed increased expression in the CHWA population and contained a SNP that differed in frequency between these populations. ELF3 has been shown to modulate both flowering time and circadian rhythm (Carré, 2002), and interestingly, it can also lead to increased salt tolerance (osmotic stress) in Arabidopsis (Sakuraba, Bülbül, Piao, Choi, & Paek, 2017).
These results suggest that environmental factors eliciting changes in timing and phenology may be helping to maintain these invasive populations.
| Phenotypic comparisons: germination and growth trials
To see what effects environmental factors might be having on different life-history traits of our populations, we set up common garden growth trials. Different environmental factors can have varying selective pressures on germination rates, seedling emergence success, and above-and belowground tissue allocation (Chauhan & Johnson, 2008;Taylor et al., 1995). In our common garden experiments, we initially observed that seeds collected from CHWA germinated quicker and had higher seedling emergence success than those collected from PSMI. The better performance of the CHWA population could be due to release from the abiotic stress factors that were indicated by our gene expression data. Improved performance when a species is removed from an environment imposing abiotic stressors is a common hypothesis and is used as one explanation for the success of invasive species (Catford, Jansson, & Nilsson, 2009 We saw no differences in above-and belowground tissue allocation after seedling emergence between populations, suggesting there are no genetic differences between these populations in relation to these growth measures. We expected the nutrient limitation in PSMI to have an influence on the above-and belowground tissue allocation of seedlings. In environments where nitrogen and phosphorus are the main limiting nutrients, root growth can be favored in seedlings relative to aboveground growth (Ericsson, 1995).
In contrast, shortage of Ca, which was present in higher quantities in PSMI than in CHWA, has been found to have little or no influence on above-and belowground tissue allocation in laboratory experiments (Ericsson, 1995). The lack of difference observed in root:shoot ratios in our plants could indicate that these factors do not influence tissue allocation resources in G. paniculata seedlings, or that these differences are not seen when G. paniculata is grown in a nutrient-sufficient environment.
For our common garden trials, in addition to some of the population differences identified, we also observed significant family effects in germination rate, seedling emergence success, and aboveand belowground tissue allocation ratios, suggesting the potential for genetic effects. Variation in these traits is known to be driven, in part, by genetic factors in other plants. For example, for Brassica oleracea, heritability estimates of mean seed germination time and root:shoot length are approximately 14% and 12%, respectively.
Looking to our gene expression data from our field-collected seedlings, we did not observe differential expression of candidate genes proposed to be involved in germination timing (i.e., AHG1, ANAC060, PDF1 (Footitt et al., 2020)); however, this could be due to the age of the seedlings upon collection. While the family effects we observed could be a function of genetic differences between seeds from dif- the demographic history of these populations. Thus, the genetic differences that we are observing may be confounded by the past history of these populations prior to initial introduction to these areas. Secondly, in this study we only examined one population within a sand dune habitat and one population from a sagebrush habitat. Again, because demographic history can be a confounding factor, we cannot explicitly state that differences between these environments are solely driving the differences in gene expression patterns we observed or that SNP differences between these populations are not simply due to genetic drift. In the future, we plan to include more populations from each habitat, as well as additional prairie habitats, to explore this further. However, given the close relationship between the environmental characteristics of these habitats and the GO processes that were enriched within each population, we think that these processes are worthy of further evaluation of how molecular mechanisms may be driving the success of G. paniculata in these distinct ecosystems. Third, while RNA-seq analysis allowed us to examine SNPs in differentially expressed genes, there could also be genetic differences in nontranscribed regions that regulate gene expression between these populations. In these cases, some of the differential gene expression that we are observing could still be due to genetic differences between these populations, even though no SNPs were observed between the transcripts. To capture this information, further genetic analysis comparing these two populations would need to be conducted. Fourth, while we only identified a small number of differentially expressed genes with potentially functional SNPs that differed in allele frequency by 0.5 between the two populations, we acknowledge that this is a conservative cutoff and we have not considered the potential pleiotropic effects these genes may have on the different enriched processes. Additionally, further work needs to be conducted to identify any functional effects of these identified SNP differences and assess whether they drive differences between populations. Finally, to fully assess local adaptation, more traditional approaches such as reciprocal transplant experiments are needed. Although given that G. paniculata is a prolific reproducer, transplanting more individuals into these sensitive habitats may bring significant ethical concerns. However, by identifying SNPs in differentially expressed genes that are divergent between these populations these data can provide an initial starting point to identify potential candidate genes that may be involved in adaptation to these novel habitats. Thus, regardless of these caveats, we feel that this work provides a good starting point toward identifying how different molecular processes influence G.
paniculata's success across these distinct ecosystems.
In conclusion, we found that G. paniculata seedlings from CHWA and PSMI displayed differential gene expression that was characteristic of the environment in which they were collected. In the nutrient-limited sand dune ecosystem, genes involved in responding to nutrients and phosphate starvation were upregulated. In the arid sagebrush ecosystem, genes involved in regulating responses to abiotic stress were upregulated. Given the small number of differentially expressed transcripts that contained divergent SNPs, we suggest that the majority of the expression differences associated with these enriched GO processes are likely driven by plastic responses to these different environments. Genetic divergence, however, cannot be completely dismissed given the differences in germination rates and seedling emergence success between the two populations in the common garden setting, although these seeds were collected from wild populations and maternal, environmental, and epigenetic variables could be contributing factors. Overall, this study reveals how variation in molecular processes can aid invasive species in adapting to a wide range of environmental conditions and stressors found in their introduced range.
ACK N OWLED G M ENTS
We would like to thank Emma Rice and Hailee Leimbach-Maus for assistance during seed collection and with the germination study.
We would also like to thank
CO N FLI C T O F I NTE R E S T
Thermo Fisher Scientific funded the majority of the sequencing and bioinformatics costs for this study. In exchange, we provided Thermo Fisher with QA/QC data regarding the sequencing performance obtained with their Collibri Stranded Library Prep Kits.
Thermo Fisher did not have any input regarding the design of the study, analysis of the data, interpretation of the data, or development of the manuscript.
DATA AVA I L A B I L I T Y S TAT E M E N T
All raw sequence reads associated with these data were deposited to the Sequence Read Archives (BioProject Accession #: PRJNA606240). Raw growth, germination data files, and R code for differential expression analysis and SNP identification are available on Dryad for review and will be made public once the manuscript is available (https://datad ryad.org/stash/ share/ 7XUtU P1t7w bBU6d d-hYM-FQss7 XOknK 2lf-HSCw9oSQ). | 9,392.6 | 2020-03-25T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Industrial-or Residential-Dominant Development ? A Comparative Analysis of Maritime Industrial Development Areas of Liaoning , China
This paper adopts a case-comparison method to study the spatial layout features of maritime industrial development areas (MIDAs) in Liaoning, China, in reference to similar projects in other Asian countries including Japan, South Korea and Singapore. Our study focuses on industry-city spatial relationship, land position and proportion, coastline utilization intensity and industrial land organization. We show that supplementary residential and recreational land has primarily occupied the high-quality coastlines, and resulted in limited industrial access to marine resources. Our theoretical and empirical analyses connect this feature to local government finances, purchase restriction policy and an investment-driven surge in demand for coastal residential housing. Many areas now exhibit low utilization of industrial land accompanied by the emergence of “ghost cities” phenomenon, which are critical factors that the policymakers should consider in the future planning of coastal development. Interviews with local developers, housing authority personnel, relocated employees and residents confirm our findings. We conclude with policy recommendations for promoting long-term sustainable development in the coastal area.
Overview
China is endowed with abundant marine resources and has recently oriented towards coastal industrial development to facilitate international trade through marine shipping 1 .Substantial national planning has been set in place for the establishment of maritime industrial development areas (MIDAs) and supplementary residential areas in the province of Liaoning, utilizing the natural coastal advantage of Bohai Bay and existing heavy-industrial production base.Nonetheless, large-scale construction and relocation of industrial plants to MIDAs revealed serious planning deficiencies and subsequently led to low production efficiency, poor resource utilization and surplus supply of residential housing.Liaoning's GDP has contracted by 11.4 % in 2016 compared to China's national average growth of 6.9%, ranking the lowest among all provinces and autonomous cities.This paper unfolds the spatial planning deficiencies in theoretical and qualitative perspectives, and empirically assesses the contributing role of local governments in shaping Liaoning's coastal developmental patterns.
theoretical and empirical analyses.Using the planner's problem, we first identify prominent conflict and challenges in the projects' development goals, with focus on the lack of planning for long-term sustainability and emerging public health concerns.Many areas now exhibit low utilization of industrial land accompanied by "ghost cities" phenomenon, which are critical factors that policymakers should consider in the future planning of coastal development.We further explore the economic incentive of local governments and identify an interconnection between regional land development and economic policies, government finances and real estate market in Liaoning, China.Our results reveal a causal relationship between real estate investment and local government fiscal revenue, which would reorient industrial expansion to luxury residential development.We conclude with policy suggestions to central planners aimed to enhance economic efficiency and promote sustainable development in the long-run.
Historical and Socioeconomic Background
In the past decade, the Chinese government has issued a series of economic and industrial policies to promote marine resource utilization and resulted in the expansion and relocation of numerous industrial plants to coastal areas 2 .To provide accommodation for relocated employees, residential towns have been developed near the plants intended to provide convenient access to local services, schools and other essential needs.
Situated in the northeastern region of China, Liaoning has traditionally been a pillar province for heavy industrial production.Petrochemical, mining, machine and equipment manufacturing form its main production base.Liaoning possesses large deposits of iron ore and crude oil, and is a leading producer of natural gas, oil and steel in China.Nonetheless, this province has recently suffered from significant economic contractions possibly due to the combination of the 2008 global recession, low commodity prices and rampant corruptions.Since 1978, China has adopted an open-door policy and focused on producing labor-intensive goods to promote export growth.The accession to the World Trade Organization (WTO) in 2001 has further promoted the expansion of light manufacturing industries, whereas traditional heavy industries have become less prominent in driving China's economic growth.Figure 1 shows Liaoning's GDP and GDP growth with visible declining trends.Despite economic downturns, cargo throughput of Liaoning's ports has shown significant growth over the past decade.Liaoning has a geographical advantage over other inland provinces in northern China by occupying more than 35% of the Bohai Bay coastlines.Shipping ports in Liaoning have shown steady growth in container volume, primarily servicing trade between China and neighboring countries including Russia, Japan, North Korea and South Korea.Port of Dalian is the second largest container transshipment hub in Mainland China and ranked as the 7 th busiest container port in 2016 by handling more than 9.58 million TEU (twenty-foot equivalent units).It also possesses the world's largest crude oil terminal, and is responsible for processing all foreign-traded vehicles in northeastern China.Port of Yingkou ranked the 9 th busiest port in China with more than 6 million TEU in 2016.Liaoning serves as a critical hub that facilitates trade in the Pacific Rim and provides essential support to foreign trade of raw materials and heavy industrial output.Song et al. (2017) -15 199319951997199920012003200520072009201120132015 GDP (in billions RMB) GDP Growth (%) policies that promoted coastal development.Figure 2a depicts the annual port throughput in China and Liaoning from 2000 to 2014, with average annual growth rates of 5.62% and 6.29%, respectively.Figure 2b shows the trend of growth in port length in Liaoning and its selected cities. Liaoning's coastline has grown from 1885km in 2000 to 2447km in 2014, mostly through land reclamation and massive construction of new shipping ports (Ma et al., 2014).
Literature Review
This paper is related to three strands of literature.The first explores various forms of urbanization, and the role of MIDAs and marine economy in driving economic growth.MIDAs first emerged in Western Europe and Japan in the 1950s and 1960s, aimed at providing a spatial integration of land-sea production and transportation system 3 .Wang et al. (2017) argue that the effectiveness of MIDAs is highly sensitive to government policies and global economic conditions.Robinson (1985), Todd and Hsueh (1990) and Sonn (2005) present case studies of South Korea's heavy industrial complexes and find that central planning and support of local government agencies are essential to the success of MIDAs.Our paper complements these works by investigating the specific spatial features of selected East Asian MIDAs and closely examine Liaoning's projects within the context of China's economic transitions.MIDAs in China are more broadly connected to urbanization and provide an anchor point to study regional development disparities.Empirical researchers find that rapid industrialization and urbanization of eastern coastal areas in China is closely connected to the growth of international trade and substantial transformation of rural farm land (Zhang et al., 2016, Mody and Wang, 1997, Chen and Feng, 2000, and Liu, 2007).Han and Yan (1999) provide an overview of China's coastal cities development strategies and conclude that China faces a series of challenges in finding appropriate balance between economic growth and sustainability in resource use.Wang et al. (2014) find that coastal wetland area has been reduced by over 50% in recent decades, with significant loss of biodiversity and destruction of natural habitats, with Bohai Bay being one of the most critically affected regions experiencing an increasing demand for large-scale reclamation.Song et al. (2017) further question the long-term strategic planning of MIDAs in Liaoning with serious concerns over the presence of large state-owned enterprises, lack of sufficient research and development, slow technology growth, shrinking population and unbalanced industrial expansion.
Another important strand of literature studies urban land governance and land politics in the context of China.
The central question of study is how land politics have shaped China's land markets and transformed municipal governments as major market actors with the control of revenue from urban development (Hsing, 2006 and2010).The author finds that real estate development generated a significant amount of land profits and increased the urban-rural tension in property rights.
Land Coverage
The geographical location of all MIDAs under study is notably away from metropolises.Figure 3 plots the major development areas established since 2011 with many situate far away from city centers.The red dots indicate established cities, the pink dots illustrate coastal industrial plants and the red lines depict coastlines that are developed or under development.Nonetheless, the land coverage of Liaoning's MIDAs appears to be massive compared to other cases.
The city of Pohang in South Korea covers a landmass of 68.4 km 2 , and the land area of Pohang Industrial Zone and Port of Pohang combined is about 16 km 2 .In Singapore, Jurong island forms a total area of about 32 km 2 including areas from land reclamation.Most of Japan's coastal industrial zones span less than 20 km 2 except for Keihin and Keiyo with land areas of 43 and 60 km 2 , respectively.Table 1 lists some of the major coastal industrial areas in Japan with an average land coverage of 16.61 km 2 .(2005)(2006)(2007)(2008)(2009)(2010)(2011)(2012)(2013)(2014)(2015).
The newly developed MIDAs in Liaoning have substantial landmass.Table 2 provides an overview of the scale of recent major development projects in coastal Liaoning.For the nine projects understudy, landmass varies from 34.5 km 2 to 349.5 km 2 , with a mean and median of 182.1 km 2 and 161 km 2 , respectively.Large-scale land reclamation in some areas is necessary to provide the desired land coverage.For example, the Liaodong Bay New Area (developed under the city of Panjin) occupies a total area of 306 km 2 including 120 km 2 from land reclamation, which is about 4 times the landmass of Jurong Island.The Changxing Island Coastal Industrial Area (developed under the city of Dalian) is consisted of five individual islands with a total landmass of 349.5 km 2 , significantly greater than any of the coastal development areas in Japan, South Korea and Singapore.Note.Data is obtained from GoogleEarth and manually labelled by the authors.
Residential Land-Use and Urban Planning
Two features related to residential land planning are identified.First, although away from metropolises, small but well-functioning towns surround MIDAs in Japan, South Korea and Singapore.However, Liaoning's MIDAs are too remotely located with no nearby towns or cities.Secondly, all MIDAs under study show clear separation between industrial and residential land, with visibly small land coverage for residential land use.In contrast, Liaoning's residential land area appears to be in similar or even greater scale than industrially zoned property, indicating over-development that could lead to low occupancy and ghost towns.
Residential land-use patterns vary slightly across Japan, South Korea and Singapore.Cases in Japan and South Korea show that residential areas are located inland near the industrial complex but separated by service land or designated green space.This segregation is designed to minimize the possibility of pollution (air, noise and water) that may affect the quality of life of surrounding citizens.The Puhang port has an alternative approach to maintain industrial-residential balance.With the production plants and shipping port located in the heart of Pohang, residential housings are located nearby on the two sides of the city and separated by river and greenspace.Convenient transportation has been established to facilitate short daily commute for workers while ensuring high-utilization of coastal land for industrial production.The case of Jurong Island in Singapore appears to be distinctive.Separated by only 10 km from the nearest city, no separate residential areas have been established.Convenient ground transportation allows worker to maintain city life while working for industrial complexes on the island.
Liaoning's MIDAs are uniquely planned with a close integration between residential and industrial land.The intention may have been to minimize commute time but could lead to serious concerns over pollution.In addition, no nearby towns or cities could be utilized to support residential life.In Table 2, column "distance from city center (km)" presents the linear distance measured from the new areas to established city or town centers, with an average of 67.56 km for the nine cases we have examined.New towns have to be built from scratch to support tens of thousands of relocated employees.Provincial planning involves the construction of large residential complex with supporting public infrastructure and facilities, with the goal of establishing fully-functioning towns in a short horizon.The accompanied development of residential housing units should have allowed employees to minimize daily commute and maintain a healthy work-life balance.Service facilities such as hospitals, schools, local government buildings and other essential services were budgeted and constructed to improve the quality of life.
In our 2017 site visits, Bayuquan that began its development ten years ago still showed very little progress.
Many public facilities such as schools and hospitals were still under construction.However, an upscale golf course recently opened near the steel plants occupying substantial landmass.Davis and Fung (2014) report on the empty and unfinished housing projects in Bayuquan, documenting at least two dozen 20-story abandoned apartment buildings and five years' worth of unsold apartments.The emergence of new "ghost cities" indicates a mismatch between planned infrastructure and residential necessities, which could seriously hinder the quality of life of industrial workers working at these MIDAs.
Theoretical and Empirical Analyses
The previous section studies the planning and land-use patterns of Liaoning's MIDAs in comparison to similar projects from neighboring countries.Large land mass, loosely integrated coastline, shipping ports and industrial plants, along with inadequate residential development have all contributed to the low industrial productivity in Liaoning.What are the driving forces behind these planning deficiencies?What are the planning goals and embedded challenges and potential conflicts?We first explore these questions qualitatively by adopting Campbell's planner's problem model.Subsequently, we follow up with an empirical assessment on the role of urban land governance and local government finances in shaping the developmental features of Liaoning's coastal projects.
The Planner's Square-A Theoretical Approach
Liaoning's MIDAs emerged from national, provincial and municipal strategic plans, with central planners assuming the vital role in project planning and facilitation.In his 1996 paper, Scott Campbell constructed an overarching model called the "planner's triangle", which examines three primary conflicts between the social, economic, and environmental aspects in pursuit of sustainable development (Campbell, 1996).This simple and visual framework attempts to identify the points of conflict and potential complementarities, and informs planners on their role as mediators in developing collaborative strategies to promote sustainable cities. Unique to Liaoning's case, central planners not only serve as mediators but have substantial power over resources allocation and project orientation.This characteristic enables us to dig further to examine whether the revealed planning deficiencies stemmed from unsolved conflicts.
Our interviews with local company managers, residents and workers discovered their rising concern over public health, which motivated us to extend the "planner's triangle" model (Figure 7a) to incorporate it as an additional planning priority to address five points of conflict 6 beyond the original three (as shown in Figure 7b).We thereafter apply our "planner's square" model to coastal industrial projects in Liaoning and closely examine the main planning conflicts as well as the associated concerns and challenges.Lastly, we conclude with policy recommendations aiming to promote sustainable development.
Our model explores four planning priorities: economic development, environmental protection, social justice, and public health.
Economic Development: the model assumes that economic development is the primary objective of planners who view the economy as a marketplace to facilitate production and consumption.Economic growth relies on efficiency and innovation, and utilizes economic space to accommodate infrastructure, transportation and market needs.
Environmental protection: this is a complementary goal where planners treat the economy as a consumer of resources and a producer of wastes.Natural resources are scarce but essential to facilitate production, and ecological space requires careful planning to maintain its proper balance.
Social justice: this planning priority promotes equitable distribution of resources, services and opportunities among different social groups with distinct needs.
Public health: manpower is the ultimate driver of economic progression and requires planners to properly establish the connection between urban planning and life-threatening diseases.Industrial pollution (air, noise and water) could lead to cardiovascular, respiratory and infectious diseases, while urban lifestyle changes may lead to mental health, obesity and other associated issues.
Stemmed from the four pillars of planning priorities, five prominent conflicts can be identified as shown in Figure 7b and are analyzed in greater detail below.Consistent with the ideology put forth in Campbell (1996), the property conflict exists between economic development and social justice.As the economy strives for growth, there are competing claims on the use of land resources between different parties of interest.More specifically to our case study, there appears to be an increasing tension between industrial expansion and residential development as both compete for coastal frontage.The original planning objective of large-scale industrial development projects in costal Liaoning was to obtain convenient access to marine resources and enhance production efficiency.The significant landmass of these projects has been used as a major achievement by local government to promote their dedication and accomplishment towards economic development.However, the economic fundamentals mismatched the size of planning and the surplus oceanfront land sold to residential developers generates higher sales and tax revenue for the local government.Large-scale industrial and residential development typically involves relocation of long-time residents and may further distort social equity as the preference for building luxury (high-valued) residences limits housing affordability to low-income groups.In this case, the conflict between private interest and public goods is more prominent as local authorities endorse economic development as a planning priority.
Residential planning in Liaoning's MIDAs is notably leaning towards high-end and luxury lifestyle.Our site visits to Bayuquan in 2016 find high-rise apartment complexes with mostly three-bedroom units (with floor area around 90m 2 ) remain vacant.Local officials at the Bureau of Property Management in Yingkou reveal that the occupancy rate of residential dwellings at the industrial areas were only between 10% to 30%.Due to China's one-child policy, most working families have three household members and commonly reside in one-or two-bedroom units.Since apartment complexes are sold on price per square meter, developers could generate greater revenue from larger units.Nonetheless, the employees in these industrial plants are mostly blue-collar, low to middle income groups that cannot afford expensive dwelling units, even with companies' relocation subsidy.Their housing residences in older cities before relocation were mostly provided at a low cost under the socialist planning era (before the 1999 housing reform), and are very small in size (less than 50m 2 ).In our interview with steel plant workers at Bayuquan, 20 out of 25 workers have established families in Yingkou (100km roundtrip) and commute regularly.18 workers expressed that economic feasibility was the top reason for why they did not permanently relocate.Our further study of the central planning documents shows no indication of residential planning for low income groups or subsidized living, aggravating the conflict between economic development and social justice.
The Resource Conflict [Economic Development Environmental Protection]
The pursuit of economic development conflicts with natural resource preservation.The concept of economic-ecological conflict builds on the fundamental struggle between civilization and wilderness.Natural capital excavation allows mankind to develop modern, economically efficient, civilized living habitat but disorders the ecosystem by destructing landscapes, depleting fisheries, emitting greenhouse gases and producing harmful and toxic compounds.The fight for resources arises naturally as they are essential inputs to drive economic growth.Nonetheless, as economic development accelerates, climate change becomes inevitable and the associated high-cost natural disasters greatly distort social sustainability.
Resource allocation in Liaoning under central planning is beyond the demand at the local level and unsuitable for sustainable development.At Changxing Island Coastal Industrial Area, the gigantic landmass of 349.5 km 2 fails to generate meaningful economic productivity for the region.It was established in 2005 as a national-level economic and technological development area, designed to facilitate major petrochemical production.Ten years after the initial construction, Dalian Statistical Yearbook reported that the gross product of Changxing amounts to RMB 10.2 billion in 2015, which was equivalent to about $1.5 billion USD.Moreover, the total value of exported goods was reported to be $0.28 billion USD, significantly below similar developments in Japan (i.e.Keiyo's total value of exports in 2009 was $68 billion, yet occupying only 60 km 2 of land).
The MIDAs not only occupy a significant industrial landmass, they also have ample residential buildings planned and constructed.The small-in-population residential towns consisted of a surplus of coastal residential land, high vacancy and many abandoned projects which have led to serious waste of resources.Instead of planning for essential needs such as public schools, hospitals, commercial districts and local recreational facilities, Liaoning focused on pioneering high-end public infrastructures.In Panjin's Liaodong Bay New Area, national-level sports stadium, upscale shopping malls, lavish art centers and sports complexes have been completed ahead of other supportive facilities.PGA level golf course adjacent to Bayuquan's steel plant also fails to support local residential needs.Massive land reclamation and degradation of coastal wetland in the case of Liaoning could lead to serious biodiversity loss and ecological imbalance.The two competing values need to be carefully incorporated in urban planning to allow human to have a long-term harmonious interaction with the nature, while promoting economic expansion to improve standards of living.
The Development Conflict [Social Justice Environmental Protection]
Environmental ethic conflict arises when protective policies become incompatible with social justice.In developing countries, environmental protection may limit economic growth and affect mostly the bottom of society, exacerbating wealth inequalities between rich and poor.Further, in resource-dependent communities such as our case of interest, some workers view environmental preservation as a roadblock to economic opportunities.More specifically, industrial productions in coastal Liaoning is heavily concentrated in oil, gas and petrochemical sectors, and high compliance cost to meet elevated pollution standards may result in compressed labor wage, non-wage benefits and other welfare to employees.
In addition, Liaoning's current adopted measures such as retaining large-scale greenspace in the city center for urban beautification and environmental protection purpose drastically limit the effective land supply in residential areas, leading to higher housing prices with detrimental effect to the working class.A local real estate developer in Liaodong Bay told us that the mandatory greenspace per dwelling unit requirement meant that they had to budget for 20% increase in areas from land reclamation, reduce the number of dwelling units by increasing the floor area per unit, and plan for multiple rooftop/balcony terraces per building to meet the government requirement.Although residential land appears to be abundant in contrast to cases from Japan, South Korea and Singapore, the property prices were still pushed up burdening the cost of planning deficiencies.Carbon tax on emission also raises energy price, once again challenging life affordability of low-income groups.
Urban planners have to acknowledge that protecting the environment and delivering social justice may not always be compatible yet always indispensable to each other, which requires articulate policy design to establish a plausible middle ground.
The Lifestyle Conflict [Economic Development Public Health]
In general, economic growth is associated with greater income and subsequent higher health investments that improves public health conditions.The health of a population is critical to its sustainable development, and has inevitable effect on improving economic efficiency.Nonetheless, the connection between economic development and public health appears more complicated in the modern context.Planning conflict emerges as diseases triggered by obesity and sedentary lifestyles are tightly linked to urban planning and economic development.
More specifically, China's rapid economic growth in the recent decades has sharply increased its obese population, leading to rampant cases of diabetes and other related chronic cardiovascular diseases (Zhang, 2015).
Local authorities may initiate healthy living programs to encourage balanced lifestyles and formally incorporate obesity epidemic issues in public policy, but more importantly, urban planners have to consider public health as a primary planning objective in promoting sustainable development.
Another serious concern arising from our study is that industrial development along the coastlines of Liaoning further deteriorates public health conditions as massive relocation of production facilities to suburban areas resulted in extreme long-distance commute (>3 hours) for employees on a daily basis.Dalian Petrochemical Company has 44 shuttle buses operating daily to transport nearly 2000 workers from the city of Dalian to Pine Island Industrial Park.The buses depart at 6:40am from Dalian and leave Pine Island at 4:30pm, with the average commuting time exceeding 100 minutes each way.Similar phenomenon is identified in both Panjin's Liaodong Bay and Yingkou's Bayuquan.Long commute presents serious challenge to the general health and living quality of employees.In the long-run, stressful commute may lead to both physical and mental health concerns and require local planners to adopt effective measures to rebuild healthy lifestyles.Nie and Souza-Poza (2016) categorize commute time of more than one hour daily to be "extreme", which is found to be associated with lower levels of life satisfaction and happiness in the long-run.The long commute time also suppresses family leisure time, aggravates family separation and hinders happiness and life satisfaction.Furthermore, some of major real estate development projects have been abandoned after years of low sale and lack of infrastructural support.
The Quality of Living [Public Health Environmental Protection]
Environmental protection is essential to shield the society from negative externalities of industrialization.
Policies that control air, noise and water pollution would reduce associated respiratory and infectious diseases, but the healthcare system itself is energy-intensive and a major emitter of pollution (Eckelman and Sherman, 2016).Another conflict arises from measures adopted to improve public health that could undermine environmental protection.For example, to control the widespread of mosquito-borne viruses such as malaria, ZIKA and west nile, heavy use of chemical pesticides (DDT especially) may also destruct pollinators and subsequently cause adverse ecological distortions.
One of the planning objectives of the large-scale relocation of industrial plants from inland to coastal areas in Liaoning was to effectively use marine resources to control for pollution.The remote location and inadequate support for local living forced many employees to continue long-term commute that causes mental and physical destruction.In addition, the relocation has resulted in other environmental disruptions to local fisheries, natural habitats and other species.Newly developed residential towns also face shortages of public goods such as emergency medical centers, specialized hospital facilities and professional therapy services.In Bayuquan and Liaodong Bay, large-scale town squares were constructed to allow residents to engage in daily exercise, square dancing (popular group activity in China), and other recreational activities.Aimed at promoting public health through physical exercise, these well-designed, showcase level town squares involved significant farm land conversion and hinders environmental balance.Pesticides have to be applied to local ponds and greenspace to maintain beautification, which could affect public health.
The paradox is further embedded in land zoning and planning.To promote a better work-life balance and reduce commute time, many residential properties are planned with great proximity to the industrial plants with no visible separation by greenspace or non-industrial land (Figure 6b, c, and d).Excessive air and noise pollution from petrochemical plants lead to serious health concerns for local residents.In August 2016, we have conducted a random street interview near a residential complex located at Yingkou's Bayuquan industrial area, asking general questions about the residents' quality of living.Out of 12 residents interviewed, 10 have expressed a change of air quality since the completion of the adjacent steel plants (500 meters away).Ocean breeze carried dust and haze to their dwelling units, even preventing them from opening any window.Workers also have to wear facemasks to work to avoid inhaling excessive dust.This could also help to explain why workers were reluctant to relocate to the new plants and the emergence of the ghost towns.Although environmental protection and public health policies are commonly perceived to go hand-in-hand and mutually reinforcing, planners have to be more cautious to find ways to mediate and resolve rising conflicts between the two values.
Descriptive and Econometric Analysis
Our theoretical analysis reveals five planning conflicts in Liaoning's MIDAs but we are still puzzled by how these conflicts emerged and remained unresolved in light of failing economic performances and rising social tensions.Our interview with local town officials pointed us to land governance and the economic incentives behind excessive residential development.Fushun, Fuxin, Huludao, Jinzhou, Liaoyang, Panjin, Tieling andYingkou (1995 -2015).This is consistent with our previous conjecture that fiscal conditions in coastal cities are tightly connected to real estate activities, which provides local government with the incentive to initiate large-scale coastal industrial expansion to subsequently promote commercial and residential land development.In contrast, non-coastal cities exhibit a loose link between real estate investment and fiscal revenue.The non-coastal subpanel weakly rejects the first null hypothesis that changes in fiscal revenue do not Granger cause changes in real estate investment, at the 10% significance level.Furthermore, the test fails to reject the second null hypothesis that changes in real estate investment do not Granger cause changes in fiscal revenue.This finding implies that real estate development in non-coastal has a weak causal effect on fiscal conditions, which may mean less incentive for land development at a massive level.
Investment Condition and Market Speculation
China's housing market has sustained double-digit growth in average prices over the past decade, with some cities like Beijing and Shanghai experiencing more than 500% growth since 2003.To prevent potential bubbles and crises, the national government has initiated a series of housing purchase restrictions since 2010.For example, a "one-unit-per-household" policy has been administered in Dalian in 2010 limiting each household to purchase only one residential dwelling unit.For owners with more than one unit, the down payment requirement for the primary residence has increased to 30%, and 50% for the second housing unit.Lending for the third and more residential units has ceased altogether.Nonetheless, local governments were given some flexibility in administering purchase restriction policies and often excluded new economic development areas.In Dalian, the purchase restrictions were placed on urban residences only and largely excluded the Pine Island and Changxing Island coastal development areas.Investment demand and market speculation surged in these newly developed coastal areas and resulted in a sharp increase in new dwelling units constructed.Due to the lack of economic fundamentals and slow industrial growth, many residential projects have been abandoned and become unoccupied.Monetary tightening in 2015 has also added financial strains on real estate developers leading to construction delay and abandonment.News report of "ghost cities" in suburban areas of Dalian, Panjin and Yingkou have appeared since 2014, which was associated with a significant loss of economic efficiency and productivity as many of these developments occupy quality coastal land preventing industrial plants to access valuable marine resources.
Discussion
Our paper investigates a series of recent large-scale coastal industrial development projects in Liaoning, China, and analyzes the land-use pattern, industrial-residential balance, as well as economic activities and urban land development.We adopt a case study methodology to compare MIDAs in Japan, South Korea and Singapore, and find that similar projects in Liaoning have 1) much greater landmass and coverage; 2) high level of segregation between production plants and shipping port leading to low production efficiency; 3) shown prevalent characteristics of residential-dominant land development, possibly due to tight link to local government fiscal revenue; and 4) exhibited early signs of the emergence of "ghost cities" in newly established residential areas.
Our theoretical research contributes to identification of prominent issues and planning conflicts in Liaoning's coastal projects.Continued research is required to investigate the possible issue of over-supply in residential housing units and the inefficient use of prime coastal resources for industrial purposes.
We further conduct econometric analysis to examine the two-way causality relationship between local government fiscal revenue and real estate development.Using the panel Granger causality test approach, our results show that the two-way Granger causality is present in the whole-province panel of 14 cities, but there exists some coastal and non-costal separation in the relationship.More specifically, we confirm the presence of the two-way causality relationship for the subpanel of six coastal cities but could only weakly reject the null of changes of fiscal revenue do not Granger cause changes in real estate investment in non-coastal cities.
The role of urban planners also extends far beyond that of a mediator, but an innovative pioneer in guiding the society to become a greener, healthier, more advanced yet equitable built environment.We advise that planning authorities in Liaoning need to work closely with the local communities to first enhance public infrastructures of the newly-developed residential towns, expand service and supportive sectors to enhance the quality of living for relocated residents.Moreover, residential projects in planning stage must be carefully reviewed to avoid formation of "ghost towns" and provide sufficient affordable housing units to low-income families.In addition, industrial production requires increased integration to marine resources to improve efficiency and environmental preservation.Large-scale land reclamation projects in planning stage must also be critically assessed with respect to economic feasibility and potential ecological impact.The cooperation of authorities, environmental activists, enterprises and local residents is essential to promote sustainable development in the long-run.
Figure 1 .
Figure 1.GDP and GDP growth of Liaoning, China from 1993 to 2016 Note.Data is obtained from Liaoning Statistical Yearbook (1993 to 2015) and CEInet Statistics Database.
document the steady growth of shipping port lengths and volume in Liaoning since 2005 as the result of the aforementioned economic
2. 2
Land Development and Planning Despite individual characteristics, most MIDAs in Japan, South Korea and Singapore exhibit industrial-dominant development features including closely integrated shipping ports and industrial plants organized in clusters.In contrast, Liaoning's MIDAs show a strong residential-dominant land use pattern with limited utilization of marine resources planned for industrial plants.Figure 4a to Figure 4g illustrate the land use patterns of seven comparable MIDAs from Japan.Shipping ports are centrally located and surrounded by industrial plants, leading to a highly integrated land planning that minimizes local transportation costs, reduces inventory holding and production cycle, and increases the overall operation turnover.Figure 4. Land distributions of selected coastal industrial development areas in Japan Note.Yellow shaded areas represent petrochemical plants, blue shaded areas represent iron-steel production
Figure 6 .
Figure 6.Land distribution of selected MIDAs in Liaoning
Another debate in this literature centers on how China's decentralization is counterbalanced by the rise of state control in urban land governance, along with real-estate
Table 1 .
Landmass of major coastal industrial areas in Japan
Prefecture Location of Coastal Industrial Areas Total Landmass (km 2 ) Average Landmass (km 2 )
Official data realeased by Department of Transportation of Japan and our own calculation using GoogleEarth is used to compile the table.
Table 2 .
Overview of new large-scale coastal industrial areas in Liaoning
Municipality Coastal Industrial Areas Distance from City Center (km) Land Coverage (km 2 ) Leading Industry
Note.Data is obtained from Statistical Yearbook of Dalian, Yingkou, Panjin and Jinzhou Our hypothesis is that the declining economic trends in Liaoning added fiscal pressure to local governments who resorted to land sales and real estate tax as supplementary incomes.Empirical analysis supports the claim and concludes that real estate investments are tightly linked to local fiscal revenue at the city level.Lastly, purchase restrictions and real estate market speculation served as catalytic role in driving up the demand for real estate investments.China has initiated a series of fiscal reform since 1994 that divided central and local administrative powers, and granted up to 75% of VAT to be replenished to the central government.Local government has since relied heavily on land financing, land leasing fees and transfer fees to fund local economic growth.Figure8shows Liaoning's urban/township land use tax and land value-added tax as a percentage of total fiscal revenue from 2003 to 2012.Urban and township land-use tax (LUT) involves a broad tax base that includes all domestic enterprises, work units, individuals and household businesses, and collected on the basis of actual size of land occupied.Land value-added tax (LVT) is collected on taxpayers' income derived from transference of use rights of state-owned land and property rights of buildings.It is evident from the data that the proportion of both taxes in Liaoning's fiscal revenue have increased significantly over the past decade.In 2003, LUT and LVT are reported to respectively account for 1.69% and 0.62% of the provincial fiscal revenue, but grew to respectively 7.16% and 6.14% in 2012.Dong and Wang (2016)report that government running consistent fiscal deficits sold more land to finance budgetary operations, and rely more extensively on real estate conditions to finance economic projects.Liaoning has been running a fiscal deficit since the late 1990s and since relied substantially on government loans and real estate income to finance government investments.The proportion of land-related tax in total fiscal revenue has increased significantly since 2003, along with deteriorating fiscal income (fiscal revenue net of fiscal expenses) evidently shown in Figure9.In addition, land sales for residential and commercial real estate development has also increased substantially from RMB 3.47 billion in 2000 to 52.26 billion in 2011.Persistent debt and worsening economic conditions provide increasing incentives for the provincial government to promote land sales and real estate development, which may have been a leading factor for the residential-dominant development.
Table 4 .
Panel unit root test results Note. denotes the first order difference.*, **, and *** respectively represent rejections of the null at the 1%, 5% and 10% significance level. | 8,478.4 | 2019-03-30T00:00:00.000 | [
"Economics"
] |
A novel single-channel edge computing LoRa gateway for real-time confirmed messaging
LoRaWAN has become the technology of choice for increasing Internet of Things applications owing to its long range and low power consumption characteristics. However, in the uplink confirmed messaging cases, the entire retransmission could take several seconds, so it cannot be used in scenarios that require rapid confirmed messaging, such as emergency alerting and real-time controlling applications. Nevertheless, there has been limited work targeting this issue. This study presents a novel LoRaWAN gateway using edge computing to expedite the confirmed messaging process by generating the acknowledgment (ACK) locally, so that the confirmed messaging time can be significantly reduced. Additionally, the resource utilization of the network server can also be decreased due to the use of edge computing. We verified the effectiveness of our solution through extensive simulations and experiments. The confirmed messaging time between the end nodes and the gateway averaged 43 ms for a maximum of 2 retransmissions. With the adoption of edge computing on the gateway, the network server’s central processing unit (CPU), memory, and bandwidth peak utilization decrease from 53.51 to 39.46, 73.88 to 72.11%, and 4422.68 kbps to 3271.27 kbps, respectively. In addition, the network server’s system load decreases from 2.15 to 1.69, while the gateway cost is reduced by almost \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\$$$\end{document}$38 compared to the benchmark products.
• We proposed a novel single-channel gateway design by selecting an appropriate microcontroller unit (MCU) and LoRa module to significantly reduce the cost.An MCU is a compact integrated circuit designed to govern a specific operation in an embedded system.A typical microcontroller includes a processor, memory, and input/output peripherals on a single chip.The cost of MCUs can vary widely, and the details of the gateway cost calculation are illustrated in Table 1.• We designed an edge-computing LoRaWAN gateway to attain real-time confirmed messaging.This edge computing capabilities allow for real-time data transmission, processing, and performing security mechanisms directly at the network's edge, reducing the need for cloud server resources in terms of CPU, memory, bandwidth, and system load.• We experimentally validated the confirmed messaging time reduction to the millisecond level for edge- computing LoRaWAN gateways and reduced the network server resource usages 7 .
To improve readability, Table 2 summarizes the abbreviations used in this article.
Related work
Traditional LoRaWAN employs gateways for transparent transmission, and all data process is conducted on the cloud-based network server, which results in increased communication latency, network congestion, and network server load.In 8 , a novel distributed computing model is introduced and a latency-aware algorithm is used for edge node task processing.Computing functions such as data preprocessing and machine learning model training are incorporated into the LoRa gateway, which effectively improves network reliability.Compared to the conventional LoRaWAN, it reduces the CPU and bandwidth usage without affecting the system throughput.However, the experiments were conducted under network simulations, rather than using real-world LoRa end nodes.Our proposed method advances beyond simulations by implementing a novel single-channel gateway, which is experimentally validated to reduce confirmed messaging time to the millisecond level and decrease network server resource usage, thereby offering a practical solution for real-time applications.
In 9 , a low-cost LoRa and edge-computing-based system architecture was applied to assess forest fire occurrence.Nonetheless, this study provides limited insights into the specific implementation of the edge gateway, primarily focusing on the description and benefits of the application system, and lacks a detailed explanation of the advantages of the edge computing gateway.Furthermore, hardware components such as the MCU and LoRa modules used in the LoRa gateway are not exhaustively detailed, making it challenging to evaluate their cost.While our method provides a concrete design and experimental validation of an edge-computing LoRaWAN gateway, emphasizing real-time messaging and cost efficiency, thus offering a more comprehensive solution with potential applicability in a wider range of scenarios including emergency response.
The authors of 10 presented an adaptive spreading factor selection scheme with an iterative spreading factor detection algorithm to reduce erroneous spreading factor selection for single-channel LoRa networks.Nevertheless, the authors briefly mentioned the results of other studies without providing an in-depth analysis.Regarding the experimental setup, the authors mentioned the use of a LoRa modem connected to a Raspberry Pi but provided limited specifications about other hardware components and the experimental environment.Additionally, although this study evaluates the performance of single-hop and multi-hop LoRa networks, it lacks a comprehensive analysis of the impact of different network topologies and sizes.As a comparison, our proposed method is based on low-cost hardware components which is an advantage for large-scale applications of single-channel LoRa networks.The authors of 11 suggested to use of multiple single-channel gateways to achieve fair transmission for all end nodes in the network by dynamically adjusting the channel allocation.However, this paper did not explicitly describe the MCU used, and it did not implement edge computing into the gateways.This drawback may lead to longer confirmed messaging time, rendering them unsuitable for emergency applications or scenarios requiring high real-time responses.Our proposed gateway design incorporates edge computing to directly acknowledge messages, significantly reducing confirmed messaging time and demonstrating a clear advantage in scenarios requiring rapid responses.
In 12 , LoRaWAN and Wi-Fi protocols were heterogeneously integrated into farm environment monitoring and edge computing technology was introduced to improve the transmission distance and farmland monitoring range.However, the gateway adopts Raspberry Pi as the hardware platform of the IoT gateway as well and the LoRa chip adopts SX1302 and two SX1255, which makes the manufacturing cost of the gateway high.Furthermore, based on the standard LoRaWAN protocol, the confirmed messaging time could still reaches several seconds although edge computing is adopted.Our proposed method emphasizes cost-efficiency and real-time messaging, showcasing a practical implementation that could potentially offer better scalability and applicationspecific adaptability at a lower cost.
A proof-of-concept of an edge-assisted IoT-based architecture was designed and implemented to optimize existing LoRa-based IoT applications through edge computing to achieve higher data rates, better scalability, and lower latency in 13 .Similarly, in 14 , an effective monitoring system model for agricultural facilities was created using edge computing and artificial intelligence methods.The machine learning-based monitoring system helps farmers monitor plant conditions by predicting deviations from normal factors, irrigating plants efficiently, and optimizing the use of expensive chemicals.This edge computing and machine learning-based monitoring system can be applied to home greenhouse and industrial greenhouse networks.In 15 , energy management system data is compressed by edge computing techniques and communicated by LoRa networks to a power operator at a given time with minimal energy consumption.These works illustrate the potential of application layer edge computing in optimizing LoRa-based IoT applications.In 16 , a long-range, energy-efficient vision node called EdgeEye is introduced for long-term edge computing.EdgeEye utilizes a low-power processor, GAP8, and a low-power camera to enable a smart IoT device that can continue to work for many years and be powered by a battery.The system architecture of EdgeEye, the design and performance evaluation of convolutional neural networks, the estimation of energy consumption and battery life, and the application of edge computing in IoT devices are also described in detail.A landslide monitoring and early warning system based on edge computing is introduced in 17 , including the common transmission methods for communication of geohazard monitoring devices, the analysis and comparison of the characteristics and advantages of multiple communication technologies, and the design and development of the intelligent superposition-triggered monitoring model based on the multi-parameter joint-triggered intelligent algorithm.These studies implement edge computing at the application layer, aiming to enhance specific IoT applications through localized data processing and decision-making.While beneficial for targeted use cases, this approach inherently limits the scope of improvement to the application layer, which may not universally optimize network performance across all potential IoT use cases.Our proposed method applies edge computing at the data link layer, which is a more foundational level within the network stack.This not only ensures that the enhancements are applicable across a wider range of IoT scenarios but also addresses systemic inefficiencies that are not application-specific.The universal applicability of our approach makes it a fundamentally more robust solution for enhancing IoT networks through edge computing.
System architecture
Figure 1 illustrates the system architecture, which comprises various components: a mobile application, management platform, server (either cloud-based server or local server), LoRa gateway, and end nodes.Both the mobile application and management platform enable users to operate the end nodes, monitor their status, and receive uplink messages.The server is a pivotal component of this system, which includes a main server, a LoRaWAN server, a message queuing telemetry transport (MQTT) 18 server, a device management server, and a database.It offers functions such as data storage, processing, network services, and security.The server can be deployed either locally or cloud-based.The MQTT protocol is used to connect the server and the gateway.
The LoRa gateway plays a critical role in the architecture, serving as a central hub for data relay, processing, and security.The gateway and server are connected via Ethernet or 5G.A star network 19 topology with advantages such as easy maintainability, high reliability, and scalability is used between the gateways and the end nodes.
End nodes report data to and execute the control commands from the server.These end nodes include, for instance, smart switches, motion infrared sensors, microwave radar sensors, and magnetic sensors.The versatility of the end nodes makes it suitable for a wide range of applications, including smart homes 20 , building automation 21 , smart agriculture 22 , smart cities 23 , and environmental monitoring 24 .
Overall design
We considered various factors during the gateway design, including user-friendliness, ease of installation, debugging, and maintenance, low cost, and low confirmed messaging time.Figure 2 illustrates the hardware design of the gateway.We employed a 12-24 V direct current to direct current (DC-to-DC) converter to provide the power for the gateway.The core control unit is the MTK7688 25 module, which enables MQTT communication with the server via an Ethernet connection.The serial peripheral interface (SPI) 26 is used to communicate with the SX1278 27 module, which provides LoRa connections.Furthermore, the gateway sends debugging information through both the serial and Ethernet ports and illustrates 3 statuses via light-emitting diodes (LEDs), which www.nature.com/scientificreports/are power supply, online or offline, and packet transmission and reception.The key is designed for resetting the gateway.
MTK7688 module
For the selection of a core control unit, the primary considerations are high clock frequency, adequate memory (i.e., for edge computing), and ease of use.We have selected the MTK7688 module from LILDA Group 28 , which is cost-effective at approximately $ 8. Compared to alternatives like the Raspberry Pi 29 , which retails for $14, the cost is reduced by $ 6. Figure 3a shows the module.The core processing unit of the MTK7688 module utilizes an MT7688AN controller which integrates a 580 MHz MIPS®24KEc TM CPU.Its clock signals are produced by an external 40 MHz crystal oscillator which is shown in Fig. 3b.The Nanya 30 NT5TU32M16FG-AC 31 chip provides 64 MB of double data rate 2 (DDR2) flash memory.For nonvolatile memory, we employed a 16 MB flash memory namely MXIC25L112835FM21-10G 32 from Huabang 33 .
SX1278 module
The SX1278 module from Nanjing Renyu 34 enables a single-channel LoRa communication between the gateway and the end nodes, which is illustrated in Fig. 4. Its radio frequency (RF) circuitry is designed to match a 50 impedance.Table 1 presents the relevant component pricing from Digikey 35 , revealing that SX1278 is available at a cost of only $1.5, whereas the conventional multi-channel gateway comprises a digital baseband chip (i.e., SX1301 or SX1302 36 ) with a cost of approximately $18.Additionally, the cost of the two pieces of Multi- PHY mode transceiver (i.e.SX1255) is approximately $16.Therefore, the cost of a SX1278-based single-channel gateway is less than one-twentieth of the multi-channel one.Since our novel gateway targets rapid confirmed messaging, in which an ACK is required for each uplink transmission, using the multi-channel gateway will not benefit the network.In addition, we chose to set the spreading factor, and bandwidth of the network to 7 and www.nature.com/scientificreports/500 kHz to attain the highest data rate, so that the confirmed messaging which involves multiple retransmissions can be completed as soon as possible (i.e. the lowest confirmed messaging time).However, boosting the data rate results shortened communication distance.Accordingly, the number of gateways deployed needs to be increased to retain the LoRa signal coverage compared to the conventional long-range multi-channel gateway.As a result, the cost of rapid confirmed messaging gateway should be substantially reduced to be affordable for practical use.
Ethernet
The Ethernet is used as the communication method between the gateway and the server, which is based on Pulse Electronics's 37 H1102NL physical layer (PHY) 38 chip complying with the IEEE 802.3 specification 39 .This PHY chip employs differential signaling 40 for the transmissions.
LED
In the context of gateway applications, it is essential to employ LED indicators to provide a clear indication of the gateway's current status which includes the power-on status, connectivity with the cloud-based server (i.e., online or offline), and the transmission and reception of the LoRa packets.Consequently, we have incorporated four LED indicators for these purposes.The arrangement of the LEDs on the PCB is shown in Fig. 5.The first red LED signifies the power-on state, indicated by a constant illumination of the LED.The second green LED indicates the connectivity status, with the LED blinking and remaining unlit to denote the online, and offline status, respectively.The third green LED is designated for indicating the LoRa transmission and reception status, with one and two blinks, respectively.Lastly, the fourth green LED has been reserved for potential future use.
Printed circuit board design
Figure 6a,b depict the front and back sides of the circuit layout, respectively.To maintain impedance matching in the RF section and support the differential circuitry for the Ethernet, a four-layer printed circuit board (PCB) design was employed.The front and back layers contain the pads and lines that connect the pins between components, while the middle two layers are the power and ground layers.www.nature.com/scientificreports/
Functional design
Overall software design The block diagram of the software design is shown in Fig. 7.We chose the Openwrt-Linux 41 as the operating system for our gateway, and implemented the hardware driver layer for SX1278, Ethernet, LEDs, and UARTs, and middleware components, such as LoRaWAN MAC and application program interface as shown in the figure.
Task scheduling for the entire system was accomplished through four processes: • Fork_core processes the application request data and generates responses to the end nodes according to user's configurations.• Fork_mqtt is used to connect to the server via MQTT.
• Fork_lorapkt manages LoRaWAN communication functions with the end nodes.
• Fork_dispatch acts as a message router between the MQTT and LoRaWAN, facilitating message transfers.
At the same time, we employed four main threads: • Thread _ up processes packets received from the end nodes.
• Thread _ down handles the downlink data sent from the cloud-based server.
• Thread _ jit polls for pending downlink packets originated from the cloud-based server and dispatches them to the end nodes.• insert _ queue _ thread is responsible for managing the incoming server's data, processing it, and queuing it for distribution to end devices.
The MQTT protocol is used to connect the gateways and the network server due to its advantages of minimal packet overhead, efficient distribution to multiple clients, and reliable message queuing.However, the adoption of MQTT can not expedite the end-to-end delay for the LoRa confirmed messaging since the acknowledgment is still generated in the cloud-based network server, which results a long round trip time (RTT) for each transmission attempt.
Physical layer settings
The settings for the physical layers of the LoRa transceiver are listed in Table 3.Note that we chose the lowest LoRa spreading factor (i.e., 7) and the widest bandwidth (i.e., 500 kHz) to attain the highest data rate.The uplink receiving frequency of the gateway was set to a fixed frequency.On the contrary, the downlink frequency of the gateway was automatically adjusted according to the device address of the end nodes, which differs from www.nature.com/scientificreports/ the uplink frequency of the gateway.This arrangement prevents the problem of frequency conflict between the simultaneous downlink and uplink when the same one is used.
Edge computing: acknowledging design
In LoRa communications, packet losses between end nodes and gateways sometimes occur.Therefore, a wellplanned interaction-timing design can significantly enhance the reliability of LoRa communication. Figure 8 illustrates the timing when the gateway acknowledges the end node.The node initiates the transmission and starts a timer (i.e., T reTX_wait_time ) after the end of its transmission.Upon receiving data, the gateway performs the essential data verification checks and generates an ACK within a predefined t transition_time .Subsequently, the ACK is transmitted to the node.If the node successfully receives the ACK before the T reTX_wait_time timer expires, the transmission is completed.If not, the end node will start the first retransmission after the expiration of timer T reTX_wait_time .This process is repeated for up to three attempts.Then, the confirmed messaging is completed whether or not the node receives the ACK.
This approach differs from that of conventional LoRaWAN systems, in which the ACK response function is designed within the cloud-based server.In our design, the ACK response of the node occurs at the gateway.This modification reduces LoRa's confirmed messaging time.In addition, it reduces the computing load and network bandwidth usage on the servers and thus, improves the scalability of the servers.In addition, the standard LoRaWAN ACK packet format is used in our design, which does not carry any payload if the net server do not have data to send to the end node.
Edge computing: gateway registration and replacement method
The use of quick-response (QR) code 42 offers a quick, convenient, and secure method for integrating devices into IoT, which however, is not adopted by the conventional LoRaWAN.This approach minimized the risk of human error and expedited deployment processes.We designed 6 bytes to indicate the gateway's identification (ID).An example of the QR code for the gateway ID 9F1000000001 is shown in Fig. 9.The process of gateway registration to the cloud-based network server is illustrated in Fig. 10.Its ID number is inserted into the gateway database in the server as a new device.After that, an associated node list is created and sent to the gateway when the node list is not vacant.
It is conceivable that gateways inevitably experience malfunction or damage during their operational lifespan for various reasons.The simple way to handle this problem is to replace them with new ones.However, replacing the gateway with edge computing is more complicated than a transparent one (i.e., the conventional LoRaWAN gateway).To address this issue, we designed the gateway replacement function which is shown in Fig. 11.To replace an old gateway, first scan its QR code, then scan the new gateway's, and finally click the 'Replace' button on the dedicated mobile application.With these user operations, the old gateway is substituted by the new one in the database of the network server.In addition, to ensure the seamless continuity of all original edge computing functions, the server transfers the node list associated with the old gateway to the new one once it is powered on and online.www.nature.com/scientificreports/Edge computing: node list synchronization In conventional LoRaWAN, the application server and the network server keep node information (i.e., a node list) while the gateway does not.Thus, the ACK is generated by the NS and disseminated by the gateway which is chosen by the NS.This will cause the end node to wait for the ACK for a long time after sending confirmed data, and the real-time confirmed messaging is not achievable.Thus, we suggest establishing an association between each end node and a chosen gateway by sending the node information to the gateway so that the ACK can be generated by the gateway directly.Such association is created and managed by the user and the list of associated nodes for each gateway is generated and sent by the network server if a gateway requests the list or the list is modified (i.e., the synchronization of node list between gateways and network server).The synchronization procedure that is shown in Fig. 12 is as follows: • Upon connection to the server, the gateway initiates a request to the server for its node list.Subsequently, the server sends the relevant list to the gateway.• In cases where a node (e.g., Node A) is added by the user, the server notifies the gateway, which then adds Node A to the node list.• If the user removes a node (e.g., Node B) in the network server, the server notifies the gateway to perform the same operation.Then, the gateway conducts the removing and reports the result to the server.• When a replacement operation is initiated by the user (e.g., use Node C to replace Node D), the server issues a replacement command to the gateway.After performing the replacement, the gateway will also report the result to the server.
This list synchronization process ensures that each gateway's end node list is identical to the one on the network server-side.
Edge computing: security mechanisms
Traditional LoRaWAN devices use a unique device identifier (DevEUI) and a pre-shared key (i.e., AppKey) to authenticate themselves to the network.During the initial join process, the device and the network server perform a mutual authentication process, and the session keys (i.e., NwkSKey and AppSKey) are derived 43 .The NwkSKey is used for securing messages on the network layer, ensuring the integrity and authenticity of the messages exchanged between the devices and the network server, while AppSKey is used for end-to-end encryption of the payload at the application layer.It ensures that the application data remains confidential between the end device and the application server.In addition to the node list, these two keys are also disseminated to the www.nature.com/scientificreports/associated edge computing gateway by the NS, so that the traditional security mechanism can be resumed in our proposed method.
Performance evaluation metrics
The LoRa network performance evaluation metrics include the confirmed messaging time, packet reception ratio (PRR), received signal strength indicator (RSSI), and signal-to-noise ratio (SNR).The performance of the network server which is deployed on AliCloud 44 is evaluated in terms of CPU and memory utilization, network bandwidth, and system load.
Confirmed messaging time
The confirmed messaging time refers to the time interval between an end node sending confirmed data and receiving ACK from the gateway or vice versa.Numerous factors contribute to the confirmed messaging time for our edge computing gateway: • Time required for parsing, checking, and verifying data after reception by gateway.
• Time consumed by other processes or threads on the gateway.
• Time taken by the gateway to generate the corresponding ACK.
• Parsing and verification of data by the node after receiving ACK.
• Packet transmission delay depends on the packet length and LoRa physical layer parameters.
PRR
The PRR is the ratio of the total number of acknowledged packets to the total number of transmitted packets.
The PRR quantifies the rate of successful packet reception.It is computed as follows: where N TX is the total number of gateway transmissions, and N RX is the total number of returned ACKs.Numerous factors affect the PRR, including communication distance, obstructions, antenna attenuation, transmitting and receiving antenna gain, and LoRa physical layer parameters of both the nodes and gateways.
LoRa RSSI Equation ( 2) defines the RSSI 45 , which is a widely used term in various wireless technologies 46 .In our experiments, the RSSI value is read directly from the LoRa chip's registers.
where P rx denotes the received signal power in mW.
SNR
By comparing the signal power to the noise power, the SNR calculates the signal quality 47 .An increased SNR indicates a stronger signal, which improves transmission efficiency and signal quality.The SNR in this paper is computed using the following formula: where P SP and P NP are the signal and noise powers, respectively.A SP denotes the amplitude of a signal in dBm, and A NP is the amplitude of noise.
Network server system load
In addition to the mentioned metrics to evaluate the performance of the network server, we also adopted the system load which is defined as the number of processes running on the server, including the number of processes running and waiting to run.System load can reflect how busy the server is.Specifically, the system load is rated in a range of 0 to 4 with the larger number indicating greater server resource usage.Moreover, instead of the instant system load, we focus on the average system load over a certain period, and we chose 3 periods: 1 min, 5 min, and 15 min.Then, the averaged system load denoted by L t, p for period p at the time instant t can be computed as follows: where r represents the instant rate of the system load.
Results
This section describes the obtained results of resource usage reduction on the network server and the gateway performance.In our study, we ensured comparability between tested environments by standardizing the experimental setup across all scenarios.This included using a consistent number of nodes, identical hardware specifications for the server, and a uniform software environment across tests.We simulated the network server's www.nature.com/scientificreports/load using a controlled set of applications to mimic real-world traffic and load patterns.By maintaining these consistent parameters, we aimed to accurately measure the impact of our edge-computing LoRa gateway on reducing CPU, memory, and bandwidth utilization, ensuring a fair comparison between the conventional cloudbased processing model and our proposed edge-computing approach.
Network server resources usage reduction
When conducting experiments to evaluate how our new LoRa gateway design could alleviate the network server's resource usage, we first used only 100 nodes.However, the results of the experiments fluctuated due to environmental factors, making it challenging to obtain the precise values of the network server's CPU, memory, and bandwidth utilization.Conversely, using numerous nodes, such as 10,000, for the experiment can significantly increase the costs.To address these issues, the reduced resource usage of the network server was measured using simulated nodes and gateways.Table 4 lists the configurations of the network server deployed in AliCloud.Then, we chose two categories of end nodes: switch and infrared sensor, which are shown in Fig. 13.An infrared sensor, a switch, a gateway, and the network server form an application as shown in Fig. 14.When the infrared sensor detects the movement of someone, it will transmit the movement-detected message to the gateway, which is forwarded by the gateway to the network server.After the message is received by the network server, it will send an activation command to the switch through the gateway as a response to the movement detection.
During the simulation, the numbers of switches, gateways, and infrared sensors were set to 10,000 each.As a result, they formed 10,000 concurrent applications which were divided equally into 600 concurrent groups and executed concurrently and asynchronously using multi-thread in the network server.The environment to run the simulation are shown in Table 5.
To compare the specific resource utilization of the network server, we conducted two sets of experiments.In the first set, the network server is responsible for generating both light activation commands (i.e., the application commands) and the acknowledgments for the confirmed messages sent by the infrared sensors.As a result, the gateway operates in the conventional way (i.e., transparent mode used in the current LoRaWAN standard).As a comparison, in the second set, the acknowledgments are generated by our edge computing gateway, and the network server is only responsible for the application command generation.Each experiment was conducted 100 times.
Figure 15a illustrates the CPU utilization of the two experiment sets.When the network server generates and sends ACKs for the 10,000 applications, the CPU utilization consistently exceeds 60%.In contrast, when the network server does not need to generate ACKs, the CPU utilization typically declines to approximately 50% or 40%.Figure 15b demonstrates that the memory utilization constantly stays at 78% with negligible fluctuations regardless of whether the network server generates and sends ACKs or not, This is due to that the network server currently has a memory of 64GB which is far more than the 10,000 applications need.
Figure 15c illustrates the bandwidth utilization for the input and output traffic when the network server whether nor not generates and sends ACKs.If it does, the peak input and output bandwidth consumption mostly exceeds 3M and can reach a level of approximately 4M.If it doesn't, these two metrics are mostly below 3M. Figure 15d shows the averaged system load of the two experiment sets with the 3 chosen averaging periods (i.e., denoted by 1-min, 5-min, and 15-min).When the network server generates and sends ACKs, its 1-min system load can be beyond level 3, while if it doesn't, the 1-min system load is always below level 2. If we observe the 5-min system load, generating the ACKs makes the system load of the network server slightly higher than that not.Furthermore, if the averaging period is extended to 15 min, the two experiment sets exhibit nearly identical system load, since the server is idle for the duration of 15 min.
In addition to the instant resource usage, we also analyzed its distribution over the entire experiment time.Figure 16a illustrates the CPU utilization distribution.As it can be seen, the CPU utilization concentrates at approximately 4.23% when no application needs to be executed (i.e., static as shown in the Fig. 16a).When the application threads without the generation of ACK are executed, the CPU utilization approximately evenly distributes in a range of 30% to 50%, with an average value of 39.46%.Conversely, when the network server generates ACKs, the CPU utilization varied between 45% and 60%, with an average of 53.51%.The difference between these two average values is 14.05%.
Figure 16b shows the memory utilization difference although it is very small.The memory usage of the static network server concentrates at approximately 72.11%.When the network server does not generate ACK, its memory utilization drops to a range of 73% to 75%, with an average of 73.88%.In the other case, memory usage increases to the range of 75% to 78%, with an average of 76.17%, which yields a difference of 2.29%, compared to the case of no ACK generated.
The bandwidth of the static network server is highly concentrated at 754.80 kbps as shown in Fig. 16c.The bandwidth usage without ACK generation varies between 2000 and 4000 kbps, with an average of 3271.27kbps.Conversely, when the network server generates ACKs, the bandwidth usage increases to a range of 3000 to 8000 kbps, with an average of 4422.68 kbps, which yields a difference of 1151.41 kbps with the former average value.
In Fig. 16d, the set of 1-min curves reveals the remarkable system load difference between the cases of static, with, and without ACK generation.In the case without ACK generation, the system load concentrates in a level range of 1 to 2, with an average value of 1.69.However, when the ACK is generated, the system load mostly stays in the level range of 2 to 3, with an average value of 2.15.This results in a difference of 0.46 between the two values.As a comparison, the set of the 5-min curves shows that the system load difference between with and without ACK generation is much smaller than that of the 1-min.This difference continues to be smaller when the averaging period extends to the setting of 15-min.In a word, these comparisons demonstrate that employing our edge computing gateway to fulfill the acknowledgment function for the network server can greatly reduce various resource usage.
Note that the observed percentage improvements shown in Figs. 15 and 16 are the result of implementing local acknowledgment generation for end nodes, which includes parsing uplink packets, handling retransmissions, and conducting security mechanisms.Parsing acknowledgment packets directly at the gateway reduces latency and processing overhead on the network server.Locally handling retransmission minimizes the need for data packets to travel back to the network server for retransmission decisions, thereby reducing bandwidth usage and improving throughput.Integrating security mechanisms at the edge (i.e., the gateway) attains data integrity and confidentiality without imposing significant additional load on the network server.
It is also worth pointing out that based on extensive observations over a prolonged period, the inherent variability in system resource consumption (i.e., CPU, memory, and bandwidth) has been found to be very low compared to the significant improvement in these resource utilization brought by our approach.By demonstrating reductions well beyond the normal fluctuations of server resource use, our findings underscore the
Gateway performance
For the gateway performance measurement, we used 8 nodes to transmit 100 packets each to a gateway, which are shown in Fig. 17.The overall PRR results are shown in Fig. 18.Its minimum and maximum values are 96%, and 98%, respectively, and the average value is 97.38%.The PRR of a specific end node is quite similar because we placed all 8 end nodes at the same distance to the gateway.
Figure 19a shows the confirmed messaging time, which is primarily within the range of 35 to 50 ms with an average value of approximately 43 ms.The highest distribution density of the time is around 37 ms for all end nodes.The explanation for such a phenomenon is that based on the LoRa transmission time calculation formula 27 , with the selected LoRa physical layer parameters, the node takes approximately 19 ms to send 39 bytes of data, whereas the gateway responds with a 19-bytes ACK in approximately 10 ms (i.e., 29 ms for LoRa packet time on air in total).The remaining time (i.e., approximately 8 ms) is consumed by other aforementioned tasks.In addition, it is worth noting that most of the confirmed messaging succeeded on the first attempt.Very few times are beyond 70 ms which succeeded within the second attempt.The third transmission attempt is never used during our experiment as none of the measured times is beyond 80 ms.As a comparison, in the traditional LoRaWAN Class A standard, it takes more than 6 s to complete a confirmed messaging for a maximum of 2 retransmission attempts since in each attempt, the retransmission timer expires in 2 s.
The RSSI values are illustrated in Fig. 19b, which are distributed between − 35 and − 15 dB.Despite being located in the same experimental locations, the eight nodes exhibited varying RSSI values obtained at the gateway, which can be attributed to hardware differences, such as the antennas.In contrast, the SNR value remained relatively stable at approximately 9 dB for all end nodes, which is illustrated in Fig. 19c.Such a high SNR value indicates that the signal quality between the end nodes and the gateway is very high, which is due to the close proximity between the end nodes and the gateway.The gained results are summarized in Tables 6 and 7.
Scalability
The proposed edge computing-enhanced LoRa gateway is primarily designed and tested within the context of a star network topology, which inherently benefits from the centralized nature of communication between end nodes and the gateway.This design facilitates direct and efficient management of data traffic, simplifies network management, and enhances scalability by allowing straightforward addition of end nodes without significantly affecting the existing network infrastructure or performance.Transitioning to a mesh network topology introduces the capability for end nodes to dynamically route messages through multiple hops to reach the gateway.While this can enhance coverage and redundancy, it also complicates network management and could impact the gateway's performance due to increased computational overhead for handling routing information and potentially higher data traffic, which restricts the scalability of a mesh network.Note that LoRa has long-range communication capability.Thus, the multi-hop feature is not urgent, and we don't recommend using this topology.Last, our LoRa star topology suffers the same scalability problem as bus topology, which is caused by sharing a single channel among the nodes with the gateway (i.e., packet collisions).However, this issue can be mitigated by the utilization of multiple parallel channels allocated for LoRa communications.This might be an additional task for the gateway to manage the use of the channels to avoid packet collisions.
Comparison with Wi-Fi technology
First, the proposed LoRa gateway is specifically designed for low-power operations, making it highly suitable for applications where devices need to operate on battery power for extended periods.This contrasts with Wi-Fi routers, which are designed for high throughput and are typically powered by a continuous power supply.Second, LoRa technology provides long-range communication capabilities, allowing the proposed gateway to connect devices over distances of several kilometers in rural or open areas.This is a significant advantage over Wi-Fi, which is limited to shorter ranges, typically within tens of meters.Third, due to the low bandwidth and data rate requirements of typical LoRa applications, the proposed LoRa gateway offers a cost-effective solution for deploying IoT networks.This is particularly beneficial for applications that do not require the high data throughput provided by Wi-Fi routers.
Hardware compatibility
The proposed edge-computing LoRa gateway is designed with a modular architecture, allowing for easy updates and integration with new technologies.This design philosophy ensures that as 5G small cell technology evolves, the gateway can be updated or adapted with minimal hardware modifications.The use of standard www.nature.com/scientificreports/communication interfaces (e.g., Ethernet, SPI) and protocols (MQTT for server communication) further enhances this compatibility, as these are widely adopted in 5G infrastructure.
Conclusion
In this paper, we have proposed and implemented an edge-computing single-channel LoRaWAN gateway using MTK7688 and SX1278 modules.In numerous emergency and control applications, real-time confirmed messaging is desired, which, however, is not supported by current gateways.To address this issue, we have proposed a novel edge computing gateway that is able to acknowledge the uplink confirmed message directly to achieve the lowest confirmed messaging time.In addition, transferring the acknowledgment function to edge-computing also relieves the CPU, memory usage, and system load on the network server.In our subsequent work, we will further integrate features such as user application functionalities and local command generation into edge-computing to enhance the edge computing capabilities of the gateway and further reduce resource usage for the network server.
Figure 1 .Figure 2 .
Figure 1.System architecture which comprises: a mobile application, a management platform, a server (cloudbased or local), LoRa gateways, and end nodes.Both the mobile application and management platform enable users to operate the end nodes, monitor their status, and receive messages.
Figure 3 .
Figure 3. (a) Assembled MTK7688 module.(b) MTK7688 module which comprises a 40 MHz external crystal oscillator, flash and DDR2 memories, and a group of peripheral interfaces.
Figure 5 .
Figure 5.The arrangement of the Led on the PCB.
Figure 6 .
Figure 6.The PCB of Gateway.(a) Front side of the PCB design.(b) Back side of the PCB design.
Figure 8 .Figure 9 .
Figure 8. Gateway ACK timing design.The end node initiates its transmission and awaits the gateway's ACK for a predefined time interval.
Figure 10 .
Figure 10.The gateway registration to the cloud-based network server through a QR code.Add the node list download process.
Figure 11 .
Figure 11.The gateway replacement function.To replace an old gateway, first scan its QR code, then scan the new gateway's, and finally click the 'Replace' button in a dedicated mobile application.
Figure 12 .
Figure 12.Process of synchronizing the node list on the gateway with the one on the network server.
Figure 13 .
Figure 13.The switch and infrared sensor.
Figure 14 .
Figure 14.An application involving an infrared sensor, a switch, a gateway, and the network server.
Figure 16 .
Figure 16.Server performance consumption for 100 Tests.(a) CPU utilization for 100 tests.(b) Memory utilization for 100 tests.(c) Bandwidth utilization for 100 tests.(d) System load for 100 tests.
Figure 17 .
Figure 17. 8 end nodes and the gateway.
Table 1 .
The cost of main components of traditional and proposed LoRaWAN gateways.
Table 2 .
List of abbreviations.
Table 7 .
Communication performance results of the proposed edge-computing gateway. | 9,220.4 | 2024-04-10T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
E ff ect of 0.8 at.% H on the Mechanical Properties and Microstructure Evolution of a Ti–45Al–9Nb Alloy Under Uniaxial Tension at High Temperature
: To investigate the e ff ect of hydrogen on the high-temperature deformation behaviors of TiAl-based alloys, the high-temperature tensile experiment was carried out on a Ti–45Al–9Nb (at.%) alloy with the H content of 0 and 0.8 at.%, respectively. Then, the e ff ect of hydrogen on the high-temperature mechanical properties of the as-cast alloy was studied, the constitutive relations among stress, temperature, and strain rate were established, and the microstructure was analyzed. The results indicated that, compared with the unhydrogenated alloy, the flow stress of the hydrogenated alloy was significantly reduced, and the peak stress of the hydrogenated alloy decreased by (16.28 ± 0.17)% deformed at 1150 ◦ C / 0.0004 s − 1 . Due to the presence of hydride (TiAl)H x in the alloy, the elongation showed a decline trend with increasing strain rate at the same deformation temperature. Compared with the unhydrogenated alloy, the elongation of the hydrogenated alloy reduced by (26.05 ± 0.45)% (0.0004 s − 1 ), (23.49 ± 0.38)% (0.001 s − 1 ), and (14.23 ± 0.19)% (0.0025 s − 1 ), respectively, indicating that 0.8 at.% H softened the Ti–45Al–9Nb alloy and reduced the high-temperature plastic deformability. Under the same deformation condition, the deformation extent of the hydrogenated alloy was less than that of the unhydrogenated alloy. There were more residual lamellae in the hydrogenated alloy, and the extent of dynamic recrystallization was lower than that of the unhydrogenated alloy.
Introduction
TiAl-based alloys are characterized by low density, high specific strength, excellent oxidation resistance, and creep resistance at high-temperature, so it is considered as one of the most promising high-temperature lightweight structural materials used in key components such as aerospace aircraft and automobile engines [1][2][3]. Due to the poor plasticity of TiAl-based alloys at room temperature, it is difficult to process and deform at room temperature. Furthermore, TiAl-based alloys have a high flow stress even in the thermal deformation process, so high performance dies and equipment are required in plastic forming [4,5]. Therefore, this requires that the dies and equipment can bear a higher load at temperatures of more than 1000 • C, and can work at a high-temperature for a long time, which virtually increases the cost and affects its practical process. Hence, these problems have become serious restrictions in the application of TiAl-based alloys.
In order to reduce the flow stress in the hot working process of TiAl-based alloys, researchers have mainly adopted an alloy composition design and microstructure control [6,7]. A large number of
Hydrothermal Treatment
The tensile samples first underwent high-temperature hydrogenation. The specific process was as follows: first, the samples were placed in acetone for ultrasonic cleaning for 20 min, and then placed in a tube furnace; after vacuum up to 10 −3 Pa, argon gas was filled; after the furnace temperature rose to 800 °C, the hydrogen was filled with an absolute pressure of 0.1-0. 15 MPa, and then the samples were soaked for 2 h. When the furnace was cooled to room temperature, samples with H content of 0.8 at.% (abbreviated as 0.8 H below) were finally obtained. The highest hydrogen content obtained by the current hydrogenation equipment was only 0.8 at.%, which had the most significant effect on the mechanical properties and microstructure evolution of the present alloy, and so the study focused on a 0.8 at.% hydrogen. The hydrogen content was examined by a LECO-ROH600 oxygen/hydrogen analyzer (LECO, St Joseph, MI, USA), with an accuracy of 0.01 ppm. The error of the hydrogen content was ±3%. In order to accurately compare and study the effect of H on the high-temperature tensile deformation behavior and microstructure evolution of the Ti-45Al-9Nb alloy, the samples without hydrogen had a vacuum heat treatment with the same heat treatment system (without hydrogen addition).
High-Temperature Tensile Test
High-temperature uniaxial tensile test of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloy samples was carried out on an MTS 880 universal tensile test machine by using the equivalent strain rate tensile method. The specific experimental process was as follows: first, the surface of the samples was sprayed with antioxidant alumina to prevent the surface oxidation; second, the samples were heated to the test temperature in a three-section circular resistance furnace, and had heat preservation for 10 min. The test temperatures were 1050, 1100, and 1150 °C and the strain rates were 0.0004, 0.001, and 0.0025 s −1 , respectively, and water-quenching was carried out immediately after the test. Finally, the stress-strain curves and deformed samples were obtained. The deformation behaviors of the alloy at the temperature of 1150 °C and the strain rate of 0.0004-0.0025 s −1 were mainly investigated in this paper.
Microstructural Analysis
The gauge part of samples after tensile deformation was wire-electrode cut, and then the surface to be observed was ground with 240, 400, and 600-grit SiC papers. Finally, electropolishing was carried out. The electrolytic polishing solution was 60% methanol + 34% n-butanol + 6% perchloric acid, the power supply voltage was adjusted to 20 V, the current maintained at 0.5-0.6 A, and the electrolysis time was 50 s.
Scanning electron microscopy (SEM) (JSM-7800F, Jeol, Tokyo, Japan) was used to analyze the microstructure of the gauge part of samples after high-temperature tension.
Hydrothermal Treatment
The tensile samples first underwent high-temperature hydrogenation. The specific process was as follows: first, the samples were placed in acetone for ultrasonic cleaning for 20 min, and then placed in a tube furnace; after vacuum up to 10 −3 Pa, argon gas was filled; after the furnace temperature rose to 800 • C, the hydrogen was filled with an absolute pressure of 0.1-0.15 MPa, and then the samples were soaked for 2 h. When the furnace was cooled to room temperature, samples with H content of 0.8 at.% (abbreviated as 0.8 H below) were finally obtained. The highest hydrogen content obtained by the current hydrogenation equipment was only 0.8 at.%, which had the most significant effect on the mechanical properties and microstructure evolution of the present alloy, and so the study focused on a 0.8 at.% hydrogen. The hydrogen content was examined by a LECO-ROH600 oxygen/hydrogen analyzer (LECO, St Joseph, MI, USA), with an accuracy of 0.01 ppm. The error of the hydrogen content was ±3%. In order to accurately compare and study the effect of H on the high-temperature tensile deformation behavior and microstructure evolution of the Ti-45Al-9Nb alloy, the samples without hydrogen had a vacuum heat treatment with the same heat treatment system (without hydrogen addition).
High-Temperature Tensile Test
High-temperature uniaxial tensile test of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloy samples was carried out on an MTS 880 universal tensile test machine by using the equivalent strain rate tensile method. The specific experimental process was as follows: first, the surface of the samples was sprayed with antioxidant alumina to prevent the surface oxidation; second, the samples were heated to the test temperature in a three-section circular resistance furnace, and had heat preservation for 10 min. The test temperatures were 1050, 1100, and 1150 • C and the strain rates were 0.0004, 0.001, and 0.0025 s −1 , respectively, and water-quenching was carried out immediately after the test. Finally, the stress-strain curves and deformed samples were obtained. The deformation behaviors of the alloy at the temperature of 1150 • C and the strain rate of 0.0004-0.0025 s −1 were mainly investigated in this paper.
Microstructural Analysis
The gauge part of samples after tensile deformation was wire-electrode cut, and then the surface to be observed was ground with 240, 400, and 600-grit SiC papers. Finally, electropolishing was carried out. The electrolytic polishing solution was 60% methanol + 34% n-butanol + 6% perchloric acid, the power supply voltage was adjusted to 20 V, the current maintained at 0.5-0.6 A, and the electrolysis time was 50 s.
Scanning electron microscopy (SEM) (JSM-7800F, Jeol, Tokyo, Japan) was used to analyze the microstructure of the gauge part of samples after high-temperature tension. An X-ray diffractometer (XRD) (BRUKER Company, Karlsruhe, Germany, model: D8 ADVANCE) was used to analyze the phase. The radiation light used in the experiment was Cu Kα with a wavelength of 1.5418 Å, the generator's power was 1.6 kW (40 kV, 40 mA), the continuous scanning range was 10 • -90 • , the scanning rate was 0.2 • /s, the step scanning step length was 0.02 • , and each step lasted for one second.
Results and Discussion
3.1. Effect of Hydrogen on the Microstructure of Alloy at Room Temperature SEM microstructures of the unhydrogenated and hydrogenated alloys are shown in Figure 2. Note that the dark gray phase is the γ phase, the light gray phase is the α 2 phase, and the bright white phase is the B2 phase. The α 2 phase is the ordered phase of the α phase at low temperature, and the B2 phase is the ordered phase of the β phase at low temperature. The microstructures of both alloys were near-lamellar, which were mainly composed of γ/α 2 -lamellar colonies with an average size of about 800 µm. In addition, a small number of equiaxed γ grains and irregular B2 grains were distributed along the lamellar boundaries. An X-ray diffractometer (XRD) (BRUKER Company, Karlsruhe, Germany, model: D8 ADVANCE) was used to analyze the phase. The radiation light used in the experiment was Cu Kα with a wavelength of 1.5418 Å, the generator's power was 1.6 kW (40 kV, 40 mA), the continuous scanning range was 10°-90°, the scanning rate was 0.2°/s, the step scanning step length was 0.02°, and each step lasted for one second.
Effect of Hydrogen on the Microstructure of Alloy at Room Temperature
SEM microstructures of the unhydrogenated and hydrogenated alloys are shown in Figure 2. Note that the dark gray phase is the γ phase, the light gray phase is the α2 phase, and the bright white phase is the B2 phase. The α2 phase is the ordered phase of the α phase at low temperature, and the B2 phase is the ordered phase of the β phase at low temperature. The microstructures of both alloys were near-lamellar, which were mainly composed of γ/α2-lamellar colonies with an average size of about 800 μm. In addition, a small number of equiaxed γ grains and irregular B2 grains were distributed along the lamellar boundaries. Figure 3 shows the X-ray diffraction patterns of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloys. Both the unhydrogenated and hydrogenated alloys were composed of a large amount of the γ phase (L10 crystal structure, a = b = 0.4005 nm, c = 0.407 nm, a/c = 0.984), a certain amount of the α2 phase (D019 crystal structure, a = b = 0.578 nm, c = 0.465 nm, a/c = 1.243), and a very small amount of the B2 phase (CsC1 crystal structure, a = b = c = 0.316 nm). The diffraction peaks of the α2 phase, γ phase, and B2 phase were basically unchanged after hydrogen addition, and the intensity of the diffraction peaks of the α2 phase and B2 phase was slightly stronger than that of the unhydrogenated alloy, indicating that hydrogen increased the content of the α2 phase and B2 phase. In addition, the diffraction peak of the (TiAl)Hx hydride was found at 2θ = 35.46° after hydrogen addition. Meanwhile, the hydride had a tetragonal crystal structure with lattice constants a = 0.452 nm, c Figure 3 shows the X-ray diffraction patterns of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloys. Both the unhydrogenated and hydrogenated alloys were composed of a large amount of the γ phase (L1 0 crystal structure, a = b = 0.4005 nm, c = 0.407 nm, a/c = 0.984), a certain amount of the α 2 phase (D0 19 crystal structure, a = b = 0.578 nm, c = 0.465 nm, a/c = 1.243), and a very small amount of the B2 phase (CsC1 crystal structure, a = b = c = 0.316 nm). The diffraction peaks of the α 2 phase, γ phase, and B2 phase were basically unchanged after hydrogen addition, and the intensity of the diffraction peaks of the α 2 phase and B2 phase was slightly stronger than that of the unhydrogenated alloy, indicating that hydrogen increased the content of the α 2 phase and B2 phase. In addition, the diffraction peak of the (TiAl)H x hydride was found at 2θ = 35.46 • after hydrogen addition. Meanwhile, the hydride had a tetragonal crystal structure with lattice constants a = 0.452 nm, c = 0.326 nm, and c/a = 0.721 [16]. The hydrogenation treatment of TiAl-based alloys was achieved by the diffusion of hydrogen atoms. In the diffusion process, hydrogen was first decomposed into hydrogen atoms and bumped into the surface of the samples. Due to a large number of defects and higher energy in the grain boundary or phase boundary, a channel was provided for the diffusion of hydrogen atoms. Therefore, hydrogen atoms preferentially diffused in a short range along the grain boundary or phase boundary, so the hydrogen concentration at the grain boundary or phase boundary reached saturation in a short time. Therefore, the concentration of hydrogen atoms at the grain boundary or phase boundary was relatively high, which could easily meet the requirements of composition fluctuation and energy fluctuation for hydride nucleation. When the hydrogen content exceeded its saturated solid solubility, the hydrogen combined with titanium aluminum to form titanium aluminum hydride.
Coatings 2020, 10, x FOR PEER REVIEW 5 of 17 = 0.326 nm, and c/a = 0.721 [16]. The hydrogenation treatment of TiAl-based alloys was achieved by the diffusion of hydrogen atoms. In the diffusion process, hydrogen was first decomposed into hydrogen atoms and bumped into the surface of the samples. Due to a large number of defects and higher energy in the grain boundary or phase boundary, a channel was provided for the diffusion of hydrogen atoms. Therefore, hydrogen atoms preferentially diffused in a short range along the grain boundary or phase boundary, so the hydrogen concentration at the grain boundary or phase boundary reached saturation in a short time. Therefore, the concentration of hydrogen atoms at the grain boundary or phase boundary was relatively high, which could easily meet the requirements of composition fluctuation and energy fluctuation for hydride nucleation. When the hydrogen content exceeded its saturated solid solubility, the hydrogen combined with titanium aluminum to form titanium aluminum hydride. In order to quantitatively study the content of each phase in the unhydrogenated and hydrogenated alloys, a quantitative XRD analysis was conducted. Figure 4 shows the relative volume fraction of the γ, α2, B2, and (TiAl)Hx phases in the unhydrogenated and hydrogenated alloys, which are calculated based on the integration area of the diffraction peaks. The contents of the γ, α2, and B2 phases in the unhydrogenated alloy were 77.54%, 20.89%, and 1.57%, respectively, while the contents of the γ, α2, and B2 phase in the hydrogenated alloy were 69.31%, 26.18%, and 2.85%, respectively. In general, hydrogen treatment can reduce the content of the γ phase because adding H can effectively promote the diffusion of elements and distort the γ phase lattice, thus promoting the γ → α2 phase transformation [17]. In addition, the content of the B2 phase in the hydrogenated alloy was also slightly increased, indicating that hydrogen can stabilize the B2 phase and promote its precipitation. In order to quantitatively study the content of each phase in the unhydrogenated and hydrogenated alloys, a quantitative XRD analysis was conducted. Figure 4 shows the relative volume fraction of the γ, α 2 , B2, and (TiAl)H x phases in the unhydrogenated and hydrogenated alloys, which are calculated based on the integration area of the diffraction peaks. The contents of the γ, α 2 , and B2 phases in the unhydrogenated alloy were 77.54%, 20.89%, and 1.57%, respectively, while the contents of the γ, α 2 , and B2 phase in the hydrogenated alloy were 69.31%, 26.18%, and 2.85%, respectively. In general, hydrogen treatment can reduce the content of the γ phase because adding H can effectively promote the diffusion of elements and distort the γ phase lattice, thus promoting the γ → α 2 phase transformation [17]. In addition, the content of the B2 phase in the hydrogenated alloy was also slightly increased, indicating that hydrogen can stabilize the B2 phase and promote its precipitation.
Coatings 2020, 10, x FOR PEER REVIEW 5 of 17 = 0.326 nm, and c/a = 0.721 [16]. The hydrogenation treatment of TiAl-based alloys was achieved by the diffusion of hydrogen atoms. In the diffusion process, hydrogen was first decomposed into hydrogen atoms and bumped into the surface of the samples. Due to a large number of defects and higher energy in the grain boundary or phase boundary, a channel was provided for the diffusion of hydrogen atoms. Therefore, hydrogen atoms preferentially diffused in a short range along the grain boundary or phase boundary, so the hydrogen concentration at the grain boundary or phase boundary reached saturation in a short time. Therefore, the concentration of hydrogen atoms at the grain boundary or phase boundary was relatively high, which could easily meet the requirements of composition fluctuation and energy fluctuation for hydride nucleation. When the hydrogen content exceeded its saturated solid solubility, the hydrogen combined with titanium aluminum to form titanium aluminum hydride. In order to quantitatively study the content of each phase in the unhydrogenated and hydrogenated alloys, a quantitative XRD analysis was conducted. Figure 4 shows the relative volume fraction of the γ, α2, B2, and (TiAl)Hx phases in the unhydrogenated and hydrogenated alloys, which are calculated based on the integration area of the diffraction peaks. The contents of the γ, α2, and B2 phases in the unhydrogenated alloy were 77.54%, 20.89%, and 1.57%, respectively, while the contents of the γ, α2, and B2 phase in the hydrogenated alloy were 69.31%, 26.18%, and 2.85%, respectively. In general, hydrogen treatment can reduce the content of the γ phase because adding H can effectively promote the diffusion of elements and distort the γ phase lattice, thus promoting the γ → α2 phase transformation [17]. In addition, the content of the B2 phase in the hydrogenated alloy was also slightly increased, indicating that hydrogen can stabilize the B2 phase and promote its precipitation. Figure 5 shows the tensile deformed specimens and true stress-true strain curves of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloys. Figure 5a shows the deformed specimens. Deformed at 1150 • C, all samples underwent a certain plastic deformation. Under the same deformation condition, the plastic deformation degree of the unhydrogenated alloy was greater than that of the hydrogenated alloy. Figure 5b-d show the true stress-true strain curves of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloy samples deformed at 1150 • C, with strain rates of 0.0004, 0.001 and 0.0025 s −1 , respectively. Works have reported that if the stress dramatically drops after peak stress with increasing strain, the stress-strain curve is related to dynamic recrystallization [18]. Accordingly, the stress-strain curves shown in Figure 5b-d are supposed to be related to dynamic recrystallization. In the early stage of deformation, dislocation movement was gradually obstructed by increasing the dislocation propagation, resulting in a dislocation pileup, which increased the dislocation density and formed dislocation tangles. Meanwhile, a great stress concentration would be generated at the junction of lamellar colonies. This dislocation tangle and stress concentration would increase the flow stress and lead to work hardening, and macroscopically, the stress increased rapidly with the increase in strain until the stress reached its peak [19]. Subsequently, the stress decreased with an increase in the strain, which was mainly attributed to the increase of dynamic recrystallization. Dynamic recrystallization softening and the work hardening phase offset each other. When the softening effect of dynamic recrystallization was greater than the hardening effect of hot working, the strain tended to decrease significantly with the increase of the strain.
True Stress-True Strain Curves and Their Characteristics
Coatings 2020, 10, x FOR PEER REVIEW 6 of 17 Figure 5 shows the tensile deformed specimens and true stress-true strain curves of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloys. Figure 5a shows the deformed specimens. Deformed at 1150 °C, all samples underwent a certain plastic deformation. Under the same deformation condition, the plastic deformation degree of the unhydrogenated alloy was greater than that of the hydrogenated alloy. Figure 5b-d show the true stress-true strain curves of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloy samples deformed at 1150 °C, with strain rates of 0.0004, 0.001 and 0.0025 s -1 , respectively. Works have reported that if the stress dramatically drops after peak stress with increasing strain, the stress-strain curve is related to dynamic recrystallization [18]. Accordingly, the stress-strain curves shown in Figure 5b-d are supposed to be related to dynamic recrystallization. In the early stage of deformation, dislocation movement was gradually obstructed by increasing the dislocation propagation, resulting in a dislocation pileup, which increased the dislocation density and formed dislocation tangles. Meanwhile, a great stress concentration would be generated at the junction of lamellar colonies. This dislocation tangle and stress concentration would increase the flow stress and lead to work hardening, and macroscopically, the stress increased rapidly with the increase in strain until the stress reached its peak [19]. Subsequently, the stress decreased with an increase in the strain, which was mainly attributed to the increase of dynamic recrystallization. Dynamic recrystallization softening and the work hardening phase offset each other. When the softening effect of dynamic recrystallization was greater than the hardening effect of hot working, the strain tended to decrease significantly with the increase of the strain. In addition, under the same deformation condition, the stress level of the hydrogenated alloy was lower than that of the unhydrogenated alloy, and the peak strain (the strain corresponded by the peak Coatings 2020, 10, 52 7 of 16 stress) of the hydrogenated alloy was lower than that of the unhydrogenated alloy. The smaller the peak strain, the sooner the dynamic recrystallization occurred [20]. The effects of hydrogen on the peak stress and elongation of the Ti-45Al-9Nb alloy at different strain rates are discussed in detail below. Figure 6 shows the peak stress and elongation of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloys deformed in a strain rate range 0.0004-0.0025 s −1 , with a temperature of 1150 • C. As can be seen from Figure 6a,b, the peak stresses of the hydrogenated alloy and hydrogen decreased with the decrease in the strain rate, and the decrease trend tended to become flatter with the decrease in strain rate. At the same strain rate, the stress level of the hydrogenated alloy was lower than that of the unhydrogenated alloy, and the decrease rate of the peak stress increased with the decrease in the strain rate. Obviously, the addition of hydrogen caused flow softening, and the effect of hydrogen-induced softening was more obvious when the strain rate was lower. When deformed at 0.0004 s −1 , the decrease rate of the peak stress was more obvious, which was (16.28 ± 0.17)% lower than that of the unhydrogenated alloy. The softening mechanism of the hydrogenated alloy mainly includes dynamic recovery and dynamic recrystallization. In the plastic deformation of TiAl-based alloys, dislocation slip and climbing usually occur [21]. When the deformation temperature is constant, dislocation glide and climb gain more time with the decrease in strain rate, which allows the dynamic recrystallization nucleation to take place more easily to some extent. Therefore, the peak stress of both the hydrogenated and hydrogenated alloys decreased with the decrease in strain rate.
Effect of Strain Rate on Flow Stress and Elongation
In addition, from the perspective of dislocation velocity and critical shear stress, the increase in strain rate would increase the dislocation movement and further increase the critical shear stress of the dislocation movement. Their relationship can be expressed as follows [22]: where v is the velocity dislocation movement; v 0 is the sound's propagation speed in titanium aluminum alloy; C is the material constant; T is the absolute temperature; and τ is the critical shear stress of dislocation movement.
According to Equation (1), under the condition of constant deformation temperature, the increase of v inevitably leads to the increase of τ, that is, the flow stress increases. Figure 6c,d show the reduction rate of the elongation of the hydrogenated alloy when the temperature was 1150 • C and the strain rate was 0.0004-0.0025 s −1 . As can be seen from Figure 6c,d, the elongation of the alloy decreased with the increase in the strain rate. At the same deformation temperature, the elongation of the hydrogenated alloy was lower than that of the unhydrogenated alloy, which decreased by (26.05 ± 0.45)% (0.0004 s −1 ), (23.49 ± 0.38)% (0.001 s −1 ), and (14.23 ± 0.19)% (0.0025 s −1 ), respectively, indicating that 0.8 at.% H reduced the high-temperature plasticity of Ti-45Al-9Nb under this deformation condition. This might be due to the fact that, in the hydrogenated alloy, hydrogen did not always exist in the alloy in the form of a solid solution, and that some hydrogen combined with alloy atoms to form hydride (TiAl)H x . The results indicated that the hydride itself was a brittle phase, and that it could easily become a crack source and promote the generation of cracks [23,24]. Therefore, the plasticity of the hydrogenated alloy was reduced due to the existence of hydride (TiAl)H x . In addition, as shown in Figure 6d, the reduction rate of hydrogenated elongation became smaller and smaller with the increase in the strain rate. This was mainly because with the increase in strain rate, the deformation time of the unhydrogenated alloy became shorter and shorter, and the dynamic recrystallization grain nucleation and growth were less likely to take place. Meanwhile, when the layer of lamellar colonies inside the plastic deformation was small, the pressure under the action of relative rotation took place between lamellar colonies, which easily caused the stress concentration in the process of the rotation of the lamellar colonies, resulting in too early deformation and instability of Coatings 2020, 10, 52 8 of 16 the alloy. Therefore, the hydrogenated alloy showed a significant reduction trend of elongation in the macroscopic view.
Coatings 2020, 10, x FOR PEER REVIEW 8 of 17 caused the stress concentration in the process of the rotation of the lamellar colonies, resulting in too early deformation and instability of the alloy. Therefore, the hydrogenated alloy showed a significant reduction trend of elongation in the macroscopic view.
Constitutive Equation at High Temperature
A large number of works have indicated that the high-temperature deformation process of metal materials such as steel and iron materials, aluminum alloy, titanium alloy and TiAl-based alloy is through the thermal activation process. High-temperature flow behavior is controlled by deformation temperature and strain rate, and there is a constitutive relationship between flow stress σ, deformation temperature T, and strain rate ε (i.e., hyperbolic sine function), as shown in Equation (2) [25,26]: The relationship among σ, ε , and T at a low stress (ασ < 0.8) is expressed as an exponential function: The relationship among σ, ε , and T at a high stress (ασ > 0.8) is expressed as an exponential function:
Constitutive Equation at High Temperature
A large number of works have indicated that the high-temperature deformation process of metal materials such as steel and iron materials, aluminum alloy, titanium alloy and TiAl-based alloy is through the thermal activation process. High-temperature flow behavior is controlled by deformation temperature and strain rate, and there is a constitutive relationship between flow stress σ, deformation temperature T, and strain rate . ε (i.e., hyperbolic sine function), as shown in Equation (2) [25,26]: The relationship among σ, . ε, and T at a low stress (ασ < 0.8) is expressed as an exponential function: The relationship among σ, . ε, and T at a high stress (ασ > 0.8) is expressed as an exponential function: where n and n 1 are stress exponents, A, A 1 , A 2 ; α and β are the material constants, among which α, β, and n 1 satisfies the relationship α = β/n 1 ; R is the gas constant; Q is the thermal deformation activation energy (KJ/mol); and T is the absolute temperature (K).
Assuming that deformation activation energy Q is independent of the deformation temperature T, and natural logarithms of both sides of the above three equations can be obtained as follows: The partial derivatives of both sides of Equations (2)-(4) were obtained, and then n 1 , β, and n can be expressed as follows: The partial derivatives of both sides of Equations (5)- (7) were obtained, and then substituted into Equations (8)- (10) to calculate the expressions of n 1 , β, and n, and the activation energy Q can be calculated.
All stress conditions: Low stress: High stress: The hyperbolic sine function (see Equation (2)) is more suitable to express the relationship between the peak stress and strain rate of the unhydrogenated and hydrogenated TiAl-based alloys [27]. According to the above-mentioned equations and experimental data, the relationships between the peak stress and strain rate of the unhydrogenated and hydrogenated Ti-45Al-9Nb alloys can be built. Figure 7 shows the relationships between σ p and . ε of the unhydrogenated and hydrogenated alloys. According to the curve fitting results, unary linear regression was carried out, the constant n 1 and β of the hydrogenated and unhydrogenated alloys could be obtained, where their n 1 were 4.55 ± 0.17 and 4.17 ± 0.13, respectively, and their β were 0.04 ± 0.02 and 0.05 ± 0.02, respectively. Later, the value of the α of the hydrogenated and unhydrogenated alloys can be calculated by the equation α = β/n 1 as 0.009 ± 0.001 and 0.011 ± 0.001, respectively.
value of the α of the hydrogenated and unhydrogenated alloys can be calculated by the equation α = β/n1 as 0.009 ± 0.001 and 0.011 ± 0.001, respectively. α can be used to obtain the relation curve of ln ε -ln[sinh(ασp)], as shown in Figure 7c. According to Equation (8), the value of n can be calculated. The n values of the hydrogenated and unhydrogenated alloys were 3.37 ± 0.12 and 3.15 ± 0.11, respectively. Finally, the deformation activation energy Q can be calculated from the relationship between temperature T and strain rate ε of the unhydrogenated and hydrogenated alloys, as shown in Table 1. The Q values of the hydrogenated and unhydrogenated alloys were (584.31 ± 5.34) KJ/mol and (556.95 ± 4.15) KJ/mol, respectively. α can be used to obtain the relation curve of ln . ε-ln[sinh(ασ p )], as shown in Figure 7c. According to Equation (8), the value of n can be calculated. The n values of the hydrogenated and unhydrogenated alloys were 3.37 ± 0.12 and 3.15 ± 0.11, respectively. Finally, the deformation activation energy Q can be calculated from the relationship between temperature T and strain rate . ε of the unhydrogenated and hydrogenated alloys, as shown in Table 1. The Q values of the hydrogenated and unhydrogenated alloys were (584.31 ± 5.34) KJ/mol and (556.95 ± 4.15) KJ/mol, respectively.
The above analysis indicates that for the Ti-45Al-9Nb alloy, the stress and deformation condition satisfied a hyperbolic sinusoidal relationship at high-temperature, and the deformation activation energy was significantly higher than the energy required for self-diffusion (290-345 KJ/mol) [28]. Therefore, the main softening mechanism for the unhydrogenated and hydrogenated alloys was dynamic recrystallization. Hydrogen decreased the deformation activation energy, which was mainly attributed to the fact that hydrogen could reduce the diffusion barrier, increase the diffusion coefficient of atoms, and coordinate deformation during thermal deformation. At the same time, hydrogen could promote dislocation movement, which led to a decrease in the thermal deformation activation energy. Figure 8 shows the SEM images of the unhydrogenated and hydrogenated alloys deformed at 1150 • C and in the strain rate range of 0.0004-0025 s −1 . With the decrease in the strain rate, the deformation and the extent of dynamic recrystallization increased gradually. This is because the alloying element diffused sufficiently at a lower strain rate, which facilitated the decomposition of α 2 and γ phase lamellae. Therefore, the dynamic recrystallization took place more sufficiently. The decomposition of lamellae can also be regarded as a special dynamic recrystallization (DRX) process. It also includes nucleation (lamellar fragmentation) and growth (lamellar spheroidization) [29]. When the strain rate was 0.0025 s −1 , some lamellar colonies of the unhydrogenated and hydrogenated alloys were bent to a certain extent, showing a wavy shape. The orientation of the bent and deformed α 2 /γ lamellar was a medium orientation, that is, the lamellar direction was perpendicular to the tensile direction. A small amount of block-shaped α 2 phase was formed at the grain boundary of the lamellar colonies in the unhydrogenated alloy. However, the hydrogenated alloy had fractured when the deformation was very small, thus it was difficult to observe the fracture or dynamic recrystallization structure. When the strain rate decreased to 0.001 s −1 , there were not only bending lamellar, but also a certain amount of lamellar fragmentation and spheroidized structures as well as fine dynamic recrystallization grains in the unhydrogenated alloy. It can be seen from Figure 8b,e that some lamellar colonies in the unhydrogenated alloy increased in spacing along the direction of stress, that is, the lamellar was obviously coarsened and accompanied by lamellar bending. The γ phase recrystallization grains occurred in the α 2 /γ lamellar interface. As the number of α 2 -phase slip systems was less than that of the γ phase, uncoordinated deformation and stress concentration easily occurred in the lamellar interface, which caused the lamellar interface to have local small deformation and provided driving energy for the recrystallization nucleation [30]. However, only a few lamellae were coarsened and the recrystallization grains were relatively less. When the strain rate dropped to 0.0004 s −1 , there were many bent and elongated lamellae. The spacing of the lamellar colonies was larger, the fragmentation of the lamellar colonies was enhanced, and the number and size of the recrystallization grains at the interface of the lamellar colonies increased obviously. In comparison, there were more residual lamellae and the extent of recrystallization was lower than that of the unhydrogenated alloy. According to the above analysis, dynamic recrystallization was very sensitive to the strain rate, and the smaller the strain rate, the more obvious the dynamic recrystallization. Under the same deformation condition, the deformation extent of the hydrogenated alloy was less than that of the unhydrogenated alloy, which was consistent with the elongation decrease in the hydrogenated alloy above-mentioned.
Microstructure Evolution
Through a comparison with previous work, it was found that the influence of hydrogen on the mechanical properties of the alloy was different during the high-temperature plastic deformation of the hydrogenated TiAl alloys. When the specimen was subject to tensile stress, hydrogen deteriorated the elongation of the Ti-45Al-9Nb alloy at high temperature. The main reason is that there are two forms of hydrogen in the alloys. On one hand, hydrogen atoms are dissolved in the lattice interstices. On the other hand, hydrogen atoms combine with alloy atoms to form hydride (i.e., (TiAl)H x ). After hydrogen entered the lattice sites, it weakens the binding force between the Ti and Al atoms, and reduces the binding energy [31]. The aggregation of hydrogen atoms along the grain boundary or phase boundary decreases the driving force for dislocation emission and movement, which leads to local plastic deformation and reduces the toughness. Meanwhile, the hydride gathers along the grain boundary, which is a source of cracking. The tensile stress accelerates the emergence and propagation of the grain boundary cracks, and finally results in deformation instability. Therefore, the elongation of the hydrogenated alloy was less than that of the unhydrogenated alloy. For most hydrogenated TiAl alloys using the solid hydrogenation technique, there is no hydride when compressive tests are conducted at high temperature [13][14][15]. When such hydrogenated specimens are compressed at high temperature, hydrogen can improve the plastic deformability, which is mainly due to hydrogen-induced dislocation movement, hydrogen-promoted dynamic recrystallization and twinning, and hydrogen-increased β phase content. Through a comparison with previous work, it was found that the influence of hydrogen on the mechanical properties of the alloy was different during the high-temperature plastic deformation of the hydrogenated TiAl alloys. When the specimen was subject to tensile stress, hydrogen deteriorated the elongation of the Ti-45Al-9Nb alloy at high temperature. The main reason is that there are two forms of hydrogen in the alloys. On one hand, hydrogen atoms are dissolved in the lattice interstices. On the other hand, hydrogen atoms combine with alloy atoms to form hydride (i.e., (TiAl)Hx). After hydrogen entered the lattice sites, it weakens the binding force between the Ti and Al atoms, and In order to study the mechanism of crack generation and propagation in hydrogen-containing alloys, we observed the cracks generated by the alloy under different deformation conditions and found that the internal crack propagation mechanism of the alloy was similar under this test condition. Therefore, crack generation and propagation under the deformation condition of 1150 • C/0.001 s −1 was chosen as the main research object. Figure 9 shows the cracking modes of the hydrogenated alloy deformed at 1150 • C/0.001 s −1 . The cracks mainly occurred in the internal lamellar colonies or at the boundary of the lamellar colonies. Among them, the internal cracks in the lamellar colonies were mainly generated at the α 2 /γ lamellar boundary, which could be divided into inter-lamellar and trans-lamellar cracks, according to the different propagation techniques of cracks. These two kinds of cracks were mostly wedge cracks, which were mainly generated at the α 2 /γ lamellar boundary and were caused by the deformation disharmony between the α 2 and γ lamellae. After crack nucleation was completed along the flat α 2 /γ lamellar boundary, the crack continued to propagate along the α 2 /γ lamellar boundary until reaching the lamellar colony boundaries, as shown in Figure 9a. Trans-lamellar cracks nucleated in the crooked α 2 /γ lamellar boundary, and the crooked α 2 /γ phase boundary retarded the propagation of cracks to some extent. These types of cracks were usually accompanied by bridging structures, as shown in Figure 9b. The cracks along the lamellar colony boundary were mainly generated in the α 2 /γ lamellar colony boundary or the γ/γ lamellar colony boundary. Such cracks in the hydrogenated alloy were more than that in the unhydrogenated alloy. The accumulation of solid dissolved hydrogen and precipitated hydride at the grain boundary caused the stress concentration at the grain boundary, reduced the binding force at the boundary, and weakened lamellar colony boundaries, thus leading to more along-lamellar colony boundary cracks [32], as shown in Figure 9c. The propagation of such cracks was different from that of the inter-lamellar cracks. The along-lamellar colony boundary cracks formed a cavity after completing nucleation. Due to the occurrence of deformation in the alloy, crystal defects such as vacancies continuously increased, and in order to reduce the surface energy, these vacancies gathered along a certain direction under external stress, which made the cavity gradually grow and form a series of "cavity beads" along the lamellar colony boundaries. The cavities mutually combined to complete the crack propagation. The crack shown in Figure 9d is a combination of the above three types of cracks. Figure 10 is a schematic diagram of inter-lamellar, trans-lamellar, and along-lamellar colony boundary cracking propagation of the hydrogenated alloy.
Conclusions
• Through hydrogen treatment, the phase content of the α2 and B2 phases in the hydrogenated alloy increased when compared with that of the unhydrogenated alloy. In the hydrogenated alloy, hydride (TiAl)Hx was observed, which led to cracks.
•
Compared with the unhydrogenated alloy, the flow stress of the hydrogen alloy was significantly reduced (i.e., hydrogen-induced softening). Hydrogen-induced softening was
Conclusions
• Through hydrogen treatment, the phase content of the α 2 and B2 phases in the hydrogenated alloy increased when compared with that of the unhydrogenated alloy. In the hydrogenated alloy, hydride (TiAl)H x was observed, which led to cracks.
•
Compared with the unhydrogenated alloy, the flow stress of the hydrogen alloy was significantly reduced (i.e., hydrogen-induced softening). Hydrogen-induced softening was enhanced with a decrease in the strain rate. Deformed at 1150 • C/0.0004 s −1 , the peak stress was decreased by (16.28 ± 0.17)% due to hydrogen addition. The elongation of the hydrogenated alloys was decreased by (26.05 ± 0.45)% (0.0004 s −1 ), (23.49 ± 0.38)% (0.001 s −1 ), and (14.23 ± 0.19)% (0.0025 s −1 ), indicating that the addition of 0.8 at.% H reduced the high-temperature plasticity of Ti-45Al-9Nb alloy. In addition, the deformation activation energy of the hydrogenated alloy was lower than that of the unhydrogenated alloy. • Under the same deformation condition, the deformation extent of the hydrogenated alloy was less than that of the unhydrogenated alloy. Accordingly, more residual lamellae and the lower extent of recrystallization were observed in the hydrogenated alloy. In addition, there were three types of cracks in the hydrogenated alloy (i.e., inter-lamellar, trans-lamellar, and along-lamellar colony boundary cracks). Furthermore, more along-lamellar colony boundary cracks occurred in the hydrogenated alloy. | 9,643.2 | 2020-01-07T00:00:00.000 | [
"Materials Science"
] |
THE HELMINTH PARASITOFAUNA OF BUFO REGULARIS (REUSS) IN AWKA, ANAMBRA STATE, NIGERIA
- The term "toad" tends to refer to the "True Toads".... which are members of the family Bufonidae , containing more than 300 species. One hundred specimens of Bufo regularis (67 males and 33 females) were collected between June 2006 and August 2006 in Awka metropolis of Anambra State of Nigeria and examined for helminth parasites or for non-protozoan gut and tissue parasites. Seventy one percent (71%) (48 males and 23 females) of the specimens were infected by five hundred and forty-three (543) parasitic helminthes made up of 475(89%) nematodes, 6(2%) pentastomids and 62(14%) trematodes. These seven species collected include Nematoda: Ascaridoid larva (12%), Rhabdias bufonis (30%), Camallanus sp.(10%), Amplicaecum africanum (31%), Ascaridoid(6%); Trematoda: Messocoelium monodi (14%); Pentastomida: Raillietiella sp.(6%). Amplicaecum africanum was most prevalent in males with 24% than in females 7%. Also Rhabdias bufonis was most prevalent in males with 19% than in females 11% and the differences were statistically significant. also varied with length and weight. Male toads in the length classes of 11.0-11.9cm and 12.0-12.9cm had the highest prevalence of 100% while those in 7.0-7.9cm length class had the least prevalence of 60%. Females in the 10.0-10.9cm length class had the highest prevalence of 81.82% while those in 9.0-9.9cm length class had the least prevalence of 50% (P<0.05). Males in 101-120g weight class had the highest prevalence of 100% while those in the 61-80g weight class had the least prevalence of 63.64%. Females in 141-160g weight class had the highest prevalence of 100%while those in the weight classes of 41-60g, 61-80g and 81-100g had the least prevalence of 75% and the differences were statistically significant.(P<0.05). All the helminths exhibited site preferences except one nematode, Amplicaecum africanum , recovered from rectum, intestine and stomach of both male and female toads. Parasite abundance was variable from one toad size class to another. It appeared that there was a general tendency for the prevalences to increase with increase in size of the host. s p and Amplicaecum africanum were found in the intestine of both male and female toads. Amplicaecum africanum was found in the rectum of both male and female toads. Rhabdias bufonis and Mesocoelium monodi were found in the lungs of both sexes. The differences were statistically significant.
Introduction
Toads are fat-bodied, have warts, can live in drier climates, where most frogs usually live in or near water. In particular, this toad has been recorded from the following countries: Senegal, Gambia, Guinea Bissau, Sierra Leone, Liberia, Guinea, Mali, Burkina Faso, Ivory Coast, Ghana, Benin, Niger, Nigeria, Cameroon, Equatorial Guinea, Gabon, Angola, Congo, R.D. Congo, Chad, Central African Republic, Algeria, Libya, Egypt, Sudan, Ethiopia, Eritrea, Uganda, Rwanda, Kenya. But they are most abundant in the tropical regions. This species is widespread in savanna regions south of the Sahara [1]. According to [13], typical B. regularis are found in a region stretching from Senegal through West Africa to Central Africa and through North Africa to Egypt. Amphibian parasitism has been used as a model for understanding very important issues pertaining to the evolution of parasites and their hosts, life cycles, host-parasite relationships, etc. Toads are less common inhabitants of water than frogs and thus are less exposed to infection by larval trematodes. Helminth parasites may result from factors such as dirty environment and poor quality of food taken. The study of parasites of B. regularis and other anurans have been undertaken by few parasitologists. The number of parasites necessary to cause harm to B. regularis varies considerably with the size of the host and health status [4]. The helminth parasites of anurans from the five locations in the savannah-mosaic and one in the transitional vegetation zone of Edo State of Nigeria were investigated [5] and reported a work done with a total of 200 adult male toads (B. regularis) captured from Assiul locality (Upper Egypt), the toads were examined for the testicular infection with Myxobulus sp. They discovered that ten toads showed various degree of infectivity detected by means of light and electron microscopy; [11] reported that arthropods and predominantly ants and termites as prey; [12] mainly found ants in dissected specimens [14] reports on a diet comprising ants, beetles, termites, spiders, orthopterans, butterflies and flies, with variations of the diet spectrum depending on the respective seasons and altitudes. If the weather happens to be rather wet, the proportion of termites increases considerably. According to [10], this species has specialized on ants.
CLASSIFICATION
Toads are classified in the phylum -Chordata, subphylum -Vertebrata, class -Amphibia, order -Anura. The genus that includes more than 300 species is Bufo of the family Bufonidae. Besides Bufo, the family includes 25 genera.
MATERIALS AND METHODS
Host sex was determined by observation of reproductive organs. All parasitic helminths were preserved, stained (when necessary), and mounted using standard techniques.
Terminology used in the explanation and for identification of parameters
Use of ecological terms follows [10]. Parameters include prevalence, intensity, mean intensity, mean abundance From July 2006 to August 2006, 100 specimens of Bufo regularis (33 females and 67 males ) were collected at night by hand picking with gloves on from three (3)
The measurement
The freshly killed toads were weighed using an electronic weighing balance. The snout-vent length of the toads were measured using a thread and meter rule.
Determination of sex of Bufo regularis
The sexes were determined by two methods. The first was through the color of the throat. The second was through the reproductive system [13]. The throat of the male is dark or green whereas that of the female is white. Females have coiled oviducts which are absent in males.
Killing of the Bufo regularis and preservation
The toads were sacrificed in a sealed jar using chloroform. They were dissected by pinning them with office pins to hold them in a waxed bowl and pouring water to cover the body of the specimen. The bodies of the B. regularis after dissection were preserved in 4% formaldehyde solution.
Examination of B. regularis for parasites
The body cavity, digestive tract,intestine, heart, lungs, gall bladder, stomach, rectum, liver and kidneys were examined for parasites. Each organ was excised and placed in a separate petri-dish which contained normal saline (0.9%w/v) and thoroughly searched for helminth parasites, using magnifying lens and a dissecting microscope.
Collection of parasites and preservation
Helminth parasites were picked with dissecting needle (forcep) and washed in normal saline. Trematodes and nematodes were immediately fixed in AFA ( Alcoholformalin-Acetic Acid) solution / were kept in 70% ethyl alcohol, while the Penntastomides were killed with hot 70% alcohol heated approximately to 60°C and were preserved in total 70% alcohol or in AFA. All recovered parasites were confirmed by specialists/ parasitologists.
Food materials collected
A variety of insects and other invertebrates including snails, beetles, earthworms were collected from the oesophagus and preserved in 4% formaldehyde solution.
DISCUSSION
In the infested B.regularis examined, the occurrence and prevalence of parasites recovered were four hundred and seventy-five (475) nematodes (89%), 62 trematodes (14%), and 6 pentastomids(2%). The study showed a higher prevalence of nematodes followed by trematode and pentastomid in both male and female toads. The higher prevalence of parasites in male toads (71.64%) than in female toads(69.70%) is in line with [1] who reported that out of 59 toads examined, 52 were infested (33 0f 36 males and 19 0f 23 females). The result of this research as reported in table 1, showed that six parasites made up of five nematodes, one trematode and a pentastomid infested the B. regularis population at Awka. This finding is in many respect similar to the result of [12] who recovered five nematodes, one cestode and one trematode in their study of helminth paasitofauna of the same amphibians in Ile-Ife, Nigeria. It would seem that nematodes are the known prevalent parasites of B.regularis in Nigeria. From table 1 of this study, Amplicaecum africanum with 31% prevalence was the most prevalent species infesting B.regularis in Awka. It also had the highest prevalence of 24% in table 3 with mean intensity of 5.04. This is in accordance with the observation of [11], [12] to the effect that it is the main species found in rain forest and mangrove forest of South Western Nigeria. From the findings in the present study (table 1) Rhabdias bufonis which is a species of nematode had the highest prevalence of 11% and a mean intensity of infection 6.18. the species was found only the lungs of the toads. This finding is in keeping with the report of [13] who noted that R. bufonis are lung parasites of amphibians. The result of this study as reported in table 4, showed that Raillietiella sp.,was recoverd from the lungs of female B. regularis . this is line with the findings of [11], on the endoparasites of amphibians from South western Nigeria who recovered Raillietiella sp.,from the lungs of some B. regularis . It would therefore seem that Raillietiella sp, is a widely occurring parasite of B.regularis in southern Nigeria. The result of this research as reported in tables 3 and 4, showed that A.africanum infested the stomach and intestine of both male and female toads. This is in line with [12] who recovered the same nematode from both stomach and intestine of B. regularis intestine in their study of helmith parasite fauna of B.regularis Amphibian in Ile-Ife,Nigeria; [14] reported on helminth parasites of anurans from the savannah mosaic zone of southern Nigeria. Polystoma prudhoei which was originally described from P.oxyrhynchus in B. regularis was a new host record for the parasite discovered. They were of the view that the larval Bioinfo Publications ascaridoid found in a number of the anuran host and the larvae of Abbreviata sp found in D. occipitalis are most probably parasites of snakes and other reptiles that use amphibians as transport hosts. Most of the parasite found in the study were being reported for the first time in Nigeria, but a number of them have a distribution in other African countries. According to [10], the prey of B. regularis often includes ants, beetles, bugs, insects, grubs, slugs, worms, and other invertebrates like other amphibians do. As tadpoles, they eat plants. Toads, as pets, will eat fruit or vegetables. But toads in the garden, as insect eaters, should be valued for their role in pest-control!
CONCLUSION AND RECOMMENDATION
This study has revealed that males of B.regularis are highly infested. Amplicaecum africanum is the most prevalent species of nematode infesting Bufo regularis in Awka. The absence of cestoda needs to investigated. Nematodes were the most prevalent followed by trematode and penastomid. Generally, the parasitofauna of Bufo regularis in Awka is similar to those of the parasite populations from the rain forest areas of Nigeria. This work comes in time to open the door for further research on amphibian parasitism. Amphibian parasitism has used as a model for understanding very important issues pertaining to the evolution of parasites and their hosts, life cycles, hostparasite relationships, etc. Toads are less common inhabitants of water than frogs and thus were lesser exposed to infection by larval trematodes. | 2,585.8 | 2011-12-30T00:00:00.000 | [
"Environmental Science",
"Biology",
"Medicine"
] |
Chronic kidney disease prediction using boosting techniques based on clinical parameters
Chronic kidney disease (CKD) has become a major global health crisis, causing millions of yearly deaths. Predicting the possibility of a person being affected by the disease will allow timely diagnosis and precautionary measures leading to preventive strategies for health. Machine learning techniques have been popularly applied in various disease diagnoses and predictions. Ensemble learning approaches have become useful for predicting many complex diseases. In this paper, we utilise the boosting method, one of the popular ensemble learnings, to achieve a higher prediction accuracy for CKD. Five boosting algorithms are employed: XGBoost, CatBoost, LightGBM, AdaBoost, and gradient boosting. We experimented with the CKD data set from the UCI machine learning repository. Various preprocessing steps are employed to achieve better prediction performance, along with suitable hyperparameter tuning and feature selection. We assessed the degree of importance of each feature in the dataset leading to CKD. The performance of each model was evaluated with accuracy, precision, recall, F1-score, Area under the curve-receiving operator characteristic (AUC-ROC), and runtime. AdaBoost was found to have the overall best performance among the five algorithms, scoring the highest in almost all the performance measures. It attained 100% and 98.47% accuracy for training and testing sets. This model also exhibited better precision, recall, and AUC-ROC curve performance.
Introduction
Chronic kidney disease (CKD) has become very common across races [1], resulting in millions of deaths worldwide annually [2].Proper diagnosis and timely treatment are major concerns in most developing countries.CKD mostly hits older people [3,4], and by 2050, the number of people aged 65 years and above is estimated to increase to 1.5 billion from 703 million in 2019, with a more than double growth rate [5].This will put a significant additional burden on healthcare services across the countries [6].
According to a study by the Center for Disease Control and Prevention, in 2017, approximately thirty million people in the U.S. alone were affected by CKD [7], which has been increased to 37 million in 2021 [8].Moreover, most people are not aware of being infected by CKD.Traditionally, doctors confirm the CKD for any patient based on some clinical tests such as estimating glomerular filtration rate (GFR) from a filtration marker (e.g., serum creatinine or cystatin C) or through a urine test, detecting the presence of albumin and/or protein [9][10][11].However, these tests may not always give accurate results, leading to the wrong diagnosis.
CKD can be mitigated to some extent if the possibility of it can be predicted beforehand for the suspected patients [12,13].This would allow healthcare professionals to deliver better services by embracing precautionary measures and early diagnosis and treatment.Machine learning algorithms have been popularly used in several disease diagnoses and predictions [14][15][16][17].For CKD prediction also, various such techniques have been explored [18][19][20][21][22]. Machine learning algorithms are powerful for analysing large and complex datasets and identifying patterns and relationships that may not be apparent to human experts.In the context of CKD prediction, machine learning has the potential to improve accuracy and reduce costs by identifying early signs of disease progression and predicting the risk of developing CKD in at-risk populations.
However, traditional machine learning techniques suffer from some crucial limitations, including [23,24]: • Overfitting, where the algorithm becomes too specialised to the training data and fails to generalise to new data.
• Large, high-quality datasets are needed to train and validate the algorithms, which can be challenging to obtain in some clinical settings.
• Training and evaluating machine learning algorithms may require considerable computational time and resources, especially for large datasets.
• High dependency on the quality and quantity of data available for training.If the data is incomplete, biased, or otherwise of poor quality, the resulting algorithm will be inaccurate or may not work at all.
• The machine learning algorithms can inadvertently incorporate biases present in the training data, leading to unfair or discriminatory outcomes.
Recently, ensemble learning techniques have shown great promise in improving the accuracy, robustness, and generalizability of predictive models, making them valuable in many fields, including healthcare, finance, marketing, social media analytics, etc.The ensemble learning approaches are gaining attention for disease prediction with higher accuracy [25][26][27][28][29][30][31].Among the ensemble learning techniques such as boosting, bagging, and stacking, boosting algorithms can reduce the training error (bias) and testing error (variance).
In this paper, we design a novel CKD prediction model using boosting algorithms.We aim to improve the performance of the disease prediction model over similar existing works.The contributions of this paper are summarised as follows.
• Exploratory data analysis is performed to transform the considered dataset for better experimental usability.
• Hyperparameter techniques, such as standardisation, normalisation, feature selection, and fine-tuning, are employed to achieve optimal results.
• The attribution of existing dataset features to disease prediction is assessed.
• Five boosting algorithms are individually applied to build the prediction model.
• The prediction performances of the five boosting algorithms are evaluated and compared.
• Our model achieved better accuracy and runtime than other machine learning-based CKD prediction models in method evaluation.
Related work
As mentioned above, machine learning has been extensively used for various disease diagnoses and predictions [17,32,33].To improve the performance of these models, several machine learning techniques are combined to extract the advantages of each of them.This ensemble approach has gained acceptance and popularity after successful implementations for the prediction, detection, diagnosis, and prognosis of different diseases, such as heart disease [34,35], breast cancer [36], skin disease [37], thyroid disease [38], myocardial infarction [39], Alzheimer's disease [40], etc.For CKD prediction, several prediction techniques and models have already been proposed [41].In the following, we briefly review some notable experiments for the diagnosis and prediction of CKD using ensemble learning techniques.For CKD prediction, Kumar et al. [42] proposed an ensemble learning approach that comprises a support vector machine (SVM), decision tree, C4.5 decision tree, particle swarm optimisation -multilayer perceptron (PSO-MLP), and artificial bee colony C4.5.The prediction process has two steps-i) in the first step, weak decision tree classifiers are obtained from C4.5, and ii) in the second step, the weak classifiers are combined with the weighted sum to get the final output from the classifier, attaining accuracy of 92.76%.Pal [43] developed a bagging ensemble method comprising a decision tree, SVM, and logistic regression to predict CKD.The best accuracy of 95.92% was achieved in the case of the decision tree.Hasan and Hasan [44] proposed an ensemble method for kidney disease diagnosis.They used adaptive boosting (AdaBoost), bootstrap aggregating, extra trees, gradient boosting, and random forest to build their prediction model.They performed tenfold cross-validation to validate the results.The highest accuracy of 99% was attained with adaptive boosting.For CKD detection, Wibawa et al. [45] developed an ensemble learning method that comprises three stages.In the first stage, base classifiers like Naive Bayes, SVM, and k nearest neighbour (kNN) were used.Correlation-based feature selection (CFS) was combined with the base classifiers mentioned above in the second stage.In the third stage, they used CFS with AdaBoost, achieving the highest accuracy of 98.01%.For CKD diagnosis, Jongbo et al. [1] built an ensemble learning model through bagging and random subspace based on three base classifiers-kNN, naïve Bayes, and decision tree.Data preprocessing was done to mitigate the missing value issue and data normalisation for scaling the independent variables within a certain range.The random subspace gained better performance than bagging in most performance measure metrics.It achieved an accuracy of 98.30% when combined with the decision tree method.To detect CKD, Ebiaredoh-Mienye et al. [46] combined the information-gain-based feature selection technique with the proposed cost-sensitive AdaBoost (C.S. AdaBoost), intending to save CKD screening time and cost.They trained the proposed C.S. AdaBoost with the reduced feature set, which attained a maximum accuracy of 99.8%.Emon et al. [47] used various boosting techniques to predict the risk of CKD progression among patients.The authors applied the principal component analysis (PCA) method to get the optimal feature set and attained the highest accuracy rate of 99.0% using random forest (R.F.).Ramaswamyreddy et al. [48] used wrapper methods along with bagging and boosting models to develop a CKD prediction model, attaining an accuracy of 99.0% with gradient boosting.However, the authors did not evaluate their model using other performance measure metrics.
Research methodology
This section briefly discusses the research steps followed and the ensemble learning techniques used in the experiment.
Research workflow
The workflow of the proposed work is shown in Fig 1 .We performed exploratory data analysis on the considered dataset for better quality assessment.In this phase, missing values are identified and replaced using data imputation methods.The interquartile range (IQR) method is used to detect outliers present in the dataset.Some other required libraries are executed to check the corrupt data, if any, in the dataset.Also, standardisation, normalisation, feature selection, and tuning are made during the prediction model development process using five boosting algorithms.The dataset was split into training (60%) and test (40%) subsets.The results are assessed through various performance metrics.
Boosting algorithms
Ensemble learning is a method that combines different traditional machine learning approaches to enhance the performance of the prediction model [49].Various ensemble learning approaches are proposed [50,51].Boosting algorithm is one of the effective approaches in the ensemble learning family.In the literature, several boosting algorithms can be found [52,53].In this experiment, specifically for CKD prediction, we considered the following five ensemble learning based boosting algorithms: XGBoost.XGBoost (eXtreme gradient boosting) works by combining different kinds of decision trees (weak learners) to calculate the similarity scores independently [54].It helps to overcome the problem of overfitting during the training phase by adapting the gradient descent and regularisation process.The mathematical formula for the XGBoost algorithm is shown in Eq 1.
where f θ (x) is XGBoost model with parameters θ,h m is the m th weak decision tree with parameters θ m , and γ m is the weight associated with m th tree.T denotes the number of decision trees, l denotes the loss function, and R jm is an indicator function that returns 1 if x is in region R jm , otherwise 0. CatBoost.CatBoost (categorical boosting) is faster than other boosting algorithms as it does not require the exploration of data preprocessing [55].It is used to deal with high cardinality categorical variables.For low cardinality variables, one-hot encoding techniques are used for conversion.The objective function for the CatBoost algorithm is defined using Eq 2.
Lðy
where y is the true label of the training set, f(x) is the predicted label, N is the number of training samples, l denotes the loss function, λ is the regularisation parameter used to penalise overfitting, P is the number of features and w is the weight associated with each feature of the dataset.
LightGBM.
LightGBM is an extension of a gradient boosting algorithm, capable of handling large datasets with less memory utilisation during the model evaluation process [56].Gradient-based one-side sampling method is used for splitting the data samples, reducing the number of features in sparse datasets during training.The objective function for the LightGBM algorithm is defined using Eq 3.
where θ is a set of model parameters, N is the number of training samples, l denotes the loss function, y i is the true label of i th sample, ŷl is the predicted label for the model, f j is the j th decision tree, T is the number of trees, and ω is the regularisation term.
AdaBoost.AdaBoost works by adjusting all the weights without prior knowledge of weak learners [57].The weakness of all the base learners is measured by the estimator's error rate while training the models.Decision tree stumps are widely used with the AdaBoost algorithm to solve classification and regression problems.The objective function for the AdaBoost algorithm is defined using Eq 4. where H(x i ) is the prediction of the classifier on the i th sample x i and y i is its corresponding true label in {-1, +1}and N denotes the number of training samples.Gradient boosting.In this method, the weak learners are trained sequentially, and all estimators are added one by one by adapting the weights [58].The gradient boosting algorithm focuses on predicting the residual errors of previous estimators and tries to minimise the difference between the predicted and actual values.The objective function for the gradient boosting algorithm is written using Eq 5.
where F is the ensemble model, n is the number of training examples, y i is the true label of the i th sample, l denotes the loss function, and Fmathbf(x i ) is the output of the ensemble model on example mathbf(x i ).
Dataset collection and manipulation
We used the CKD data set (https://archive.ics.uci.edu/ml/datasets/chronic_kidney_disease),publicly available at the UCI machine learning repository, for the experiment.The dataset was collected from Apollo Hospitals, Managiri, India.
Dataset description
The dataset contains 400 instances and 25 attributes.The first 24 attributes are predicate/independent, and the last one is a dependent/target attribute.Among the attributes, 11 are numeric, and 14 are categorical.The attributes are described in Table 1.It represents the information about considered attributes, the description of attributes, their measurements, and the range values.
Table 2 describes the attribute information with their measures like count of records, mean, standard deviation (std), minimum (min) value, and maximum (max) value.For example, the blood pressure (bp) attribute has a count value of 400, mean 76.175, std 13.769, min 50, and max 180, respectively.
Data preprocessing
We performed some preprocessing on the considered CKD dataset to make the dataset most usable.The purpose was to transform the available raw data into a format easily understood by the ensemble learning algorithms.We conducted the following steps as data preprocessing: Correlation coefficient analysis.To identify and plot the relationship among the dataset attributes, we used the correlation coefficient analysis (CCA) method.A strong association/ relationship between the set of independent and dependent attributes indicates a good-quality dataset.Data wrangling and cleaning.To clean the dataset, we identified the missing values using the isnull() method and then calculated the percentage of null values present in the dataset.We used the data imputation methods (mean, median, fill, and original) to replace the null values.The missing values were replaced using the column's mean, median, and mode.We used the IRQ method to detect the outliers and replace them using the Z-score method.The Zscore method shifts the distribution of all the data samples and makes the mean 0. Using data cleaning methods, we further checked for duplicate, inconsistent, and corrupt values in the dataset and neutralised them wherever applicable.Data standardisation and normalisation.We used the MinMaxScaler() for feature scaling.We scaled the data values using Eq 1 for standardisation and batch normalisation.Here, the data mean is set to 0 and the standard deviation to 6.
where N, X, x i , σ(x), x min , and x max denote the total sample in the data, i th attribute, the mean of the attributes, the sample variance of the attributes, the minimum value of the sample, and the maximum value of the sample, respectively.
Experiment, results, and discussion
In this section, we present the experimental details of this work and the obtained results by using the five boosting algorithms to predict CKD.We used 60% of the dataset to train the boosting algorithms and the rest 40% to test and validate their efficacy.The evaluations are extensively discussed in terms of accuracy, recall, precision, F1-score, micro-weighted, average-weighted, and AUC-ROC (area under curve-receiver operating characteristic) curve for each algorithm.
Hardware and software specifications
An HP Z60 workstation was used to carry out this research work.The hardware specification of the system is: Intel XEON 2.4 GHz CPU (12 core), 8 GB RAM, 1 T.B. hard disk, with Windows 10 pro-64-bit O.S. environment.As software requirements, we used the GUI-based Anaconda Navigator, the web-based computing platform Jupyter notebook, and Python as the programming language.
Feature importance
The feature importance is used to assess the contribution of an independent/predicate attribute that leads to CKD.Generally, not all attributes contribute to disease prediction.For instance, after running all five boosting algorithms on the original dataset, we found that the attributes-'ane', 'appet', 'ba', 'cad', 'pc', 'pcc', 'pe', 'su', and 'wc' have no role in CKD prediction.Hence, we eliminated these attributes from the dataset and kept only those that contributed at least for one algorithm, as shown in Fig 7.
We used the forward selection, a wrapper method, to calculate the feature importance [59].A higher F-score of a feature indicates more importance of an attribute.For example, in Fig 7, it can be seen that the haemoglobin (hemo) attribute has the highest contribution in the CKD prediction for all the algorithms.
Hyperparameter tuning
We used the grid search method for hypermeter tuning to achieve optimality in the proposed model's performance.By specifying a grid or a specified set of values for each hyperparameter, grid search enables methodically examining various combinations of hyperparameters.This ensures that all the options are tried to find the optimal values of the hyperparameters.The deterministic nature of grid search ensures consistency, i.e., it always yields the same outcomes when the same hyperparameters and data are used.This characteristic facilitates transparent testing and evaluation by making results simple to replicate and compare.One of the major advantages of grid search is that it is fairly straightforward to implement.Also, most of the machine learning frameworks and libraries provide built-in functions or modules for grid search.The best values of the hyperparameters found for each algorithm are shown in Table 3.The listed values for each parameter for the respective algorithm were found to be the best performers in our experiment.
Cross-validation scheme
Cross-validation is conducted to provide an unbiased evaluation of the prediction model.We performed the k-fold cross-validation to validate the performance of the proposed model on the training dataset.Here, we kept the value of k as 6.Based on the validation bias, the hyperparameters used in the experiment were tuned.
Performance evaluation
In this section, the performance of the proposed prediction model for the considered boosting algorithms is discussed in terms of different performance metrics.LightGBM, gradient boost, XGBoost, and CatBoost at 99.73%, 99.21%, 97.23%, and 96.97%, respectively on the training set, and 97.96%, 97.46%, 95.93%, and 96.44%, respectively on the test set.5.5.2Other measurements.In addition to accuracy, we calculated the precision, recall, F1-score, and support of the five boosting algorithms on the test set, as shown in Figs 10-13, respectively.In addition, the macro and weighted average were measured for both classes (0: no CKD, 1: CKD).As shown in those figures, AdaBoost produced the best precision in identifying the presence of CKD, while all algorithms identified the non-infection of CKD with equal precision.AdaBoost has a better recall and F1-score in confirming the absence of CKD.Regarding the case of support, i.e., the occurrence of class, AdaBoost performs slightly better than the other algorithms.
Comparative analysis
Table 4 presents a comparative analysis of the five boosting algorithms applied on the test dataset in terms of accuracy, misclassification rate, and runtimes.It can be observed that AdaBoost has the highest accuracy and least misclassification rate, but it has a slightly higher runtime than LightGBM and XGBoost.Since, in our experiment, we found AdaBoost to have the best overall performance in predicting CKD, we compared it with a few related research works in terms of accuracy, as shown in Table 5.The justification for achieving higher accuracy can be credited to the adopted procedures like data imputation for handling missing values, detection and replacing outliers, and effective data standardisation and normalisation.
Conclusion, limitations, and future directions
Diagnosis and prevention of chronic kidney disease have become challenging for healthcare professionals and other concerned authorities.It can be mitigated to some extent if it can be pre-diagnosed in well advance.In this paper, we attempted to predict CKD using an ensemble learning approach.Specifically, we used five boosting algorithms: XGBoost, CatBoost, LightGBM, AdaBoost, and gradient boosting.We employed different preprocessing techniques like the imputation method for handling missing values and min-max scalar and Zscore for data standardisation and normalisation.In addition, hyperparameter techniques like grid search were used to find the optimal parameter values.Furthermore, feature selection was carried out for each algorithm.AdaBoost emerged as the overall best performer in accuracy (99.17%), precision, recall, f1-score, and support in the experiment.AdaBoost also attained better results for AUC-ROC and misclassification rate.Comparing our proposed model with similar works, we found that our method outperformed others.Though the proposed model performed relatively well, it has some obvious limitations.The size of the considered dataset is small, which may limit the prediction model's performance in generic situations.It is observed that most of the features are having least contribution towards CKD.A more balanced dataset would lead to a better prediction model.
As an extension of this work, other ensemble learning techniques, like bagging, stacking, etc., can be explored to improve the results.Additionally, deep learning techniques can also be experimented with the exercised dataset.To validate the effectivity of the proposed model, additional and larger datasets are needed in future.Our proposed model can be applied to other disease datasets (e.g., diabetes) with common features.We expect more powerful disease prediction models to be developed and implemented in medical diagnosis and treatment.
Fig 1 .
Fig 1.The workflow of the proposed ensemble learning based CKD prediction.https://doi.org/10.1371/journal.pone.0295234.g001 a. Identify and replace duplicate values.b.Identify and replace missing values.c.Detect and replace the outliers.d.Convert categorical variables to numerical values using one-hot encoding.e. Perform data transformation (-1 to 1) and scaling (0 to 1).The results of the above steps are discussed below.Class balancing.The training dataset should be balanced of positive and negative instances to achieve reasonable prediction.From Fig 2(A), it can be observed that the considered dataset was highly biased toward the positive class, i.e., "patients having CKD" over the negative class, "patients not having CKD."To minimise this difference, we used SMOTE to balance the dataset.From Fig 2(B), it can be observed that the resultant dataset is fairly balanced.Exploratory data analysis.We used different data visualisation tools to visualise and analyse the distribution of the data samples.Fig 3 shows the normally distributed histograms that group all the attributes of the considered dataset within the range values.Here, the X-and Yaxes describe the input attributes, and their corresponding values, respectively.Fig 4 plots the probability density using the kernel density estimation (KDE) method.The X-and Y-axes denote each attribute's parameter value and probability density function, respectively.Fig 5 depicts the boxplot of all the considered attributes of the dataset.It provides a good indication of how the dispersion of values is spread out.To handle the outliers in the dataset, the IQR method was used.
Fig 6 presents the CCA of the dataset attributes used in the experiment.The relationship range lies between +1 to -1 along the X-and Y-axes.
5 . 5 . 1
Classification accuracy.The classification performances of the algorithms are evaluated using a confusion matrix.The confusion matrices of all five boosting algorithms applied on the test dataset are shown in Fig 8.The left upper and the right lower boxes denote the
Fig 8 .
Fig 8. Confusion matrices of the prediction performance on the test set for all the five boosting algorithms.https://doi.org/10.1371/journal.pone.0295234.g008
5 . 5 . 3
AUC-ROC curve.The AUC-ROC curve was used to show the prediction ability of the boosting algorithms at different thresholds.It represents a false-positive rate (FPR) vs. a true-positive rate (TPR) along the x-axis and y-axis.A larger AUC-ROC area suggests the model's ability to distinguish between 0's and 1's, leading to a better prediction.Also, an AUC value closer to 1 denotes a good separability measure, while in the case of an AUC of below 0.5, the model becomes ineffective in separating the classes, denoting the bad measure of disassociation.The AUC-ROC for the experiment is shown in Fig 14.It can be observed that Ada-Boost performs best while XGBoost performs worst.
Table 5 . Comparison of the proposed work with existing similar works. Research work Ensemble techniques adopted Dataset used Highest accuracy Precision Recall AUC/ ROC
https://doi.org/10.1371/journal.pone.0295234.t005 | 5,488 | 2023-12-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Metabolic response to three different diets in lean cats and cats predisposed to overweight
Background The existence of a genetic predisposition to obesity is commonly recognized in humans and rodents. Recently, a link between genetics and overweight was shown in cats. The goal of this study was to identify the effect of diet composition on plasma levels of glucose, insulin, free fatty acids and triglycerides in cats receiving different diets (high-carbohydrate, high-fat and high-protein diets). Results Insulin and leptin concentrations were significantly correlated with phenotype. Insulin levels were lower, whereas leptin levels were higher in cats predisposed to overweight. The other blood parameters were not correlated with phenotype. Intake of the high-carbohydrate diet resulted in higher insulin concentrations compared with the two other diets. Insulin levels were within the values described for non-obese cats in previous studies. Conclusions There was no difference in metabolic response between the two groups. As the high-carbohydrate diet led to the highest insulin blood concentrations, it might be useful to avoid such diets in cats predisposed to overweight. In addition, even cats with genetically linked obesity can regain insulin sensitivity after weight loss.
Background
Overweight in humans is one of the major health risks worldwide [1]. It is defined as abnormal or excessive accumulation of fat [1]. Pets, particularly dogs and cats, share the same environment as humans, in which they lack exercise and have unlimited access to a high-calorie diet [2], which are the reasons why overweight and obesity are among the major health problems in these species [3,4]. In humans overweight is a main risk factor for a number of secondary disorders, such as type 2 diabetes mellitus, cardiovascular diseases, orthopaedic problems, urogenital disorders, neoplasia and anaesthetic complications [5,6]. Similarly, overweight is a main risk factor for secondary disorders in cats. These include among others, diabetes mellitus, orthopaedic problems and anaesthetic complications [5]. The secondary disorders lead to reduced life span and quality of life [5,6]. Feline diabetes, which is associated with overweight, is very similar to human type 2 diabetes [2].
Risk factors for obesity in humans include the following: excessive intake of highly palatable and energy-rich food, a diet that does not meet all nutrient requirements and lack of physical activity [7]. Additionally, genetic background is an important risk factor for obesity in humans and laboratory animals as well as in cats [7][8][9].
Excessive body weight caused by excessive body fat content leads to reduced insulin sensitivity and can later lead to hyperglycaemia [10,11]. Both symptoms are warning signals for type 2 diabetes [11,12]. Reduced insulin sensitivity due to an increase in body fat is reversible with weight loss [3,13] in humans and cats. It is therefore important to prevent humans and cats from being overweight and to reduce existing excessive body weight. It is known that obese cats are insulin resistant, but weight loss normalizes this insulin resistance [12,14].
It is hypothesized that overweight is caused by several interacting genes and the environment [7]. The genetic background of obesity in cats has not been examined as thoroughly as in humans and mice [7,[15][16][17]. Häring et al. [9] found inheritance of obesity in an experimental cat population. In a later study, Wichert et al. [18] found that cats predisposed to overweight (po) show lower energy requirements and higher food intake even in ideal body condition. The authors identified one major gene model with a polygenic component linked to obesity. The analysis identified genomic regions associated with overweight [19]. In another study, a genetic analysis identified a missense mutation in the coding sequence of MC4R (MC4R:C.92 > T) related to diabetes mellitus in obese cats [20]. The same missense mutation is also involved in the development of human obesity and type 2 diabetes mellitus [21]. The cats used in the present study originated from the population that was phenotyped by Häring et al. [9].
Hoenig et al. also reported a decreased glucose effectiveness in obese cats [12].
The influence of macronutrient composition on energy metabolism and satiety is controversial in cats as well as in humans. Scarlett et al. [3] identified a high carbohydrate diet to be a risk factor for overweight in cats, as opposed to a high-fat canned diet. In contrast to this, Backus et al. [22] observed a negative correlation between carbohydrate content and body weight while a high-fat diet was a risk factor for overweight. It is also known that carbohydrate sources influence postprandial glucose and insulin levels [23]. In studies on the influence of nutrient components on metabolic reactions, different combinations of nutrient components were used and they are therefore not fully comparable. In other studies, protein rich diets caused higher metabolic rates than low protein diets [24] and conserved fat-free mass during weight loss [25,26] in overweight cats. There is also evidence in humans that high-protein diets cause higher weight loss and fat mass loss than high-fat or high carbohydrate diets [24,27]. The most important reasons for this beneficial effect are earlier and quicker satiety and lower energy intake after high-protein meals [27].
The goal of the present study was to identify whether a genetically caused predisposition to overweight influenced the plasma levels of glucose, insulin, leptin, free fatty acids and triglycerides in cats at ideal body condition when they were fed three different diets (high carbohydrate, high fat and high protein). It was hypothesized that an inherited predisposition to overweight influences plasma levels of these blood parameters in ideal body condition. If the influences of different genetic predispositions on plasma levels of these blood parameters were known, it might be easier to know which diets to feed in order to prevent obesity and the development of diabetes mellitus in cats.
Animals
Thirteen clinically healthy, intact adult (four to five years old) male European short-hair cats from the Institute's owned feline colony. The cats were divided into two groups (six po and seven lean (l) cats) based on classification by phenotype. The classification was determined by BCS [9] at the age of eight months. To reach ideal body condition, the cats of group po underwent a weight loss programme. During this weight loss programme, the cats were fed commercial canned food. At least four weeks before the beginning of the trial, they were fed to weight constancy. All cats had an ideal BCS of 5-5.5/9 [28] for at least 4 weeks before the beginning of the experimental trial. Body composition was measured using dual-energy X-ray absorptiometry (DXA) at the beginning and at the end of the study. Ethical approval for the experiments was obtained from the local Ethics Committee for Animal Experiments (Veterinaeramt des Kantons Zuerichs; licence number 83/2012).
Experimental design
Three non-commercial experimental diets were prepared: one with high carbohydrate content (HCH), one with high fat (HF), and another with high protein content (HP). The diets were fed in an order determined by a Latin square design. The diets contained beef, pork liver, lard and cooked white rice (only in HCH). The metabolizable energy (ME) and crude nutrient content are given in Table 1. The diets were composed according to adult feline requirements [29], with no nutrient deficiencies. Macro and trace elements as well as vitamins and taurine were added individually to each experimental diet.
All cats were fed for maintenance of body weight (BW). The leftovers were weighed after each meal, and food intake was adjusted to BW. After each feeding phase, the cats had a wash-out period of 14 days during which they were fed with adult canned food (dry matter (DM) 19%, crude protein 41% DM, crude fat 24% DM, crude fibre 2% DM, crude ash 2.5% DM). The cats were fed four times a day, at 8:30, 11:00, 13:30, and 16:00. On blood sampling days only, the meal at 11:00 was cancelled. The cats were fed separately for 15 min each. Each cat was weighed every morning before the first feeding. If its BW changed, the amount of food was adjusted in steps of 0.1 MJ ME per day. BCS was assessed at the beginning and the end of the feeding periods as well as between the feeding periods once a month.
After a fasting period of 16 h, a blood sample at time zero was taken. Then, the cats were fed and additional blood samples were taken as shown in the time table (Fig. 1). Blood samples were analysed for glucose, insulin, triglyceride, free fatty acid and leptin concentrations.
Each diet was homogenized and random samples were collected. The samples were analysed by proximate analysis (Table 1). The content of ME was estimated based on the results of the proximate analysis and the formulas of Kienzle et al. [31].
For blood collection, the cats were sedated with 10 mg/kg BW ketamine (Ketanarkon 100 ad. us. vet., Streuli Pharma AG, Uznach, Switzerland) and 0.2 mg/kg BW midazolam (Dormicum ®, Roche Pharma AG, Reinach, Switzerland) intramuscularly in a mixed syringe. A catheter (20 G) was used if more than one blood sample was taken from the vena cephalica. The sedation was maintained with propofol (Fresenius Kabi AG, Bad Homburg von der Höhe, Germany) as needed. It is known that this medication has only an insignificant effect on the measured blood parameters [32][33][34].
Blood glucose was analysed immediately after blood sampling using a portable glucometer (Ascensia Elite™, Bayer Corporation, Mishawaka, IN, USA) [35]. Blood samples were centrifuged and the plasma was stored at −80°C until further analysis. Plasma insulin levels were determined by an enzyme-linked immunosorbent assay (ELISA; Feline Insulin ELISA, Mercodia, Uppsala, Sweden). Strage et al. [36] validated this test kit for insulin measurement in cats and found intra-and inter-assay coefficients of variation of 2.0-4.2% and 7.6-14%, respectively. For the analysis of triglycerides (TRIG Diatools, Villmergen, Switzerland) and free fatty acids (NEFA-HR(2), Wako, Neuss, Germany) colourimetric measurements were performed with the help of a Cobas Mira® (Hoffmann-La Roche, Basel, Switzerland). Leptin content was measured using RIA (Multi-Species Leptin RIA Kit, Millipore, Missouri, USA). This test kit was developed to measure leptin in many species, and has been validated for use in cats [37]. The intra-and interassay coefficients of variation were 2.8-3.6% and 6.5-8.7%, respectively.
Insulin sensitivity was determined by homeostasis model assessment (HOMA) [38]. This measure is the product of insulin and glucose divided by 22.5. The HOMA index was developed to measure human insulin sensitivity and was validated for cats by Appleton et al. [39].
The results are presented as the mean ± standard error (SE). A multivariate analysis of variance (MANOVA) for repeated measurements was performed with group (lean or predisposed to overweight) as a cofactor included in the model. The impact of correlated factors was tested by linear regression analysis with help of SPSS® Statistics 20.0 (IBM Corporation, New York, United States). A ttest was performed to compare pairs of groups with help of Microsoft Office Excel 2013 (Microsoft Corporation, Redmond, WA, United States). Fig. 1 Experimental design. Two groups of cats cats, one lean (l, 7 cats) and one with predisposition to overweight (po, 6 cats). Three diets (highcarbohydrate (HCH), high-protein (HP) and high-fat (HF)), each fed for 7 days in an order determined by a Latin square. At day 8: blood sampling before (0) and at eight timepoints after (15,30,45, 60, 90, 120, 180 and 240 min) a meal of one of the three test diets. Fourteen-day washout period
Results
During the whole study, genetically lean cats had a mean BCS of 4.9 (± 0.01) and cats predisposed to overweight had a BCS of 5.1 (± 0.02). Mean BCS change during the whole study was 2.6%, and mean weight change during a single experimental period was 0.72% for all cats. The greatest change in weight was measured with diet HF (1.25%), and the smallest with diet HP (0.28%). At the beginning of the experiment, the cats of group po had a mean fat mass percentage of 8.5% (± 1.28) and a fatfree mass percentage (including bone) of 91.7% (± 2.06), while group l had a fat mass percentage of 6.0% (± 0.72) and a fat-free mass percentage of 91.5% (± 0.69). The BCS at this date was 5.24 (± 0.03) for group po and 4.99 (± 0.03) for group l. At the end of the experiment, group po had 4.0% (± 0.72) body fat mass percentage and 93.4% (± 0.69) fat-free mass percentage, while group l had a 5.2% (± 1.18) fat mass percentage and a 92.0% (± 1.17) fat-free mass percentage. At this date, the BCS was 5.00 (± 0.05) for group po and 4.97 (± 0.04) for group l.
For the HCH diet, the mean daily energy intake was 333.3 (SE 15.7) kJ ME per BW 0.67 in group po and 379.2 (SE 13.8) kJ ME per BW 0.67 in group l. For the HF diet, the mean daily intake was 352.7 (SE 14.6) kJ ME per BW 0.67 in group po and 363.9 (SE 10.9) kJ ME per BW 0.67 in group l. Finally, for the HP diet, the mean daily intake was 290.2 (SE 13.9) kJ ME per BW 0.67 in group po, and in group l it was 392.0 (SE 11.15) kJ ME per BW 0.67 .
The results of the regression analysis are given in Table 2. Between the two genetic phenotypes (l and po), no differences were detected in glucose or triglyceride concentrations in blood plasma, but insulin and leptin concentrations were significantly correlated with phenotype (p = 0.006, p = 0 0.01). Group l cats showed higher insulin concentrations than group po cats. Cats predisposed to overweight showed higher leptin concentrations than lean cats. Diet was significantly positively correlated with glucose, insulin, leptin and triglyceride levels (p < 0.05) but only weakly positively correlated with free fatty acids (p = 0.056). Time after feeding was significantly positively correlated with blood glucose, insulin and free fatty acid values (p < 0.002). Plasma concentrations increased after a meal. Food intake was positively correlated with leptin and triglycerides (p < 0.05). The detailed relationships of glucose, insulin, leptin and triglyceride levels with phenotype and diet are shown in Figs. 2, 3, 4, and 5.
Insulin sensitivity as determined by HOMA [38] showed no significant differences between the two groups (l and po). P-values were as follows for all diets: HCH (p = 0.18), HF (p = 0.18) and HP (p = 0.09).
In both groups feeding the HCH diet resulted in higher insulin concentrations than the other diets (Figs. 2, 3), although it was more pronounced in group l (p = 0.015) than in group po. On the HP diet, higher insulin concentrations were measured in group l than in group po (p < 0.0001) and on the HF diet, group l had a tendency toward higher concentrations than in group po (p = 0.066).
For all diets, the glucose plasma concentrations were not significantly different between the two groups (example after the HCH diet was fed - Fig. 4).
Plasma leptin concentrations are shown in Fig. 5. Cats in group po showed significantly higher leptin levels than cats in group l after an HF meal.
After intake of the HF diet, the highest increases in plasma triglyceride and free fatty acid values were seen 15 min after the meal. Intake of the HP diet led to significantly higher free fatty acid concentrations in group po than in group l (p = 0.007).
Discussion
The presented study is the first to analyse the metabolic response to different diets in healthy, normal-weight, sexually intact male cats with or without predisposition to overweight. The influence of neutering on body weight and metabolic response has been shown in several other studies [40][41][42][43]. In the present study we used intact male cats specifically to exclude the influence of neutering.
Although all cats had an ideal BCS between 5 and 5.5, BCS in group po was significantly higher than in group l. However, body fat content measured by DXA did not differ between the two groups. BCS is a subjective and semi-quantitative method [5,28], whereas DXA is an objective measurement. It can be speculated that the BCS of cats predisposed to overweight was overestimated because the abdominal fat (skin) pad, which leads to higher BCS, was misinterpreted and probably consisted of skin only. The measured body fat percentages were low in comparison to other studies with cats [44]. The machine's lower limit for body fat measurement was approximately 4% [18]. Measured body fat percentages depend on position and machine type [45]. The measured body fat percentages are comparable with other measured data in our cat population [18,46], which is highly important given the differences in the body fat content of the investigated cats in different phases of life. The concentrations of some blood parameters such as insulin, glucose and free fatty acids change during body weight loss or body weight gain over time due to physiological reactions during the intake of macronutrients [12,47]. Because of the four weeks maintenance of body weight before the experiment, the influence of food restriction on food intake and energy expenditure as well as metabolic response in the present study can be neglected. Wichert et al. [18] showed in an earlier study that energy requirements calculated per kg BW 0.67 were not significant different in the cats (l and po) used in the present study.
Interestingly, group l had higher insulin blood concentrations than group po, although no differences in body weight or body fat were measured. Because all cats showed normal glucose concentrations, it can be speculated that the maintenance of glucose concentrations is functional. One explanation for the relatively low insulin concentrations in group po in combination with normal glucose concentrations is a higher degree of insulin sensitivity, which means that less insulin is used for the control of normal glucose concentrations [48]. Another explanation could be the hepatic glucose production, which is still functioning [12]. However, insulin sensitivity as measured by HOMA [38] showed no significant differences between groups or diets during the whole experiment. The cats of the present study were measured earlier by Häring et al. [46], but the cats predisposed to overweight were overweight during the previous study (BCS >6) [39]. In that study, the cats of group po showed impaired insulin sensitivity in the glucose tolerance test, and the calculated HOMA indexes were higher than in the present study but nevertheless within the reference range [39]. From this observation, it can be speculated that if cats of the overweight phenotypic trait lose weight, their insulin sensitivity is regulated to a normal status and reaches the normal range again. This "normalizing" effect of weight loss on insulin sensitivity, already described in several publications on cats [12,13], is comparable to the reversal of insulin resistance by weight loss as described in humans [13]. The overweight-predisposed cats' insulin regulation in this study responded physiologically.
Another explanation for the relatively low insulin concentrations in group po could be lower insulin production by beta cells in the pancreas. Thus, glucose levels could be maintained by other compensatory mechanisms such as glucagon [49]. As glucagon was not measured in the present study, this question cannot be answered here. The higher the glucose load, the higher the insulin flow necessary to maintain the glucose levels within the normal range [50], but in our study there was no difference in food intake between the two groups. In the present study, at least a tendency to higher insulin concentrations in group l was observed. Calculating the food intake per kilogram body weight, group l had a significantly higher food intake of HCH and HP diet compared to group po. Therefore, this finding seems to explain best the higher insulin concentrations in group l. One reason for the lower energy requirement of the cats predisposed to overweight could be lower energy expenditure, which was assumed for these cats earlier by Wichert et al. [18]. In the present study, the cats were fed to maintain ideal body weight and to reach the requirement. For all nutrients, po cats have similar metabolic responses to l cats, except that they still have lower energy requirements. One of the major differences from other studies was the individual feeding regimen, as in other studies a determined amount of food was used for all cats. The different results could be explained by the different feeding system. The assumed phenotypic trait of the cats in the present study does not seem to affect triglyceride levels. In our study, the triglyceride concentrations are higher with the HF diet, which is consistent with the literature [47]. Thiess et al. [47] showed higher triglyceride levels and reduced insulin response due to a high-fat diet. In addition, Wei et al. [24] measured higher triglyceride concentrations when feeding a highprotein diet than a moderate-protein diet, but the carbohydrate content of this high-protein diet was higher than in our experiments.
High-protein diets are postulated to be beneficial in promoting weight loss and better glycaemic control with normalized insulin levels in obese humans, who have been hyperinsulinaemic before [27,28,51]. Overall, there is some evidence that the beneficial influence of high-protein diets for weight loss and glycaemic control is similar in humans and cats. However, the results in the literature concerning cats are inconsistent. A study on cats [52] described a tendency toward higher insulin levels on a high-protein diet than on a highcarbohydrate or high-fat diet. In contrast, Backus et al. [22] showed the highest insulin concentrations when feeding a high-fat diet, and Hewson-Hughes et al. [53] showed higher insulin concentrations with a high-starch diet than with moderate-or low-starch diets in lean, healthy cats. In the present study, as in the study of Hewson-Hughes et al. [53], the highest insulin concentrations were measured with the HCH diet. It is known, that carbohydrate source also has a great influence on glucose and insulin response [23], and white rice is a very high-glycaemic carbohydrate source [23]. The HP diet produced lower mean insulin concentrations than the other 2 diets (HCH and HF) fed in the present study. Therefore, further studies are needed to determine the influence of macronutrients on glycaemic control in cats.
The present study revealed higher levels of leptin with the HCH diet than with the two other diets. According to the literature, we had expected to find increased leptin levels with a high-fat diet [47]. In contrast to Thiess et al. [47], who showed minimally higher leptin values with a high-fat diet, our study showed higher leptin levels with the HCH diet. However, Thiess et al., [47] did not determine the cats' BCS and it is unclear whether the cats gained weight during the study. As shown by Backus et al. [22], leptin was not influenced by dietary fat content, but by body fat content. Since the DXA measurements showed no difference in the body fat content of the cats, the higher leptin levels with the HCH diet in the present study cannot be explained. Thus, the reason for the differences in the leptin concentrations in the present study remains unclear.
Conclusion
To improve cats' health, it is important to keep their weight within the normal range. This is more important for cats predisposed to overweight. The aim of the present study was to investigate whether cats predisposed to overweight react differently to diets with various macronutrient compositions (HCH, HP, HF) compared with lean cats. In the present study, no different metabolic response was measured between the predisposed to overweight and lean cats of the investigated cat colony. Only small differences in insulin levels could be shown. However, normal insulin sensitivity in the lean-state po cats was measured. It is still unclear why the po cats show same insulin sensitivity with lower plasma insulin concentrations. Thus, cats from our cat population, that return from overweight to normal weight have a minimized risk for insulin resistance and show normal insulin sensitivity, even if their insulin sensitivity was slightly decreased when they were overweight.
Due to these results in the present study, no differences in metabolic response in the measured blood parameters between cats with and without predisposition to overweight could be shown. The present data provide the first indications of beneficial effects on insulin sensitivity from avoiding high-carbohydrate diets with a high-glycaemic index, especially in cats with predisposition to overweight. As known from the literature, a high-carbohydrate diet with high-glycaemic carbohydrate sources increases the risk of weight gain. The results of the present study show that a highprotein diet and normal body weight could be advantageous for cats, consistent with its ability to prevent obesity and type 2 diabetes mellitus. This is another important hint that cats could be a useful model for obesity and type 2 diabetes development or prevention in human beings. | 5,747 | 2017-06-19T00:00:00.000 | [
"Biology",
"Medicine"
] |
Hypersonic FLEET velocimetry and uncertainty characterization in a tripped boundary layer
Femtosecond laser electronic excitation tagging (FLEET) velocimetry is applied in a hypersonic boundary layer behind an array of turbulence-inducing trips. One-dimensional mean velocity and root-mean-square (RMS) of velocity fluctuation profiles are extracted from FLEET emissions oriented across a 2.75∘ wedge and through a boundary layer above a flat plate in two test campaigns spanning 21 tunnel runs. The experiment was performed in the Texas A&M University Actively Controlled Expansion tunnel that operated near Mach 6.0 with a Reynolds number near 6 × 106 m−1 and a working fluid of air at a density near 2.5 × 10−2 kg m−3. Detailed analysis of random and systematic errors was performed using synthetic curves for error in the mean velocity due to emission decay and the error in the RMS velocity fluctuation due to random error. The boundary layer behind an array of turbulence-inducing trips is documented to show the breakdown of coherent structures. FLEET velocimetry is compared to the tunnel Data Acquisition System, Vibrationally Excited Nitric Oxide Monitoring results, and Reynolds-Averaged Navier–Stokes computational fluid dynamics to verify results.
Introduction
The transition to turbulence in hypersonic boundary layers plays a large role in the heating, entropy production, and generation of drag for hypersonic vehicles [1,2].One of the primary design tools available for predicting hypersonic boundary layers is computational fluid dynamics (CFD).Appropriate use of CFD design tools can reduce the time and the cost associated with the development of hypersonic vehicles by reducing manufacturing and testing costs of the vehicles and optimizing design strategy.In order to improve confidence in the CFD simulation accuracy, and thus confidence in hypersonic vehicle design, rigorous validation of CFD tools using highquality experimental data sets is required [3].As documented by Oberkampf and Smith, datasets used for CFD validation must have a high completeness level and well-quantified experimental uncertainty.For hypersonic flows, a dataset with high completeness will have high temporal and spatial resolution without significantly disturbing the flow.Experimental setup and test conditions must be carefully documented to ensure that data is reproducible.Sources of random and systematic errors must be quantified to bound validation efforts.Simple test articles often called canonical models are useful for understanding the flow phenomenon in boundary layers involving certain types of processes or interactions as discussed by Gaitonde [4].
In this paper, femtosecond laser electronic excitation tagging (FLEET) is chosen to perform one-dimensional velocimetry in a tripped hypersonic boundary layer with the goal of understanding the considerations and limitations of FLEET as a validation quality diagnostic method for CFD.The technique of FLEET has matured quickly since its inception a decade ago on bench top experiments reported by Michael et al and has quickly become a popular diagnostic technique for hypersonic flows [5].FLEET can be classified as a molecular tagging velocimetry (MTV) technique and offers nonintrusive, seedless velocimetry in pure nitrogen or air flows in a variety of configurations.FLEET is performed by imaging the longlived fluorescence of nitrogen molecules tagged by a focused femtosecond laser beam at various time delays to record gas displacement.The fluorescence of nitrogen molecules is the result of dissociation and ionization of nitrogen molecules into multiple excited states which recombine, emit photons, and return to the ground state in a rate-limited process [6][7][8].FLEET was first used in orthogonal detection configurations, but many alternative beam orientations have been used and are discussed hereafter.FLEET has been applied in the bore sight configuration where a short focal length lens is used to create an emissive spot to identify two-dimensional velocity [9].Multiple femtosecond beams have been used to generate crossing focused beams to track two-dimensional velocity and local vorticity [2].FLEET has also been applied using selective masking to produce a more continuous onedimensional velocity field [10].A single femtosecond beam was split into a grid pattern and imaged in a bore sight configuration to track two-dimensional velocity and vorticity [11].FLEET has also been applied in the wall-normal imaging orientation with the femtosecond beam terminating on the surface of the model or in a beam port in the model [12,13].Work by Limbach showed heating of the gas caused by traditional FLEET diagnostics [7,14], motivating the development of Selective two-photon absorptive resonance femtosecond laser electronic excitation tagging (STARFLEET).STARFLEET reduces the thermal energy deposited in the flow field while simultaneously increasing the signal and emission lifetime [15][16][17].FLEET has been successfully applied in the AEDC Hypervelocity Wind Tunnel and Sandia's Hypersonic Wind Tunnel which both use a working fluid of pure nitrogen [15,18].Experiments in the AFRL Mach-6 Ludwieg tube has further shown that FLEET diagnostics can produce reliable estimates of mean velocity in a hypersonic tunnel with a working fluid of air [19].
With sufficient signal, single-shot FLEET has been used to measure flow fluctuations.Burns measured the distribution of instantaneous velocities in the freestream of the NASA Langley 0.3 meter Transonic Cryogenic Tunnel [20].Dogariu and Hill have separately conducted campaigns wherein singleshot FLEET has been used to find velocity fluctuation above the surface of test articles in hypersonic tunnels [13,18].In each of these tests the measured instantaneous velocity distribution was used to calculate the one-dimensional velocity fluctuation.Measurement of accurate velocity fluctuation requires low single-shot error, as the distribution of instantaneous velocity can be significantly impacted by imprecise measurements.
FLEET also has several challenges associated with its application to hypersonic flow diagnostics.FLEET emission in air is substantially weaker and has a shorter lifetime than FLEET emission in nitrogen because of high quenching rates caused by oxygen [21].The intensity of FLEET emission decays bi-exponentially over time which will cause systematic under-prediction of the mean velocity if the exposure duration is substantial relative to the emission decay time scale [18,22].Measurement precision is the distribution of velocity caused by random measurement errors and has been investigated by Peters [8,21] who measured the distribution of singleshot FLEET velocity in a benchtop experiment.Precision was found to be dependent on the camera system, signal-to-noise ratio (SNR), and emission decay which all contribute to errors in a centroid fitting routine for images [8,22].Lower signal from FLEET in a working fluid of air, and a long integration time, resulted in larger random errors [5,16].Additionally, a short delay between successive camera exposures, required by short signal lifetime, amplified the impact of random errors [18].In test where precision is on the order of the RMS velocity fluctuation, FLEET measurements of velocity fluctuations are expected to be inaccurate.
The primary objective of this paper is the characterization of boundary layer transition to turbulence due to discrete roughness elements as a precursor to future studies [23,24] of shock-boundary layer interactions in the Actively Controlled Expansion (ACE) hypersonic wind tunnel at Texas A&M University.For this purpose, the authors have applied the FLEET MTV technique to measure velocity profiles behind the tripping array over 21 wind tunnel entries.The secondary objective of the paper is to quantify and correct for random and systematic errors due to low measurement precision and rapid emission decay respectively, which are substantial during operation of the ACE tunnel with air.Notably, this work also reports the first FLEET measurements ever obtained in the ACE tunnel at Texas A&M University.
Experimental methods
To investigate the mean and RMS velocity fluctuation profiles, two sets of experiments were conducted at the Texas A&M National Aerothermochemistry and Hypersonics Laboratory in collaboration with the Aerospace Laboratory for Lasers, ElectroMagnetics and Optics.The test facility, femtosecond laser, experimental methods, data collection system, and image processing program are discussed below.
ACE tunnel
All experiments were conducted in the ACE tunnel at the Texas A&M University NAL.This facility is a pressure-vacuum blow-down hypersonic wind tunnel with an operating fluid of dry air.A schematic of the ACE tunnel is shown in figure 1.Details on ACE design, calibration, and freestream turbulence are provided in [25][26][27][28].The test section cross-sectional area of ACE was 22.9 cm × 35.6 cm.The windows used for this experiment were 2.54 cm thick uncoated fused silica.The run conditions for the tunnel used in the two test campaigns are shown in table 1.The static pressure present in the freestream of the tunnel was 3.3 Torr, while freestream density was calculated to be 0.025 kg m −3 .
Test articles in the ACE tunnel
The first of the two test articles tested in ACE was a 2.75 • halfangle wedge article.Flat plate models in ACE have resulted in pressure differentials that cause streamlines to wrap over the sides of the test article, while the wedge test article produces a boundary layer with a more uniform pressure gradient [29].This test article has a length of 0.508 m and a width of 0.216 m.Extensive previous testing with the wedge test article has been conducted using oil flow, schlieren imaging, IR thermography, high-frequency pressure transducers, kulite sensors, pitot traverses, Optical Emission Spectroscopy, Planar Laser-Induced Flourescence themography and velocimetry [30], and Vibrationally Excited Nitric Oxide Measurements (VENOM) [29,31].Figure 2(a) shows the wedge model mounted in a cutaway of the test section of the ACE tunnel.FLEET measurement locations are reported relative to the trailing edge of the center of the tripping array.The streamwise distance was measured parallel to the model surface in the flow direction, while height was measured normal to the model surface.The wedge test article was fit with a row of tripping elements behind the leading edge to transition the boundary layer to turbulence.The tripping array is sometimes referred to as a set of 'pizza-box' trips [32].Shrestha and Candler used direct numerical simulation to investigate similar tripping elements which provide context to the results observed in this experiment [33][34][35].The tripping array used with the wedge test article was composed of 23 elements each with a square footprint with a diagonal length of 3.42 mm and inter-trip spacing of 3.42 mm.The height of each trip was 2.57 mm and designed to be 1.3 times the height of the incoming laminar boundary layer.
The second of the two models tested in ACE was the base plate of the canonical inlet model discussed by Limbach and coworkers [36].The canonical inlet model was designed with removable side walls and inserts in the plate.The model was run in the ACE tunnel in the configuration shown in figure 2(b), amounting to a flat plate with an array of tripping elements placed near the leading edge of the model.The dimensions of individual tripping elements used for the flat plate test article were identical to the ones used with the wedge test article.An insert with blind holes hereafter called beam ports was placed in the flat plate which was used for femtosecond beam routing.The beam ports in the insert were the only exposed holes on the surface of the model; all other holes were sealed for the run and were not expected to impact the flow.The beam port had a diameter of 1.95 mm and a depth of 8.5 mm.The diameter of the beam port used in this experiment was selected based on work done by Hill wherein a 3.81 mm port was utilized in a similar FLEET tagging experiment and was documented to not disturb the boundary layer [13].The depth of the port was limited by the geometry of the insert within the test article.A through-hole was avoided to prevent flow through the hole caused by the pressure differential across the plate.
Femtosecond laser and beam routing
FLEET measurements were performed using a Spectra-Physics Solstice ACE femtosecond laser system providing 90 fs pulses at a 1 kHz repetition rate.The laser operated at 811 nm with a maximum of 8 mJ pulse energy.Femtosecond laser pulses were routed approximately 6 m from the laser system to the ACE tunnel test section, over which the Gaussian spatial energy distribution was retained.In both test campaigns a single planoconvex lens was used to focus the femtosecond laser into a single beam in the tunnel.
In the spanwise measurement campaign, the femtosecond laser beam was used to perform diagnostics for 20 locations behind the tripping array in and near the boundary layer.The output pulses were limited to 2.5 mJ in the test section for all measurements using a waveplate-polarizer optical attenuator to limit supercontinuum generation in the uncoated fused silica windows.This energy was chosen based on signaloptimization conducted in atmosphere before the campaign.The laser had a diameter of roughly 18 mm at the final lens, and was focused with a 300 mm planoconvex lens.The streamwise locations of the measurements were chosen based on optical accessibility in both the side and top ACE tunnel windows.The height of the spanwise measurements were selected to begin just outside the boundary layer and continue down to as near the surface of the model as possible.The minimum measurement height during spanwise testing was 1.5 mm above the surface of the model due to the laser beam clipping the sides of the test article.Figure 3 shows the path of the laser beam through the test section.
In the wall-normal measurement campaign, the femtosecond beam was routed to penetrate the boundary layer and to terminate in a beam port 137.5 mm behind the tripping array to permit FLEET measurements as close to the surface of the model as possible, see figure 4. For this experiment, a 200 mm planoconvex lens was utilized to focus femtosecond pulses through a fused silica window to generate FLEET emissions up to 10 mm above the surface of the test article.Beam energy for this experiment was similar to the spanwise measurement campaign with 2.5 mJ expected in the test section.
Data collection
Data collection for both experiments was performed with a Photron Fastcam SA-Z complementary metal oxide semiconductor camera coupled with a LaVision HighSpeed intensified relay optics (IRO) image intensifier.The intensifier unit was equipped with an S25 photocathode which was well-suited to amplify the visible and near-infrared emission coming from FLEET.A ZEISS Milvus 100 mm Macro Lens collected light through a 750 nm lowpass filter with an optical density of 4 that suppressed scattered laser light into the lens.A BNC Model 577 Pulse Generator synchronized the camera-IRO system with the laser pulse.The image intensifier was run in the burst-gating mode to superimpose the initial and displaced FLEET signal onto a single camera image to improve experimental repeatability.
In the spanwise FLEET measurement campaign, the intensifier unit was run in burst mode to superimpose three separate gates to show the displacement of emission.Of the three gates used, only the first two were found to have sufficient SNR to be used for velocimetry.Table 2 shows the gate and delay pairs for the IRO during the run.An intensifier gain of 80% was chosen to amplify FLEET emissions without saturating the camera sensor.Reference images were taken at each diagnostic location to provide spatial calibration and location in a global coordinate system.The image resolution was approximately 25 µm per pixel but varied slightly between runs due to changes in optical alignment.
In the wall-normal FLEET measurement tests the delay between the initial gate and single displaced gate was maximized to increase the displacement between the captured emissions to improve near-wall velocimetry.Table 2 shows the settings of the IRO used for the wall-normal measurements.The camera and intensifier unit were declined 3.25 • to look down onto the plate, which improved the ability of the system to capture near-wall emissions.Image resolution was approximately 50 µm per pixel in this configuration due to the further standoff distance from the test article when capturing data through a window on the side of the ACE tunnel.
Data processing
The images of FLEET emission were processed based on similar MTV and other FLEET data processing techniques.
Each tagged line was fit to an assumed Gaussian line shape to identify the streamwise location of tagged molecules at a given time.Simpler cross-correlation methods were impossible with this data set because emissions were superimposed in the same image.A MATLAB program, described hereafter, fits singleshot images to identify instantaneous velocity distributions that are then used to calculate mean velocity and RMS velocity fluctuation.
Image preprocessing
The reference images were examined to identify the image scale, physical location, and orientation of the image plane relative to the test articles.Next, images were passed through a program described by Limbach that uses a pixel-wise histogram approach to identify and eliminate outliers, especially due to dust particles, in sequential images [7].The mean intensity of each pixel within images, after outlier rejection, was used to represent time-averaged FLEET.
The time-averaged emission images, representing the average from 1000 images evenly obtained from the run duration, were used to determine the bounds used to fit the instantaneous FLEET profiles.The bounds limited the Gaussian centroid locations to four times the standard deviation of intensity for each acceptable image.The time-averaged gate amplitudes were fit with a single-exponential decay equation to identify a representative decay constant for the FLEET emissions using the time-independent emission intensity to account for the unequal IRO gate duration.The SNR of the third gate window was so low that it was excluded in the fit.That is, the fitting used only undisplaced line and first displaced gate interval to identify a single-exponential decay constant τ .While real FLEET emission is expected to be modeled more accurately with a bi-exponential function [16,22], the use of a single-exponential fit has been used previously by Dogariu et al [18].The FWHM of the initial line width was approximately 310 µm, while the FWHM of the displaced line width was approximately 625 µm.
In the spanwise measurement campaign conducted using the wedge test article, light from the FLEET emissions was reflected off the surface of the model and was observed as a background behind the FLEET signal.The reflected light was proportional to the original FLEET emission intensity and decreased with measurement height above the plate.This reflected light was fit with a two-dimensional Gaussian surface, and subtracted from images in a test, to minimize the effects of the this background light on the results.Figure 5 shows single shot and time-averaged FLEET emissions for a single diagnostic location in the wake of a turbulence-inducing trip element.
The wall-normal FLEET images required special considerations to estimate the displacement of FLEET emissions near the wall where the displaced gate overlaps strongly with the initial gate as shown in figure 6. Near the beam ports, significant scattered light interfered with the FLEET signal.To address this, a two-dimensional Gaussian was fit to the reflected light from the beam port.The reflected light from the hole did not saturate the camera, permitting subtraction of their effect from images.The reflected light mask was identified for time-averaged image and applied by subtracting it from each single shot image.Variations in the beam port reflections caused the program to increase fluctuations near the surface.Second, the first gate emission was fit far from the wall and then extrapolated as a line to the surface of the test article.The extrapolated fitted values for the initial emissions and the subtraction of the reflected light from the hole injected additional noise near the surface, resulting in greater random error.
Centroid fitting
The processing program loaded, cropped, and registered single-shot images prior to fitting.Image registration was performed using the MATLAB image processing toolbox because of camera motion during the run.All registration translations were tracked and factored into measurement location uncertainty.The images were fit using a double-Gaussian equation, as shown in equation ( 1), to each streamwise row of data using a nonlinear least squares fitting procedure.Figures 7 and 8 show time-averaged and single-shot rows of data fit using equation (1).Fitting was used to identify the location of the FLEET emissions in the camera image, which were called emission centroids.A double-Gaussian fit was chosen over alternative centroid-finding algorithms such pseudo-Voigt fitting because it provided a good estimate of the centroid while simultaneously limiting the number of free variables.This conclusion was made after a comparison of results fit with both a Gaussian distribution and a pseudo-Voigt fit, in which an insignificant difference was observed in centroid location between the two methods.The amplitude of the emission was represented by a, the location of the emissions in the image was represented by µ, and the width of the emissions was represented by σ.The subscripts correspond to the first (initial) and second (displaced) emissions respectively ( The time between laser tagging and the first gate was less than 100 ns, corresponding to a displacement of fewer than four pixels in the streamwise direction.The small streamwise displacement, and the primarily streamwise flow direction, permitted fitting the first gate as a line.The excellent SNR, as well as relatively short exposure duration compared to the emission decay time constant, minimized the impact of random and systematic errors in the identification of this centroid.The subscript m in the variables µ 1,m and µ 2,m indicates that the these are 'measured' value that in general contain significant contributions from random and systematic error terms as later explicitly defined in equation (12).The displaced centroid had a reduced SNR and a relatively long gate duration, and was not fit as a line to retain spanwise resolution.The error associated from these facts caused the expected error in µ 2,m to be significantly greater than in µ 1,m .In later data processing only the error in µ 2 is considered, However, the error quantification process presented in this work could be applied to both exposures if the error was of a similar scale.The fitting routine applied the time-averaged fitting bounds to identify the emission centroids with 95% confidence intervals.The centroids were obtained for roughly 6 mm across the span of the test article in the spanwise measurement campaign, and 10 mm in height for the wall-normal campaign, defining the spatial extent of viable data.
In every image, SNR was measured as the integral of the signal from the Gaussian distribution fitted to the displaced gate, divided by the residual of the captured emissions to that same Gaussian distribution [16].The displaced gate was chosen for SNR calculations because of its substantially weaker signal and the increased variability compared to the initial gate.Camera pixel rows were not binned, as recommended by Reese [16,37] if the SNR is less than 4, to retain full camera resolution.Velocimetry uncertainty was documented to provide an alternative avenue for bounding errors.
Velocimetry and filtering
The instantaneous velocity was computed using the known time delay between the midpoint of the two IRO gates (t i ), the identified image scale (s), and the calculated displacement between the two centroids (µ i ) as shown in equation (2).A single displacement was used for velocimetry because only one of the two displaced gate were accurately captured.The random uncertainty in centroid location and systematic uncertainty in image scale were propagated into the raw instantaneous velocity Various metrics were used to filter the instantaneous velocity profiles.A mean signal threshold was applied for each row, isolating the region of emissions with sufficient signal for processing.The coefficient of determination (R 2 ), SNR, and velocity uncertainty were each used to filter data, eliminating 7% of data points.A final filtering metric was applied that eliminated data in which the fitted centroids were near the bounds and eliminated an additional 2% of the data points.The centroid fitting bounds corresponded to a difference 300 m s −1 from the mean velocity.This filtering metric was reviewed a posteriori and showed that only data approximately 4σ from the mean velocity were removed, giving the authors confidence that a negligible amount of viable data was eliminated.
The mean velocity was calculated by passing the curated data into equation (3), where instantaneous velocity measurements (V i,m ) were evaluated for the total number of images (N).Equation ( 4) was used to calculate the RMS velocity fluctuation ( V ′2 ).The one-dimensional velocities calculated in camera space were projected in the streamwise direction using the orientation and known inclination of the test article with respect to the tunnel axis.The mean ( Vm ) and RMS velocity fluctuation (V RMS,m ) velocity were reported as the projection of the velocity in the streamwise direction.These calculations are represented as measured values with the subscript 'm', as calibrations are applied to both the mean velocity and RMS velocity fluctuation The random and systematic errors in the instantaneous velocities were propagated into the mean velocity and RMS velocity fluctuation via standard error propagation as shown in equations ( 5)- (7).The prescript δ is used to denote uncertainty in a measured or derived quantity.Additional uncertainty contributions were included due to uncertainty in the beam pointing angle and flow inclination angle to represent the uncertainty in the projection from camera space to the captured velocity projection.The uncertainty quantified up until this point includes contributions from centroid finding, timing, image scale, beam pointing angle, and flow inclination angle
Synthetic data
While emission decay is already known to impact the mean velocity of FLEET measurements [18,22], the impact of random measurement error on the RMS velocity fluctuation is not as well documented.Several assumptions about the flow field and characteristic of the measurement error were made to quantify the impact of measurement error.Real flow fluctuations were assumed to follow a normal distribution of instantaneous velocities about a central mean velocity as shown in figure 9(a).The measurement error was also represented by a normal probability distribution as shown in figure 9(b), with the width of that distribution dependent on the SNR and camera system.Random error is believed to be approximately normal after analysis of a probability distribution of measured instantaneous FLEET velocities in the freestream of the ACE tunnel, where random error is large relative real flow fluctuations.In these conditions it was observed that the distribution of errors appears nearly Gaussian with a slightly greater kertosis observed by heavy tails.By assuming both the real flow fluctuations and error are normal distributions and uncorrelated, the two distributions can assumed to be added in quadrature.Real or simulated measurements with no physical flow fluctuations result in the measured RMS fluctuating velocity representing exclusively measurement error.Synthetic data was generated to imitate captured FLEET emissions with known mean velocity and no fluctuating velocity to permit direct analysis of the resulting mean and RMS velocity fluctuation errors.This model imitated 8) which represents a Gaussian emission in a two-dimensional camera space at a given time, with amplitude (A) modeled by equation ( 9) and centroid location modeled using the spanwise velocity profile in equation (10).In equation ( 9), τ represents the 1/e decay constant associated with FLEET emission in ACE.An arbitrary streamwise position of the laser within the image is provided by µ 0 .This modeling approach is supported by previous fitting efforts that have shown a double-Gaussian approach to FLEET velocimetry to be accurate [15,16,38].A one-dimensional fluid diffusion model was used to represent the diffusion of the FLEET emission, which was fit at various time delays to identify a line width as a function of time σ(t).A rigid sphere model for pure nitrogen self-diffusion was used as the estimate for emission diffusion.This process increased the width of FLEET emission at longer time delays This displacing ideal Gaussian emission shape is then timeintegrated using equation (11).The times for integration, t 0 and t f are set to match the initial and final time of a single intensifier exposure, with subsequent exposures added in summation.The final intensity profile is shown in figure 10(a), with the streamwise and spanwise directions indicated by variables x and y respectively.The final step in the synthetic data generation process was the addition of noise which was essential to replicate the random measurement error observed when processing the real FLEET images.A synthetic binary map was generated that suppresses a portion of the ideal synthetic emissions, between 25%-75% of pixels depending on the SNR of emissions in that row.The binary mapping feature was used to replicate what is believed to be an artifact in images as the FLEET signal approaches the detection threshold of the intensifier system as observed in figures 5 and 6.Binary mapping was performed using input from experimental FLEET results outside of the boundary layer, namely the spanwise locations furthest from the surface and the upper-region of the wall-normal results.By replicating the noise pattern and SNR in this region, where inherent flow fluctuations are minimal, the final noise binary map was tuned to this data set.The amplitude of emissions was normalized using the binary map to ensure that the timeaveraged emissions were not altered due to the suppression of individual pixels.A Poisson distribution was used on the scaled synthetic data to represent shot noise before the emissions pass through a hypothetical intensifier unit.The data was rescaled and passed through a Gaussian blurring function to replicate the camera system imaging the intensifier unit.Beam focusing was replicated using a spanwise scaling factor on the synthetic data.Figure 10(b) shows a synthetic image with noise generated to match a captured FLEET data set.
Synthetic images similar to figure 10(b) were generated for each FLEET measurement.Each image set was processed using the same program as for the original FLEET images, obtaining synthetic mean and RMS velocity fluctuation.
Application of synthetic data
To develop equations that quantify the impact of error, random and systematic error terms were modeled as factors impacting the displaced centroid location.As previously discussed, the displaced centroid had a lower SNR and longer gate duration than the initial fit centroid, as well as was not fit as a line, all causing this centroid to be the dominant source of both random and systematic error.For this reason the random and systematic error in µ 1 is neglected in future calculations.The measured centroid of the displaced gate was modeled as the summation of the real tagged molecule location (µ i,2 ), systematic error (µ i,2,err (τ )), and random error (µ ′ i,2,err (SNR)) as shown in equation (12).This definition of the centroid was plugged into equation ( 2) to produce equation ( 13), using the Reynolds decomposition to separate mean and fluctuating velocity Equation (13) shows the buildup of the four significant contributors to the measured FLEET velocity: the real mean ( V) and fluctuating (V ′ i ) velocity of the flow field, the systematic error in velocity due to emission decay ( Verr (τ )), and the random error in velocity due to measurement error due to the SNR (V ′ i,err (SNR)).The systematic error is due to emission skewing caused by emission decay (τ ), which has been documented to be significant when the decay time of emissions is on the order of the IRO exposure [18,21].The random error in velocity, sometimes called imprecision, has been shown to vary as a function of the SNR [21,22].It is hypothesized that in the ACE tunnel, Verr (τ ) and V ′ i,err (SNR) are especially substantial due to the long second-gate exposure relative the decay-time of emissions, and because of the low SNR.
The systematic error in the mean velocity was obtained by fitting the displaced gate from the ideal synthetic image, such as in figure 10(a), with a Gaussian distribution and documenting the error in the velocity caused by emission decay.Figure 11 shows the process of identifying µ i,2,err for a single input decay constant.Emission strength was significantly skewed across the IRO gate, resulting in centroids fitted nearer to the initial lasing location because of decaying emission intensity.The error in the centroid prediction is then calculated as Verr (τ ).Equation ( 14) shows how this identified error in the mean velocity can then be used to calibrate the mean velocity from experimental FLEET results.An example of the mean velocity correction is shown in figure 12(b), next to the decay constant in figure 12(a) The difference between the physical midpoint of the displaced gate and the fitted centroid was generally on the order of two pixels, which translated to Verr (τ ) ≃ 25 m s −1 .The large impact of emission decay on measurements, compared to other FLEET measurements, was because the displaced gate had a long exposure time (500 ns) relative to the emission decay time (1/e ≃ 400 ns) of FLEET emissions in the low-pressure air in the ACE tunnel [18].The decay constant was calculated on a per-image basis to be constant throughout the spanwise campaign, while a small variation in the decay time was observed for measurements made through a boundary layer in the wallnormal campaign.A calibration value, or curve (as a function of the decay constant) in the case of the wall-normal measurement, was obtained for the measured mean velocity.
The random error in the measured velocity was obtained by fitting the RMS velocity fluctuation from the synthetic data as a single-exponential function of the SNR (V RMS,err (SNR)) as shown in figure 13(a).In this figure the spanwise resolution of the measurement is exploited to visualize the relationship between measured RMS velocity fluctuation and the SNR.In this plot the V RMS,err represents the precision floor of the measurement.This fit was then subtracted in quadrature from the measured RMS velocity fluctuation as shown in equation (15).Equation (16) shows the relationship between the imprecision measured as RMS velocity fluctuation and single-shot measurement error from equation (13).Figure 13(c) shows the RMS velocity fluctuation before and after calibration, with figure 13(b) showing the SNR over that same span.From figure 13(c) it is clear that the calibration reduced the dependence of the fluctuation on the SNR over the span.The uncertainty in the fit to the measurement imprecision was propagated into the uncertainty in the real RMS velocity fluctuation
Results
This section discusses the velocimetry results obtained from experiments in the ACE tunnel for the spanwise and wall-normal campaigns.Section 4.1 discusses the velocimetry results of the spanwise test campaign before and after the implementation of the calibration curves for mean velocity
Calibration of FLEET results
The FLEET images from 20 unique runs specified in table 3 were processed to calculate the mean velocity and RMS velocity fluctuation and associated 95% confidence intervals for the uncertainties considered in this campaign.The steady-state tunnel run duration for the spanwise tests were approximately 20 s, while the run duration for the wall-normal measurement was approximately 10 s, resulting in 10-20 thousand images processed per run.Diagnostic locations were measured relative to the trailing edge of the tripping array and normal to the surface of the test article.The spanwise distances used for velocity plots were normalized by the trip scale shown in figure 14.The FLEET results are compared to similar measurements before and after calibration to show the need for calibration as well as to show the change in results due to calibration.Table 4 is a summary of the calibration results.
The uncorrected mean velocity was compared against the freestream tunnel velocity.The ACE tunnel has a dataacquisition (DAQ) system that recorded stagnation pressure in the tunnel settling chamber using a pitot probe, freestream pressure using a pressure tap in the nozzle, and stagnation temperature using a thermocouple in the settling chamber [25][26][27].
The assumption of isentropic flow permitted the calculation of the freestream Mach number and velocity behind the shock produced by the leading edge of the wedge model.Uncertainty in the measured ACE DAQ freestream velocity was estimated to be 0.39% by Buen et al [29], however, stagnation chamber pressure varied slightly over the run which caused freestream velocity variation less than 1% of the mean freestream velocity over the course of the steady-state portion of the run.
The FLEET data collected at the 130 and 255 mm downstream locations, and 12 mm normal to the plate, best approximate the freestream velocity above the boundary layer.Previous testing with this model has provided confidence that these measurement locations lie outside the boundary layer [29,31].Tunnel velocity reported by the DAQ was 857 m s −1 was compared to the average of the uncorrected FLEET from the two measurements in the freestream which was predicted to be only 829.5 ± 8.8 m s −1 .The average of uncorrected RMS velocity fluctuation in the freestream was measured with FLEET to be 55.3 ± 4.3 m s −1 .However, the tunnel freestream velocity fluctuations were believed to be more accurately measured using VENOM at around 8.7 m s −1 (1% of the freestream velocity) [31].The error in the FLEET measurements was attributed to the aforementioned systematic errors caused by emission decay and measurement imprecision.The magnitude of these errors motivated the adoption of the calibration curves defined in equations ( 14) and (15).
FLEET emissions were fit to find the τ to vary between 450 ns in the boundary layer and 350 ns in the freestream.The calibration curve for mean velocity was obtained using synthetic data as described in section 3.5, which provided a normalized magnitude of calibration of 2%-4%.The FLEET results for RMS of velocity uncertainty were similarly calibrated using equation (15).The calibration curve V RMS,err (SNR) was generally on the order of 5%-8% of the freestream velocity (40-60 m s −1 ).In some FLEET measurements, increases in measured fluctuations velocity near the edges of the measurement region could not be attributed to imprecision and were not accounted for in the RMS fluctuation correction.It is believed that a minimum SNR (of around 3.5-4 using the definition of SNR provided) is required to calculate RMS velocity fluctuation, which was not achieved for the entire span of a small subset of the data.
The calibrated FLEET data was compared against the ACE DAQ and previous VENOM velocimetry conducted with the same test article to quantify the validity of the mean velocity and RMS velocity fluctuation calibration.Table 4 shows a summary of the same comparison.Figure 15 shows postcalibration freestream FLEET measurements had less than 1% error compared with the ACE DAQ system.This provided confidence in the formulation and application of Verr (τ ).
The FLEET results were then compared against spanwise VENOM results in ACE collected with the same test article and tunnel conditions [31], highlighted in table 4. Freestream velocity fluctuations were measured using VENOM 12 mm above the wedge test article and 380 mm downstream the tripping array to be approximately 1% with a mean velocity estimated to be 825 ± 25 m s −1 .A 1% magnitude of freestream velocity fluctuations was consistent with previous studies in ACE [27].Averaging the two calibrated FLEET measurements that best estimate freestream conditions resulted in a mean velocity of 862.2 ± 12.2 m s −1 and a velocity fluctuation of 1.9 ± 1.5%.A single spanwise FLEET measurement, at location of 53 mm downstream the tripping array and normal height of 3 mm, was compared to a VENOM measurement, at a downstream location of 55 mm at approximately same height.The lower and upper bounds for the mean velocity and RMS velocity fluctuation are provided, as the boundary layer shows substantial non-uniformities due to the trip elements.The calibrated FLEET shows moderate agreement with the VENOM results, with the difference in the lower-bound for mean velocity attributed to insufficient spatial resolution in the VENOM and perhaps slightly different diagnostic heights.At a distance 340 mm downstream of the trips and a normal height near 3 mm above the surface, VENOM measured a mean velocity of 637 m s −1 with velocity fluctuations 10% of the mean velocity [31].FLEET results were linearly interpolated between the 255 and 380 mm locations to match the same location as VENOM, which resulted in a mean velocity of 640.2 ± 9.4 m s −1 and a velocity fluctuation of 9.6 ± 1.7%.This was a close agreement given the spatial interpolation and slight ACE tunnel run condition differences between the measurement sets.These comparisons show that the calibration of FLEET results was moderately successful, improving the accuracy of mean and RMS velocity fluctuation measurements in the freestream and boundary layer.
Quantifying the transition to turbulence in the boundary layer
With the accuracy of results improved through synthetic calibration data, the transition to turbulence in the boundary layer was documented.Figure 16(a), a visual representation of table 3, shows the diagnostic locations at which mean velocity and RMS velocity fluctuation were measured.Figures 16(b) and (c) show the mean and RMS velocity fluctuation respectively for all locations at a normal height of 3 mm above the surface of the wedge test article.
The mean velocity and RMS velocity fluctuation were measured with spanwise uncertainty of approximately 1/2 mm, while streamwise location uncertainty was 1 mm.The mean velocity at 21.5 and 53 mm in figure 16(b) show local minimums behind that were attributed to the flow deceleration in the wake behind the tripping array, while the local minimum between the tripping elements was attributed to counter-rotating vortex pairs generated by the array.Local maximums were observed in the RMS velocity fluctuation in figure 16(c) for the 21.5 and 53 mm data near the sides of each tripping element (±0.2) which was attributed to instabilities in the generation of the vortex pair.The strong spanwise variability observed as far downstream as 53 mm was substantially reduced by the later measurement locations.The increased spanwise consistency in the mean velocity and RMS velocity fluctuation at station 130 mm indicates that the flow was transitioning to turbulence.The downstream stations at 255 mm and 380 mm do not have substantial spanwise variation and were thus considered either highly transitional or fully turbulent flows.
The mean velocity and RMS velocity fluctuation at the 53, 130, and 380 mm downstream measurement locations are shown in figures 17(a)-18(c), wherein the results were visualized using linear interpolation between the measurement locations shown by the dotted lines.Figure 17(a) shows the flow deceleration in the shaded region behind the tripping array that was later washed out by the downstream stations.Figure 18(a) shows two local maximums in the RMS velocity fluctuation at the sides of a turbulence-inducing trip element.These local maximums were believed to be the result of a counter-rotating vortex pair observed behind tripping elements in similar DNS simulations [35].The results became substantially more spanwise-consistent at the downstream locations, indicating transitioning flow after 130 mm downstream of the trips.
In figure 18(a), the local increases in the normalized RMS velocity fluctuation at near ±0.4 normalized spanwise distance is believed to be due to measurement error rather than real fluctuations in the flow field.It was observed that the defocusing of the femtosecond laser filament caused a decrease in the SNR ratio that translated into increased random error.This additional random error resulted in a higher measured RMS velocity fluctuation that could not be easily eliminated using synthetic data.
Wall-normal FLEET velocimetry
The wall-normal FLEET velocity results were calibrated using synthetic data similar to the spanwise campaign.Unlike in the spanwise test campaign, the decay constant was observed to change as a function of height, requiring a variable mean velocity calibration.Wall-normal FLEET mean velocity and RMS velocity fluctuation with 95% confidence intervals (CI) are compared against a RANS CFD simulation in figure 19.
A RANS CFD simulation was generated to compare with the FLEET dataset to set up the framework for future CFD validation efforts.A CFD unstructured, finite-volume flow solver using the Spalart-Alarmas turbulence model was used to generate the simulation.A uniform inflow condition was provided to the test section, which neglected the transition to turbulence provided by the trips.The wall condition at the test article was simulated as adiabatic, which was in line with previous simulations of the ACE tunnel.This simulation serves to provide a fully-developed turbulent boundary layer with which to compare the FLEET measurements of the actual flow field.The generation of the model and the reasoning behind these assumptions are detailed by Pehrson et al [23].
The wall-normal FLEET configuration recorded the minimum mean velocity to be 48 m s −1 at a height of 12 micrometers.Camera and tunnel oscillations during the run caused a vertical uncertainty in the measurement location of 0.1 mm.The wall-normal FLEET configuration resulted in a freestream mean velocity of 870.5 ± 11.4 m s −1 , which was within 1% relative error of both the freestream measured from the CFD simulation and the ACE DAQ.The mean velocity below 7 mm was substantially lower than the mean velocity predicted by the CFD simulation.The lower velocity was found to be consistent throughout the duration of the tunnel run, and thus was not an artifact of tunnel startup or shutdown transients.After testing, a forward-facing step present between the baseplate and the beam-port insert was measured to have a height of 200 µm.This step may have caused a Mach wave that intersected the tagged FLEET molecules.A second theory considered, but ultimately dismissed, was that a counterrotating vortex pair maintained coherence to the measurement location 137.5 mm behind the tripping array where it was captured in the mean velocity profile.Later measurement performed on flat plate models in the ACE tunnel using wallnormal FLEET showed no evidence of a counter-rotating vortex pair at similar downstream locations from the same tripping array [23,24].
The wall-normal FLEET profile of the RMS velocity fluctuation is shown in its entirety in figure 19(red).The measured fluctuation below 0.75 mm above the surface of the test article dramatically increased which was attributed to overlapping emissions from the initial and displaced gates.The fitting algorithm was capable of fitting and suppressing timeaveraged emissions, but introduced additional random error in centroid fitting near the wall.The measured fluctuation above 8 mm was erroneous, as the freestream tunnel fluctuations are on the order of 1% [25,27,31].The increase in measured fluctuation was because the FLEET beam defocused further from the wall, and the emissions decayed faster at freestream conditions, both resulting in a much lower SNR.It was observed that the model for random error lost applicability for SNR below 3.5 because imprecision did not follow the single-exponential model.Between the heights of 0.75 and 8 mm, where the RMS velocity was thought to be reliable, a local maximum was observed in the RMS velocity fluctuation velocity around a height of 1.5 mm.
Summary and conclusions
In conclusion, FLEET velocimetry was conducted for the first documented time in the Texas A&M University ACE tunnel.FLEET was shown to provide accurate results for mean velocity and RMS velocity fluctuation in a complex flow field in a tripped boundary layer in ACE despite the working fluid of air at low density.Tunnel optical access permitted 20 spanwise testing locations.A preliminary wall-normal diagnostic was also performed.Image resolution is sufficient to capture the flow phenomena behind a series of turbulence-inducing trips.Superimposed IRO burst gates were required on a single camera image because the total FLEET signal lifetime (2.5-3.5 µs) was shorter than the minimum inter-frame delay of the imaging system.
Spanwise diagnostics were limited by beam width at the sides of the test article so all spanwise diagnostics were at least 1.5 mm above the plate.The wall-normal diagnostic orientation permitted diagnostics within just 10's of µm above the surface of the test article.The working fluid of air at low pressure in the ACE tunnel resulted in low FLEET signal with a short lifetime.An image intensifier was used to superimpose two gates in each image.In the wall-normal orientation, the superimposed emissions overlapped below 0.75 mm from the surface due to low velocity.These overlapping emissions dramatically increased random error in measurements made in this region that could not be accounted for in error-modeling.Wall-normal diagnostics were complicated by thermal expansion during tunnel preheat and by tunnel vibrations.The random error modeling used to improve the RMS velocity fluctuation velocity appeared successful by comparing results in the freestream and against VENOM boundary layer profiles.
The transition point in the boundary layer behind the set of turbulence-inducing trips was identified.Wake effects dominate up to the 53 mm downstream location, while spanwiseconsistent flow was observed after the 130 mm downstream location.The point at which the flow begins to transition to turbulence behind the tripping array is believed to lie between the 53 mm and 130 mm locations.No significant difference was noted in results taken downstream of the 130 mm location, so no precise determination is given for the downstream point at which the flow is fully turbulent.A strict minimum beam height limits data captured near the model's surface in the spanwise measurement campaign, so only the outer layer of the boundary layer is observed.Wall-normal FLEET is performed 138 mm downstream of the tripping array which shows an anomalous mean velocity profile that is hypothesized to be caused by a shock originating from a 200 µm forward-facing step on the surface of the model.Wall-normal diagnostics provided mean and RMS velocity fluctuation results into the viscous sub-layer, but only the results for mean velocity are believed accurate below 0.75 mm.
Hypersonic FLEET diagnostics will remain relevant in the future for velocimetry in complex unseeded flow fields.The spatial resolution, low uncertainties, and relatively simple implementation of FLEET secures its place as an essential tool for velocimetry.The work presented here serves to prove FLEET as a viable tool to measure mean and RMS velocity fluctuation in hypersonic tunnels with a working fluid of air at low pressure.This work also expands on the wallnormal diagnostic technique to make near-surface measurements without significant light scattering.The random error modeling employed for diagnostics in the ACE tunnel can be applied to other tunnels, especially to improve measurements of RMS velocity fluctuation.In future tests, FLEET calibration could be performed by taking measurement in the freestream of a hypersonic tunnel to quantify measurement uncertainty without the need for synthetic data.The error modeling and measurement uncertainty quantification represent essential steps towards industry-adoption of FLEET velocimetry.
Figure 1 .
Figure 1.Schematic of ACE tunnel flow path.
Figure 2 .
Figure 2. CAD models of test articles with measurement axes.(a) 2.75 • half-angle wedge test article.(b) Flat plate test article.
Figure 6 .
Figure 6.Single-shot (left) and 1000-image time-averaged (right) FLEET emissions for the wall-normal measurement campaign.
Figure 7 .
Figure 7. Single row of spanwise data taken at the center of the span from figure 5. Equation (1) applied to the data to obtain fit.(a) Time-averaged data and (b) single-shot data.
Figure 8 .
Figure 8. Single row of wall-normal data taken 5 mm above the surface in figure 6. Equation (1) applied to the data to obtain fit.(a) Time-averaged data and (b) single-shot data.
Figure 9 .
Figure 9. Physical velocity and random error in measurements.Variable notation adopted from equation (13), although the error in the mean velocity was neglected from this visualization for simplicity.(a) Probability distribution of the physical velocity around a central mean velocity.(b) Probability distribution of the expected non-physical velocity caused by random measurement error.
Figure 10 .
Figure 10.Synthetic images that replicate figure 5. (a) Ideal synthetic image.(b) Single-shot synthetic image with noise applied.
Figure 11 .
Figure 11.Single gaussian fit to find the systematic error in mean velocity.Variable notation taken from equation (12).
Figure 12 .
Figure 12.Calibrating the mean velocity for FLEET measurement taken above the boundary layer in the spanwise orientation.(a) Spanwise-average of the calculated decay constant.(b) The measured and corrected mean velocity.
Figure 13 .
Figure 13.RMS velocity fluctuation calibration for one measurement.Calibration performed as a function of SNR and subsequently visualized across the measurement span.Data taken from a measurement in the upper boundary layer.(a) Comparison of the measured RMS velocity fluctuation with the synthetic RMS velocity fluctuation.Equation (15) used to calibrate RMS velocity fluctuation to account for random error.(b) SNR over the span.(c) Measured and corrected RMS velocity fluctuation over the span.
Figure 16 .
Figure 16.Velocimetry at 5 downstream locations at 3 mm above the test article.(a) All diagnostic locations.(b) Mean velocity.(c) RMS of velocity fluctuations.
Figure 19 .
Figure 19.Perpendicular boundary layer profile 137.5 mm behind the trip array.
Table 1 .
ACE tunnel run conditions.
Table 2 .
Image intensifier settings from each test configuration.Bracketed values represent a sequential list of gates and delays.
Table 4 .
Comparison between FLEET, VENOM, and the ACE DAQ.Spatially interpolated values in the boundary layer indicated with an asterisk. | 11,950.4 | 2023-08-31T00:00:00.000 | [
"Engineering",
"Physics"
] |
Maize migration mitigates the negative impact of climate change on China’s maize yield
Crop migration as an adaptation to modulate climate change’s impact on crop yields presents both benefits and risks. We explored how maize migration in China modulates yield responses to climate change and quantified the potential economic benefits of maize migration as an adaptation strategy. We employed a panel data model to identify and measure the factors driving the relocation of maize area, linear regression to quantify the effects of maize migration on climate exposure and irrigated area, and an econometric model to estimate the effects of maize migration on yield. The results show that rise in temperature has a significant negative effect on maize area and that precipitation has a significant positive effect. The migration of maize area is driven by socio-economic factors including agricultural gross domestic product, power of farming machines, and fertilizer input. Moreover, expanded irrigation reduces the adverse effects of high temperatures on maize yield, thereby influencing adaptive crop migrations. The beneficial effects of maize migration are primarily achieved by reducing the adverse effects of extreme heat and strengthening the positive effects of irrigation. However, the extent of this adaptation is jointly affected by agricultural policies, irrigation infrastructure, and economic factors. Current market-oriented agricultural policies may be effective in guiding spatial shifts in maize distribution to align with climate-driven changes, potentially decreasing the vulnerability of China’s maize yield to the impact of climate change. China’s food security policies need to consider climate-driven spatial shifts in crop cultivation and enhance food subsidy policies to highlight the benefits of investment in climate change adaptation, such as adjusting cropping acreage and irrigation to farmers in North China.
Introduction
In China, maize is one of the most important staple crops, accounting for 39.1% of the total grain production and significantly contributing to the country's food security (Han et al 2022, Peng et al 2023).Nevertheless, the expected impact of climate change is likely to exacerbate food security challenges, harming maize productivity (Zhang et al 2017, Hou et al 2021, Pickson et al 2022).Therefore, it is imperative to understand how adaptation measures moderate the impacts on maize production for designing and implementing targeted policies to reduce climate change risks and facilitate adaptation.
Extreme temperatures due to climate change are predicted to reduce average yields for several major crops (Rezaei et al 2023).However, these impacts vary across space, with some cold areas acquiring benefits from increase in moderate temperatures and some hot areas suffering harms from increase in extreme temperatures (Rising and Devineni 2020).Changes in productivity caused by climate change drive farmers to substitute crops and move to new areas (Cui 2020).Thus, farmers can adapt to climate change by spatially adjusting their cropping patterns (Costinot et al 2016).Sloat et al (2020) showed that crop migration mediates rising temperatures in crops' growing seasons.This implies that empirical studies that ignored crop migration adaptation might have overestimated the impact of adverse climate change on crop yields.However, the beneficial effects of dynamic crop migration as an adaptation to modulate the impact of climate change on crop yields remain unquantified.
This study explores how maize migration in China modulates maize yields' response to climate change and quantifies the potential production benefits of maize migration.China's maize acreage showed a year-on-year increase of 0.37 million ha from 1949 to 2022.Since 2000, maize cultivation in northern China has expanded rapidly, with the acreage under cultivation expanding by 42% (NBSC 2022).The geographical centroid of maize production has distinctly shifted towards the northeast at a rate of 15.65 km yr −1 between 2000 and 2015, with the most significant distance being 259.10 km (Fan et al 2018).
This study makes the following contributions: first, we consider the adaptation of crop spatial distribution adjustments when assessing the impact of climate change on crop yields.Previous studies evaluated the impact of climate change on crop yields, assuming that crop spatial distribution remained unchanged, but ignored farmers' adaptation to climate change by altering crop spatial distributions (Schlenker and Robert 2009, Chen et al 2016, Zhang et al 2017).Although some studies have already examined the impact of climate change on the spatial distribution of crops (Zhao et al 2016, Ewert et al 2015, Cui 2020), there has been little research exploring how changes in crop spatial distribution impact crop yield response to climate change (Leng and Huang 2017).
Second, in addition to climatic factors, socioeconomic factors such as agricultural policies, urbanization, irrigation infrastructure, and production techniques play a significant role in crop migration (Hu et al 2019b).These factors directly affect the profitability of maize cultivation, thereby indirectly affecting farmers' decisions on adjusting maize area (Cui 2020).However, few studies have distinguished between climate factors and socioeconomic factors regarding their respective impacts on the spatial distribution of maize in China.
Our analysis comprises the following steps: (1) identifying the extent to which each of the driving factors affects crop migration; (2) quantifying the effects of maize migration on extreme heat exposure and the irrigation ratio; (3) assessing the role of maize migration in modulating the impact of climate change on maize production.
Study area
Maize is planted throughout China, with regional differences in varieties and growing season.Five major maize production regions-Northeast, Northwest, Southwest, South, and North-were included in the study area (supplementary information 1 figure S1).Maize is categorized by season (Chen et al 2016).Spring maize, typically planted in April and harvested in late September, is cultivated in the mountainous regions of Northeast, Northwest, and Southwest China.Summer maize is grown in June, has a slightly shorter growing season than spring maize, and is primarily produced in the North China Plain.Autumn maize is primarily planted in the mountainous regions of Southwest China (Liu 1993).
Data
The high-resolution maize distribution data for the years 2000-2019 were provided by the National Ecological Science Data Centre (https://cstr.cn/31253.11.sciencedb.08490;Luo et al 2020).The data were drawn from GLASS LAI remote sensing images and were provided on a 1 km × 1 km resolution grid.The 500 m irrigation cropland distribution data for the years 2000-2019 were obtained from Zhang et al (2022).We upscaled these data to a 10 km × 10 km resolution grid for consistency with the weather data.The grid data on the spatial distribution of maize and irrigation are displayed in the supplementary information (figure S2).The accuracy of the gridded data in this study was further validated based on county-level and the provincial statistical data.The county-level maize planting area data were obtained from the Chinese Academy of Agricultural Sciences, which has been widely used in relevant research on agricultural production in China (Chen et al 2016, Wang et al 2024).The provincial statistical data were obtained from the National Bureau of Statistics (www.stats.gov.cn).The results showed that the gridded areas data are highly aligned with these two data sources (supplementary information figure S3).
Weather data were obtained from the China Meteorological Data Service Centre (http://data.cma.cn/en).The center has been publishing the daily weather data of over 800 weather stations across China since 1951.We employed the kriging spatial prediction technique to interpolate daily weather data from each meteorological station onto a 10 × 10 km grid, aligning it with the spatial distribution of maize (Cressie and Wikle 2015).
The historical data of each county's agricultural input data and each provinces' maize yield between 2000 and 2019 were obtained from the Chinese Academy of Agricultural Sciences and a series of statistical books by the Provincial Bureaus of Statistics.
Maize area model
We used a panel data model to measure the factors driving the relocation of maize area (table 1).With the log-transformed factors, the regression equation is as follows: where ln is the natural log, i is the index of the 10 × 10 km grid, and t is the annual index.area it refers to the maize area for grid i in year t.C it represents climate variables, including temperature (TEM) and total precipitation (TSP) during the maize growing season.Referring to Cui (2020), temperature and precipitation are the most critical climatic variables affecting the suitable crop planting area.E denotes the socio-economic variables, including the proportion of the county's agricultural gross domestic product (Agdp), per ha Labor (Labr), per ha machinery (Mach), irrigation area (Irrg), cropland area (Crland), and Urbanisation level (Urban) (see table S1 for more details about these variables).These socio-economic variables were also widely adopted in similar studies (Li et al 2015, Hu et al 2019a, Fan et al 2020, Qian et al 2022).δ i represents grid fixed effects to control for time-invariant grid-specific characteristics such as geographic location and soil quality.θ t represents year fixed effect to control for technological progress, policy change, and maize price fluctuation (Wang et al 2022).ε it is the error term.α m is the estimated coefficient for climate factors, indicating that when temperature and precipitation change by 1%, the planting area of maize will change by α%.β n is the estimated coefficient for socio-economic variables, indicating that when socio-economic variables change by 1%, the planting area of maize will change by β%.
As maize policies altered farmers' marginal returns to maize cultivation, these policies may have influenced the extent to which farmers respond to climate change (Roberts and Schlenker 2013).China started a nationwide maize stockpiling program in 2007.A key feature of this policy is that the government collects maize from farmers at minimum support prices (Huang and Yang 2017).In 2016, China government implemented the direct payment maize subsidy policy tied to planting acres.Under the new policy, maize price is determined by the market conditions (Wu and Zhang 2016).
We proposed a causal framework to examine whether maize policies intensify or weaken the impacts of temperature on maize area.Given the relatively minor impact of precipitation change on maize area, we focused on how maize policies affect temperature-driven changes in maize area.We constructed a model incorporating the interaction between project implementation and temperature.Conceptually, we used the interaction model to estimate the impact of maize policies on the relationship between temperature and maize area.The interaction model is expressed as follows: where 1).γ 1 represents the impact of policy implementation on maize acreage.γ 2 represents the impact of maize policies on the sensitivity of maize acreage to temperature.
Climate impact isolation
To separate the impact of climatic and socioeconomic factors on the migration of maize area, we followed Li et al (2015) and predicted maize area using the real-world socio-economic factors, holding the climatic factors constant at the 2000 value: where HA C i,t is the exponential conversion of the predicted maize area from equation (1).A i,2000 and C i,2000 represent the actual maize area and the climate variables, respectively, in 2000 for grid i. C i,t denotes the climate variables in t for grid i. α is the coefficient estimated in equation (1).
Maize migration impact on climate exposure and irrigated area experienced by maize
Linear regressions of growing season temperature and precipitation over time were analyzed at the grid cell level.We used per gridded maize area as the regression weight.This approach ensures that grid cells with a larger proportion of land dedicated to cultivating maize is given greater weights, effectively capturing the entire temperature and precipitation distribution experienced by maize (Sloat et al 2020): where C it represents climate variables, including temperature (TEM) and total precipitation (TSP) during the maize growing season; t represents the year; ε is the error term; weights represent regression weights, where dynamic maize area from 2000 to 2019 is used.β 0 indicates the linear trend of temperature and precipitation experienced by maize.
We also considered a counterfactual scenario as regression weights-static maize grid areas in 2000: where C it , t and ε are the same as in equation ( 4).Weights represent regression weights, where static maize area in 2000 is used.β c indicates the linear time trend of temperature and precipitation experienced by maize if it remains at the distribution observed in 2000.
We also used quantile regression to estimate equations ( 5) and ( 6).This analysis was conducted at the 95% quantile of temperature, corresponding to 28 • C. Temperatures above 28 • C are defined as extreme high temperatures, as this temperature represents a threshold for maize growth (Wang et al 2024).β indicates the time trends in the warm bound (95th percentile) of growing season temperature for maize areas.
Maize yield model
We employed an econometric model to estimate the effects of maize migration on yield.Following Leng and Huang (2017), we used two gridded maize area weights to aggregate gridded climate data to the province-level: one using transient weights incorporating change in the maize's spatial distribution from 2000 to 2019 and the other based on a static maize grid map in 2000.The static maize grid map assumed that the spatial distribution of maize remained constant over the study period.The grid irrigation data was similarly aggregated.
The econometric specification of the maize yield function employed in this study follows that of Wu et al (2021).Thus, we examined the impact of climate change on maize yield production across 31 Chinese provinces using the specification below: where ln is the natural log and y it is the yield for province i in t.The tem it and pre it represent temperature and precipitation, respectively, during the maize growing season.The irr it is the irrigated maize area.X it denotes the control variables including fertilizer and machinery inputs.The trend it indicates the time trend to control for technological progress.δ i represents province fixed effects to control for timeinvariant characteristics such as geographic location and soil quality.
We employed two aggregated province-level climate datasets to fit the econometric models: the transient maize maps from 2000 to 2019 and the static maize map in 2000.Considering group-wise heteroscedasticity, cross-sectional correlation, and autocorrelation within the panels, we used a feasible generalized least square to estimate the model (Wu et al 2021).Statistical tests can be found in the supplementary information.
Climatic factors promote the migration of maize to the northeast
Regression results from the maize area model indicate that growing season temperature has the strongest negative effect on maize area, with each 1% increase in growing season temperature leading to a 2.533% decrease in maize area (table 1).Growing season precipitation has a slight positive effect on maize area.Expanding irrigated areas weakens the sensitivity of maize growth to precipitation (Zhou and Turvey 2014, Kang et al 2017).
Machinery and irrigation area are positively correlated with maize area.Agricultural machinery, as a substitute for farm labor, helps to improve crop productivity (Ma et al 2022).Irrigation compensates for rainfall deficiencies in Northern China.Urbanization has a negative effect on maize area (every 1% increase in urbanization decreases maize area by approximately 0.09%).
Figure 1 shows that the geographical centroid of maize area was found to have a distinct migration towards the northeast at the longest distance of 256 km between 2000 and 2019.The effects of climatic factors on the maize centroid's migration indicates a distinct migration towards the northeast.
Table 2 shows that how the temperature impacts on maize area depend on the treatment assigned by agricultural policies.Market-based policies affect farmers' responses to climate change to the greatest extent, increasing the negative effect of temperature on maize area by about 0.72%; while stockpiling program increases the negative effect of temperature on maize area by only 0.21%.This shows that current market-oriented agricultural policies may be effective in guiding spatial shifts in maize distribution to align with climate-driven changes.We added five types of sensitivity checks to ensure the robustness of the estimated climate effects on maize area, including controlling for maize revenue, production costs and lagged terms of precipitation, winsorizing data, controlling for province-by-year fixed effects, and considering spatial correlation in the error term (see SI 7).As shown in table S7, the estimated climate effects are consistent across specifications, indicating that the results are robust.
We used socio-economic data from farmer surveys, including fertilizers, seeds, pesticides and agricultural machinery power cost.The results indicate that the estimated coefficients are still robust (see supplementary table S8).
Climate-driven maize migration reduces exposure to extreme temperatures and strengthens the irrigation ratio
The negative effect of global warming on maize yields is moderated by spatial shifts in the maize production area.Figures 2(a) and (b) show the trends in crop-specific growing season temperatures from 2000 to 2019 (solid red lines).The growing season temperature experienced by maize is significantly lower than the counterfactual days.Extreme temperature (28 • C), a key factor damaging crop yields, is also significantly lower than in the counterfactual scenario.Figure 3 illustrates that average temperature increases significantly faster during the maize growing season in southern China than in the north.Therefore, the temperature exposure of maize decreases as maize production shifts from the south to the north.
Figure 2(c) illustrates the changes in water resources during the maize growing season.The precipitation trends experienced by maize acreage exhibits a decreasing trend (figure 2(c)) and is less than the counterfactual.Increased drought hazards associated with climate change and increased exposure of maize to droughts owing to production area expansion in the northern region primarily account for these changes.
Figure 2(d) shows that the ratio of irrigated maize expanded by 12% compared to the counterfactual scenario.This indicates that irrigation expansion is an important driver of rain-fed crop migration.In 1996, the Central Rural Work Conference of the Chinese government launched irrigation construction projects in 300 key counties, which contributed to a 7% increase in irrigation coverage (Wang et al 2024).
Climate-driven maize migration reduces production losses from climate change
Table 3 presents the sensitivity of maize yield to each climate variable using two econometric models based on maize area map weights.We found that if changes in the spatial distribution pattern of maize are not considered, the negative effect of growing season temperature would be overestimated, whereas the positive effect of the irrigation ratio would be underestimated.When considering crop migration, a 1% increase in temperature and expansion in irrigation result in a 0.398% decrease and a 0.11% increase in maize yield, respectively.In the absence of crop migration, a 1% rise in temperature causes a more severe 0.533% reduction in yield, while an increase in irrigation area by 1% elevates the yield by 0.7%.The no difference in the estimated elasticity of precipitation between the two models may be attributed to the compensation of irrigation for the decline in precipitation.
Figure 4 depicts the impact of climate change on maize production in China.At a national level, crop migration had a mitigation effect of 13.49 million tons and transformed an expected loss of 42.13 million tons with no adaptation to a loss of 28.64 million tons from 2000 to 2019.The avoided loss was equivalent to 15% of the China's total maize production in 2000 (NBSC 2022).
Discussion
Our results indicate that the maize production system becomes less vulnerable to climate change as the northward movement of maize area mitigates growing season temperature and strengthens the positive impact of irrigation.Furthermore, the geographical centroid of maize area was found to have a distinct migration towards the northeast at the longest distance of 256 km between 2000 and 2019.North China achieves a higher grain growth mainly due to a rapid increase in maize production.The development of animal husbandry, the impact of international soybean trade, and the drive of economic benefits have promoted the expansion of maize planting scale.
Second, our findings reveal that agricultural machinery has a significant positive impact on maize 2017).In 2016, the government revised the maize stockpiling policy into 'market-oriented purchase' and 'subsidy' , aiming to give play to the role of maize market price formation mechanism (Wu and Zhang 2016).These two policies create different incentives for farmers to adjust the cultivated area and spatial distribution of maize.Specifically, China's stockpiling program prevents domestic prices from soaring, potentially discouraging climate-driven expansion in maize acreage (Cui and Zhong 2024).The marketoriented policy allows Chinese farmers to independently make decisions about what to grow and how to manage agricultural production based on their knowledge of climatic conditions and market demand.Large-scale farmers in North China are more active in expanding their planting acreage in virtue of abundant arable land and favorable climate (Wang et al 2018).
Our study has several limitations.Firstly, our empirical models capture only current adaptation practices in response to within-year temperature shocks.Secondly, province-level data on yield cannot capture the heterogeneity in maize yield within a province.Our results reflect the role of maize migration in mitigating the adverse effects of climate change at the national level.However, we cannot identify the role of maize migration in mitigating the negative effects of climate change on maize production in China's major maize-producing provinces, such as Heilongjiang and Henan.Future studies need to incorporate finer-scale crop yield datasets and consider the role of maize migration within specific provinces.
Conclusion
Our findings indicate that rise in temperature has a significant negative effect on maize area and that precipitation has a significant positive effect.Socioeconomic factors dominating the migration of maize area include agricultural gross domestic product, power of farming machines, and fertilizer input.From 2000 to 2019, the migration of maize production area mitigated 13.49 million tons in climate change damage to maize production, equivalent to 15% of the total maize production in 2000.Current market-oriented agricultural policies may be effective in guiding spatial shifts in maize distribution to align with climate-driven changes, potentially decreasing the vulnerability of China's maize yield to climate change.
Figure 1 .
Figure 1.Predicted shift in maize area centroids in China, which have a ln(y)-ln(x) relationship.Note: Blue dot: actual migration of maize area; red dot: actual migration of maize area by climate change.
Figure 2 .
Figure 2. Trends in growing season temperature, warming bound (>28 • C), precipitation, and irrigation ratio over time.Red lines indicate observed temperature trends that are influenced by changes in crop area and climate, whereas green dashed lines represent a counterfactual scenario in which maize areas remain static at the 2000 distribution level.
Figure 3 .
Figure 3. Spatial differences in temperature trends from 1980 to 2019.Note: Trends are calculated using linear regression.
Table 1 .
Estimated maize area function.
Table 2 .
The impact of agricultural policies on temperature-maize area relationship.
Table 3 .
The sensitivity of maize yields to the climate variables.Dependent variable = ln (maize yield).Migration model uses a transient weight incorporating change in the maize's spatial distribution from 2000 to 2019; the counterfactual model uses a static maize grid map in 2000.* p < 0.05; * * p < 0.01; * * * p < 0.001. Notes: Wu Q and Zhang W 2016 Of maize and markets: China's new maize policy agricultural policy review CARD Agric.Policy Rev. 3 7-9 (available at: www.card.iastate.edu/ag_policy_review/article/?a=59) Zaveri E and Lobell D 2019 The role of irrigation in changing wheat yields and heat sensitivity in India Nat.Commun.10 4144 Zhang C, Dong J and Ge Q 2022 Mapping 20 years of irrigated croplands in China using MODIS and statistics and existing irrigation products Sci.Data 9 407 Zhang P, Zhang J and Chen M 2017 Economic impacts of climate change on agriculture: the importance of additional climatic variables other than temperature and precipitation J. Environ.Econ.Manage.83 8-31 Zhao J, Yang X, Liu Z, Lv S, Wang J and Dai S 2016 Variations in the potential climatic suitability distribution patterns and grain yields for spring maize in Northeast China under climate change Clim.Change 137 29-42 Zhou L and Turvey C G 2014 Climate change, adaptation and China's grain production China Econ.Rev. 28 72-89 | 5,412.4 | 2024-06-26T00:00:00.000 | [
"Environmental Science",
"Economics",
"Agricultural and Food Sciences"
] |
Research on Indoor Scene Classification Mechanism Based on Multiple Descriptors Fusion
+is study aims at the great limitations caused by the non-ROI (region of interest) information interference in traditional scene classification algorithms, including the changes of multiscale or various visual angles and the high similarity between classes and other factors. An effective indoor scene classification mechanism based on multiple descriptors fusion is proposed, which introduces the depth images to improve descriptor efficiency.+e greedy descriptor filter algorithm (GDFA) is proposed to obtain valuable descriptors, and the multiple descriptor combination method is also given to further improve descriptor performance. Performance analysis and simulation results show that multiple descriptors fusion not only can achieve higher classification accuracy than principal components analysis (PCA) in the condition with medium and large size of descriptors but also can improve the classification accuracy than the other existing algorithms effectively.
Introduction
With the rapid development of the Internet and the increasing demand for applications based on location awareness, location-based services are getting extensive attention. Most people cannot live without the location service and the navigation system based on GPS (Global Position System) in their daily life. Obviously, outdoor localization technology has been relatively mature, and many mobile devices also refer to outdoor location technology [1,2,3,4]. Due to the particularity of indoor environment, the GPS signal cannot directly meet the requirements of indoor localization service. At present, there are many indoor localization methods [4][5][6], mainly including WiFi, RFID, Bluetooth, Ultrawide band, and so on. Nowadays, the visual indoor localization system [7][8][9] is attracting more and more attentions of the researchers all over the world due to the advantages of low deployment cost, strong autonomy, and high localization accuracy.
A large visual database, namely, Visual Map, has occasionally been established at offline stage to achieve accurate indoor visual localization. Visual Map may contain a large number of images or image features of different scenes and corresponding location information, which is the foundation of visual indoor localization. When the user performs a location query online, the image will be retrieved in the Visual Map. Traditional image retrieval algorithms rely on pixel point matching [10,11], which can only give the results of image matching but does not contain the visual image location information. In addition, existing image retrieval algorithms often carry out global traversal search, which leads to excessive time overhead and is not conducive to real-time localization of mobile users. erefore, an effective indoor scene classification mechanism is proposed in this paper based on multiple descriptors fusion. e images in Visual Map will be classified according to the scenes, so as to reduce the time overhead of visual images retrieval at online stage and improve the efficiency and accuracy of indoor scene classification. In this paper, both the visual information and the depth information of an image are fused. e visual image mainly contains color information, and each point on the depth image corresponds to the visual image and contains position information. Both types of images are captured by Microsoft Kinect 2.0.
In the indoor scene classification mechanism, the initial descriptor set containing two kinds of image descriptors will be generated by the existing spatial pyramid model (SPM) [12,13]. en, the greedy descriptor filter algorithm (GDFA) will be proposed to find out the valuable descriptors. Multiple fusion descriptors will be generated by homologous and nonhomologous combination to further enhance the effectiveness of descriptors. Finally, support vector machine (SVM) will be adopted for classification. e overall framework of the indoor scene classification mechanism is shown in Figure 1. e remaining of the paper is arranged as follows: Section 2 reviews the research progress of scene classification techniques and their applications in indoor scenes. Section 3 describes the generation of the initial descriptor set and the descriptor filtering in detail. Section 4 introduces the experimental database of this paper and shows descriptor evaluation results. In Section 5, two combinations of homologous and nonhomologous will be realized and the combination results will be evaluated. Section 6 concludes the article.
Motivation
At the Scene Understanding Symposium held at MIT in 2006, an important point was clearly stated for the first time, namely, scene classification is a new promising research direction for image understanding. Although existing classification methods claim to be able to solve any scene classification problems [14,15], the experimental outcome shows that only the outdoor scene classification can be effectively solved by these methods, while the indoor scene classification problems may still be a challenging task. In addition, [16] shows that the classification accuracy of the indoor scene is far lower than that of the outdoor scene adopting the same feature extraction and classification recognition methods. erefore, it is important to improve the classification accuracy of the indoor scene.
In early studies, low-level features of images were usually extracted to classify scenes, such as color, texture, and shape [17][18][19]. However, these methods based on low-level features have not been a hot topic in the field of scene classification due to its unsatisfactory classification effect. In order to overcome such problems, the methods based on middlelevel features of image are proposed. e global feature Gist is adopted and improved in [20]. e good identification ability of scale invariant feature transform (SIFT) makes it always be adopted as the local features with the highest priority in many scene recognition algorithms [21]. Shi et al. [22] proposed an indoor scene classification algorithm based on the enhancement of visual sensitive area information. And local features and global features are integrated by the visual sensitive area information.
With the rise of Kinect, the scene classification algorithm based on depth information [24,25] has received more and more attention. e histogram of oriented gradient (HOG) algorithm [26] is adopted to classify depth images and visual images, respectively [28]. SIFT is adopted to extract features of depth images and color images, and SPM coding is adopted to classify images after feature fusion [29]. SIFT of visual images and speeded up robust features (SURF) [27] of depth images are fused to classify images [30]. Five deep core feature extraction algorithms are designed in [31] to extract the size, edge, and shape information of visual images, respectively, and the extracted information is fused for classification.
As research continues, the model based on the convolutional neural network (CNN) [16,23] has attracted the researchers. However, massive training sets are required in CNN, which may result in relatively long training time. In addition, CNN usually has high computing requirement on the platform, so it is difficult to realize indoor scene classification on the platform with limited computing resource.
Multiple Image Descriptor Generation and Filtering
Inspired by [28][29][30][31], visual information and depth information will be fused in this paper. e higher accuracy indoor scene classification effect will be achieved by the spatial 3D information contained in the depth image, which is insensitive to light and reflects the position relationship between objects. Features of the original images will be extracted by D-SIFT (Dense SIFT) [32], and similar features will be clustered to form BoW (Bag-of-Words) [33][34][35] by K-means [36,37]. Based on BoW, the initial descriptors set including visual image descriptors and depth image descriptors will be generated with the construction of SPM. It is true that the number of initial descriptors is large and the quality is uneven. In addition, combining directly with unfiltered initial descriptors will lead to an explosion of the combined results. erefore, a simple and effective descriptor filtering algorithm ought to be proposed to obtain those valuable descriptors.
Initial Descriptors Generation.
e descriptor generated expression could be derived from the following procedure. Let I be any input image and x be a descriptor generated by the image. L is a set of predefined class tags, and l is one of them. e function of generating descriptor x from image I can be expressed as g(I) � x, and the probability of successfully matching descriptor x to class tag l is Pl | x. erefore, the expression of the most appropriate class tag l will be l � arg max l∈L P(l | g(I)). (1) e key to the research will be turning the initial descriptors into valuable descriptors with high classification accuracy. In order to find such descriptors, equation (1) will be further optimized. On the premise of the best descriptor filtering and combination methods, a correct class label assigned to input image I will be l (l ≠ l) and X is adopted to express a set of multiple image descriptors. en, the optimized descriptor generation expression will be g(I) � arg max g(I)∈X P(l | g(I)). (2) According to equation (2), the initial descriptors generated by the input image can only get the desired classification effect through filtering and combination. Initial descriptors are large in number and poor in quality, while descriptor filtering can discard worthless descriptors and descriptor combination can improve the effectiveness of descriptors.
e descriptor generation process based on SPM will be described as follows.
Spatial Pyramid Model.
In recent years, the BoW model has been widely adopted in computer vision. It takes the image features as visual words and classifies images by counting the number of visual words in each image. However, the traditional BoW lacks the spatial position information [29]. In this research, SPM will be established to cut the image into scale cells, then the number of visual words will be counted in each cell and the histograms can be drawn. Finally, histogram features at all scales will be linked together to form an eigenvector. We assume that a part of visual words has been selected as basic features. e steps of descriptor generation based on SPM are described in detail as follows: (i) Extracting the D-SIFT feature. e SPM-based descriptor generation process is shown in Figures 2(a)-2(c), and each cutting type will be divided into three columns for clear explanation. As shown in Figure 2(a), the first column shows the cutting type of the initial image, the second column represents the statistical results of visual words for each cell, and the initial descriptors formed by connecting the second column histograms are shown in the third column. e image contains 5 visual words; three pyramid hierarchies; and vertical, horizontal, and grid, the three cutting methods.
e descriptors generation based on SPM mainly depends on three important parameters: BOW size (S), pyramid hierarchy (H), and cutting method (C). H � 0 represents the first hierarchy, and the image is cut 0 times. H � 1 represents the second hierarchy, and the image is cut 1 time; H � 2 represents the third hierarchy, and the image is cut 2 times. erefore, the number of cutting depends on H. In other words, when H � h, the image will be cut h times, and the number of cells generated after cutting is 2 HC . Finally, seven different descriptors are obtained in Figure 2, whose size increases exponentially with the number of H and C and has a linear relationship with dictionary size S. e calculation formula of descriptor size η is as follows: As we know, image descriptors contain semantic and spatial distribution information of the scene. S will determine the semantic meaning of descriptors, while H and C will focus on the spatial distribution of descriptors, ensuring that more detailed information can be provided. e larger S will provide more detailed semantic information, making features more obvious and more representative. However, if there are a lot of visual words, the histogram will become longer, which will affect the image retrieval and matching process, subsequently. Analogously, a higher pyramid hierarchy contains more detail, while a lower hierarchy is more general.
As can be seen from [12,13,38], the standard values of the three parameters are S � 20, 50, and 100; H � 0, 1, and 2; and C � 1 (horizontal and vertical segmentation) and 2 (grid segmentation), respectively. 21 different visual image descriptors and 21 depth descriptors can be obtained by combining these standard values. e reason why the number of descriptors is 21 instead of 27 (3 3 ) is that H � 0 in the pyramid model does not cut the image, with no demands for combination indeed. In other words, for any S, the first pyramid hierarchy will deal with only one descriptor, while the second and third pyramid hierarchies will deal with three descriptors.
Descriptors Filtering.
In this section, the greedy descriptor filter algorithm (GDFA) will be proposed to find the most valuable descriptors in the initial descriptor set. Since η of the initial descriptors mainly gathered in (0, 400] (as shown in Figure 3), η is divided into three continuous intervals (0, 150], [150,350],and[350, ∞) for the convenience of descriptor filtering. We assume that large, medium, and small intervals are suitable for our data-gathering platform with small, medium, and high computing power configurations, respectively. e descriptor weight α is related to the descriptor classification accuracy ζ and descriptor size η . In order to obtain descriptors with smaller size and higher accuracy, the calculation formula of the weight α could be defined as follows: e greedy descriptor filtering algorithm (GDFA) flow is given in Algorithm 1.
At first, the weight of all descriptors is calculated according to equation (4). Next, the descriptor size is divided into (0, 150], [150,350], and [350, ∞) three continuous intervals, and then the descriptors are sorted in order of weight values from the largest to the smallest. e descriptor with the largest weight in N i is filtered and added to the first position in F. If the descriptor weight is greater than 95% of the weight of the previous selected descriptor, that is, (α i > 0.95α i−1 ), the descriptor is filtered out; otherwise, the next descriptor will be compared. GDFA not only could find out the most valuable descriptors in each interval, but also could filter out descriptors with similar weights.
Experimental Database.
In order to study the indoor scene classification mechanism, as shown in Figure 4(a), the indoor image data gathering platform with Microsoft Kinect 2.0, independently developed by the laboratory, will be adopted to carry out image data gathering in the Heilongjiang University physical laboratory building. e database contains visual and depth images captured in 9 indoor scenes under different lighting conditions. To cite some examples, Figure 4(b) shows part of the database images.
e database images will be randomly divided into 5 sequences, namely, Training 1, 2, and 3 and Test 1 and 2. e image number for 9 scenes in 5 sequences is listed in Table 1.
Evaluation
Results and Analysis. K-fold cross-validation could be a common accuracy test method, which can effectively avoid over-learning and under-learning. 10-CV (10-fold cross-validation) will be adopted to evaluate the classifier model in this section. To ensure that each cross-validation image is similar, a subset of 30 consecutive images will be randomly assigned to Fold1-Fold10 (represents 10 subsets of the 30 images), which effectively prevented any deviation caused by the time continuity in the data set. Figure 5 shows Mobile Information Systems the distribution of each scene in the data set in each fold of 10-CV and global distribution. It is worth noting that scenes in the data set are not evenly distributed in Fold1-Fold10. Table 2 shows the classification accuracy of initial descriptors of 42 visual image descriptors and depth image descriptors after 10 times of cross-validation. In SPM, when H � 0, for any kind of segmentation type, there is no image cutting and the generated descriptors are identical, so the evaluation results are identical too. By comparing the results of visual images and depth images, we can find that the classification accuracy of depth images is significantly lower than that of visual images. e reason may be that the visual coding technology (visual coding is the mapping between data and visual results) of the depth image is not accurate enough to obtain fine-grained data.
GDFA can find the valuable descriptors from the initial descriptor set, which will facilitate the descriptor combination work in Section 5. Table 3 shows the internal parameters and classification accuracy of the 4 visual image descriptors and 7 depth image descriptors filtered by GDFA, analogously, and the evaluation data are from 10-CV. In other words, the 42 initial descriptors given in Table 2 (9) Filter the descriptor N i [1] with the largest weight in N i and add N i [1] to Φ i (10) if N i [j − 1] is filtered and α[j] > 0.95 * α[j − 1] then (11) Add N i [j] to α i (12) else (13) end (14) end (15) end Output: filtered descriptor list---α ALGORITHM 1: Greedy descriptor filtering algorithm. 6 Mobile Information Systems reduction with PCA can preserve the most important features in high-dimensional data and remove noise and worthless features, which could improve data quality and data processing speed. Figure 3 shows the comparison between the filtering result of GDFA and the dimensional reduction result of PCA (the solid point in Figure 3 Mobile Information Systems
Descriptor Combination
e most valuable descriptors have been selected by GDFA in Section 4. In order to further obtain the high-quality and highly efficient final descriptor, this section will propose a multiple descriptor combination algorithm (this section only combines two descriptors) although this step might increase the running time of scene classification. ere will be two descriptor combination levels, as shown in Figure 6.
One is the descriptor level (DL), which can be input to SVM1 after the descriptors of Image1 and Image2 have been connected into one combination descriptor, as shown in Figure 6(a). e other one is the classifier level (CL), which weights the different response results after Image1 and Image2 have been input to SVM1 and SVM2 separately, as shown in Figure 6(b). Also, this section will discuss homologous combinations (V + V or D + D) and nonhomologous combinations (V + D). Table 1: e number of images of 9 scenes in 5 sequences.
Homologous Combinations.
is section will combine two descriptors extracted from the same image type, namely, V + V or D + D, which are called homologous combination. e combination will be carried out at DL and CL, respectively. e test set of SVM could have been composed of two groups of sequences with obvious light differences, Test 1 and Test 2, respectively.
V + V.
ere are 6 different combinations of the 4 depth image descriptors V1, V2, V3, and V4 given in Table 3, which will be applied to DL and CL, respectively. e classification accuracy obtained in Test 1 and Test 2 is shown in Figures 7(a) and 7(b), respectively.
D + D.
ere are 21 different combinations of the 7 depth image descriptors D1, D2, D3, . . ., D7 given in Table 3, which will be applied to DL and CL, respectively. e classification accuracy obtained in Test 1 and Test 2 is shown in Figures 8(a) and 8(b), respectively. Comparing Figure 7 with Figure 8, we find that the classification accuracy of D + D is generally lower than V + V. e highest classification accuracy in Test 1 and Test 2 achieved by the best depth image descriptor D7 is 48.79% and 65.45%, respectively (while the highest classification accuracy in Test 1 and Test 2 achieved by the best visual image descriptor V4 is 74.76% and 85.78%, respectively). When the best initial descriptor D7 acts as the parent descriptor, the highest classification accuracy of DL is 56.07% in Test 1, while it is 71.86% in Test 2. Apparently, the classification accuracy in Test 2 is still higher than that in Test 1 in D + D.
Similar to V + V, DL always outperforms CL in D + D. e classification accuracy of combination descriptors in DL is always higher than the parents' descriptors (39 out of 42), while only a few combination descriptors have higher classification accuracy than parents' descriptors in the CL (16 out of 42). e internal parameters of D7 are S � 100, H � 2, and C � horizontal. D5+D7 (56.07%) achieves a favorable effect, and the internal parameters of D5 are S � 100, H � 1, and C � horizontal. D2+D7 (71.86%) also achieves a favorable effect, and the internal parameters of D2 are S � 50, H � 2, and C � horizontal. e similarity of the optimal combination is C � horizontal, which is verified in Section 4. In addition, the internal parameters of V4 and D7 are S � 100, H � 2, and C � horizontal. So, we can speculate that high classification accuracy could be obtained by descriptors with such a group of internal parameters, which will be verified in Section 6.
Nonhomologous Combinations.
is section will combine two descriptors extracted from different image types, namely, V + D, which is called as nonhomologous combination. ere are 28 different combinations of V1, V2, V3, and V4 and D1, D2, D3, . . ., D7 in Table 3, which will be applied to DL and CL, respectively. e specific evaluation process is the same as homologous combination, and the evaluation results are shown in Figure 9.
In Test 2, the highest classification accuracy of CL and DL reaches 80.36% and 92.64%, respectively, while in Test 1, it reaches 72.84% and 81.76%. is is consistent with what we found before, the classification accuracy of Test 2 is always higher than Test 1, and DL always outperforms CL. In CL, the combination with the highest classification accuracy is D5+V4 (72.84%) in Test 1. In the meantime, the classification accuracy of V4, which acts a parent descriptor, is 74.76%. e combination with the highest classification accuracy is D7 + V4 (80.36%) in Test 2. e classification accuracy of V4, which acts as a parent descriptor, is 85.78%. As shown in Figures 9(a) and 9(b), only a few combination descriptors have higher classification accuracy than parent descriptors in the CL (18 out of 56), the same as in homologous combinations. It shows that the result of CL is not satisfactory.
In DL, the combination with the highest classification accuracy is D7 + V4 (81.76%) in Test 1. In the meantime, the classification accuracy of V4, which acts as a parent descriptor, is 74.76%. e combination with the highest classification accuracy is D7 + V4 (92.64%) in Test 2. e classification accuracy of V4, which acts as a parent descriptor, is 85.78%. As shown in Figures 9(a) and 9(b), the classification accuracy of combination descriptors in DL is always higher than that in parents' descriptors (56 out of 56).
We can conclude that DL outperforms CL in nonhomologous combination because most combination descriptors in DL outperform their parent descriptor, while the combination descriptors in CL might be difficult to achieve. In addition, no matter in which level, the combinations of the descriptor with excellent performance and the descriptor with poor performance outperform other combinations. To cite some, D1+V4 precedes D1+V1, D1+ V2, and D1+V3 in Figure 9(b).
Combining Figures 7-9, we can conclude that the overall effect of V + V and D + V outperforms D + D. Sometimes V + V outperforms D + V although nonhomologous combinations contain more comprehensive information. DL combines descriptors before entering a classifier, which may preserve characteristics of the descriptors completely. is may be the reason why DL is always better than CL. So, we only compare the evaluation results of V + V and V + D in DL. Table 4 lists the best combinations of homologous and nonhomologous in DL, as well as the highest classification accuracy (bold data) obtained in Test 1 and Test 2. e best combination is V3 + V4 in Test 1, and the best combination is D2 + V4 in Test 2. We recall that the light variation in Test 1 is stronger than that in Test 2. So V + V can be the best in Test 1, while D + V can be the best in Test 2. As shown in Table 3, descriptor size has 8 possible values (including single descriptor or combination descriptor), respectively: 20, 40, 200, 220, 400, 420, 600, and 800. e maximum classification accuracy corresponding to each descriptor size value is compared with PCA results. Figure 10 shows the relationship between classification accuracy and descriptor size in Test 1 and Test 2. As we can see, the classification accuracy of the multiple descriptors fusion mechanism can be improved significantly with the descriptor size from small to middle. Also, the classification accuracy gradually tends to be stable with the descriptor size from middle to large. In Test 1, when descriptor size equals to 400 (large), V2 + V3 (80.94%) gets the highest classification accuracy. In Test 2, when descriptor size equals to 600 (large), D2 + V4 (92.64%) gets the highest classification accuracy. PCA achieves high classification accuracy in the condition with small descriptor size. e superiority of the multiple descriptors fusion mechanism becomes obvious with the increasing descriptor size.
Execution Time.
Indoor scene classification is divided into two stages: offline training and online testing. It is assumed that the construction of BoW and classifier training has been completed at the offline stage. erefore, what affects the running time of the online stage is the generation and classification of descriptors, including 4 steps, as shown in Table 5.
Step 2 is related to BoW size (S), so S � 20, 50, and 100 are studied, respectively. Step 3 depends on the size and number of image cells, which is related to pyramid hierarchy (H) and cutting method (C). Step 4 is determined by η.
Algorithm Analysis and Comparison.
Under the same database, the classification accuracy obtained by our mechanism will be compared with other fusion methods, as shown in Table 6. e classification accuracy obtained by the algorithms with single feature fusion [28][29][30] tends to be low for the indoor scene, largely because these algorithms do not filter descriptors. So it seems that the algorithm with single feature fusion is suitable for indoor scene classification. Higher classification accuracy is obtained by the algorithm with multiple features fusion [31], which extracted five different kernel descriptors from the images. After integration, they are trained and classified by Linear SVM, Kernel SVM, and Random Forest, respectively, and obtained 89.6%, 90.0%, and 90.1% accuracy in this experiment. 92.6% accuracy is achieved by our classification mechanism, which has a 2.5% higher value than in [31]. Above all, multiple descriptors fusion mechanism has good performance in indoor scene classification.
Conclusion
Aiming at the actual demands for indoor positioning applications, a multiple descriptors fusion model is established and an image classification strategy is proposed to improve the quality and efficiency of descriptors so as to achieve a better indoor scene classification effect. Firstly, the initial descriptor set is formed based on the established SPM. en, the greedy descriptor filtering algorithm is adopted to select the descriptors with high weight in each descriptor size interval and a valuable descriptor set is obtained. Finally, the multiple descriptors combination algorithm is proposed to obtain high-quality and highly efficient multiple descriptors by combining homologous and nonhomologous images at DL and CL, respectively. e generation, filtering, and combination of multiple descriptors proposed in this study improve the performance of the classifier. e evaluation results reflect that the multiple descriptors fusion mechanism proposed in this study outperforms the well-known PCA dimensionality reduction technology, especially for the condition with medium or large descriptor size.
is strategy not only achieves better results than other feature fusion algorithms, but also solved the limitations of existing scene classification algorithms applied to interior scenes.
Future research will focus on the improvement of the image feature extraction algorithm and the efficiency of constructing visual words by clustering features in the visual BoW model by other clustering algorithms. More attention will be paid to enhance the effectiveness of descriptors when describing image information. At the same time, the improvement of the quality of the depth image will be taken into account so as to make more efficient use of depth data in the process of descriptor filtering and descriptor combination. Alternatively, a more complete data set can be adopted.
Data Availability
e data results used to support the findings of this study are presented in this paper. | 6,591.2 | 2020-03-16T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Numerical modelling of the KOBO extrusion process using the Bodner–Partom material model
Numerical simulations of the extrusion process assisted by die cyclic oscillations (KOBO extrusion) is presented in this paper. This is highly non-linear coupled thermo-mechanical problem. The elastic-viscoplastic Bodner–Partom-Partom material model, assuming plastic and viscoplastic effects in a wide range of strain rates and temperatures, has been applied. In order to perform simulations, the user material procedure for B–P material has been written and implemented in the commercial FEM software. The coupled Eulerian–Lagrangian method has been used in numerical computations. In CEL method, explicit integration of the constitutive equations is required and remeshing is not necessary even for large displacements and large strains analyses. The results of numerical simulations show the heterogeneous distribution of stress and strain inside container and the non-uniform distribution of strain in the extruded material. The increase of material temperature has been noted. The results obtained (stress, temperature, location of plastic zones) qualitatively confirm the results of experimental investigations. The application of the user material procedure allows accessing all material state variables (current yield stress, hardening parameters, etc.), and therefore it gives detailed information about phenomena occurring in extruded material inside recipient. This information is useful for a proper selection of parameters of the KOBO extrusion process e.g. synchronization of the punch displacement with the die oscillations frequency to avoid the saturation of material isotropic hardening, which blocks the progress of extrusion.
The development of material forming processes over years and their application in many industrial sectors caused the need of proper description of the material behaviour at very high strain rates, up to even 5000-11,000 s −1 [1,2]. The determination of material properties at high strain rates is complicated and restricted by the ambiguity of the methods used, as well as by the complexity of the strain phenomenon. The mechanical properties of materials at strain rates above 10 s −1 might be determined with the use, e.g. of the split Hopkinson pressure bar test, a Taylor impact, a shock loading by plate impact and a high-speed photography, whose usage is limited by the equipment availability [3]. In numerical simulations, the description of the material behaviour subjected to loading at high strain rates requires the selection of a proper material model. The classical plasticity theory, in which the constitutive equations are time-independent or time-dependent, might be applied in a solution of many engineering problems. In forming processes involving large deformations and large strains, the unified plasticity theory has indicated sustainability for use. The unified plasticity theory, which is related to both elasticity and plasticity theories, is used to model the behaviour of material under loading for a wide range of temperature and strain rates including their dependence on time [4]. The unified theory does not include the yield condition typical for the classical plasticity one. The lack of yield condition eliminates the need of judgement of loading/unloading conditions [5]. The longterm material response can be considered for different loading conditions (creep and the relaxation) [6]. In the unified plasticity theory, both elastic and inelastic responses occur simultaneously. As in classical plasticity of metals, the elastic and inelastic strains are additive for all stages of loading and unloading [6,7].
The classical plasticity in macroscopic scale applies differential constitutive equations, e.g. the plastic flow rule to describe the material model. The microscopic phenomena responsible for the plastic deformation, i.e. the dislocation slip and its formation, as well as the grow of twins-are not considered here. Therefore, the classical plasticity theory can model the changes in a material microstructure on the macroscopic level only. In contrast, the micromechanisms of plastic deformation are considered in a unified plasticity theory. The dislocation slip, which is mainly responsible for the plasticity in solids, can be analyzed in a macroscopic scale. The continuum body divided into macroscopic points (grains) can be described by the unified plasticity which examines the response of all macroscopic grains in a solid as their irreversible macroscopic movement with a certain direction caused by the dislocation slip [8].
The viscoplastic constitutive model proposed by Miller assumes both isotropic and kinematic hardening by the addition of one backstress for kinematic hardening and a drag stress for isotropic one [21]. It does not include yield stress and the viscosity function is a combination of a hyperbolic sine and a power law [31]. The Robinson model proposed first by Robinson and developed by Arnold and Saleeb [32] and Saleeb [33] introduces a backstress for a kinematic hardening, a drag stress for an isotropic one, a yield stress and a power function for a viscoplastic flow. Additionally, the model assumes a static recovery term without introducing a dynamic one as in other viscoplastic models [31]. The main disadvantage of this model is so-called "indifferent character" of the kinematic hardening resulting in vanish of previous backstress in further tension or compression tests. It causes that the subsequent tension results in exactly the same response of a material as in the initial tension cycle [34].
The Walker unified viscoplastic model does not contain a yield stress and includes both isotropic and kinematic hardening which are described as a drag stress and the backstress evolution, respectively. In the backstress equation, the special asymmetry is contained and the initial non-recoverable asymmetry of a viscoplastic behaviour of a material is included. The static recovery effect takes place only for the nonlinear backstress. In the Walker's equation, temperature rate terms for a backstress are introduced [31,35].
In the model proposed by Krempl [25], the backstress evolution is formulated in terms of the total strain rate instead of the viscoplastic strain one. The dynamic recovery term included in this equation is proportional to the norm of a viscoplastic strain rate. The Krempl model also includes the hardening rate effects for the equilibrium stress.
Johnson-Cook [17][18][19][20] Onera exponential [26] Perzyna [28][29][30] The other unified viscoplastic constitutive equation described by Delobelle [23] is formulated as a hyperbolic sine and the backstress evolution is given by a secondary and tertiary backstresses. The drag stress defining the isotropic hardening is a function of a temperature and the accumulated plastic strain. The static recovery for the Delobelle model appears only in kinematic hardening [31,36].
The Perzyna model [37] relates the viscoplastic strain rate to the specific function depending on the current stress and state variables which define the stress-strain history. The model includes only isotropic hardening without taking into consideration the kinematic one. It assumes a yield stress but it is relatively insensitive to the strain rate and a temperature.
In each of the mentioned unified viscoplastic models, there is no function describing the relationship between the stress or a viscous stress and the norm of the viscoplastic strain rate. The viscosity functions are often only modifications of the power law in Norton's equation for creep [31] where the n exponent varies with the stress or the strain rate [38].
Many of unified elastic-viscoplastic constitutive equations are limited to modelling of small strain problems. Their usefulness in engineering applications is restricted by difficulties with the identification of multiple material parameters. Among material models of the unified plasticity theory, the Bodner-Partom one become very popular in the eighties and nineties of the twentieth century.
The Bodner-Partom (B-P) unified plasticity model was formulated by S.R. Bodner and Y. Partom in 1975 in order to examine the nonlinear elastic-viscoplastic response of titanium alloy subjected to loading, assuming the strain hardening [39]. It is an elastic-viscoplastic model described by physical and phenomenological factors based on the continuum mechanics [40]. The B-P model takes into account the micromechanical effects, e.g. the dislocation dynamics in isothermal loading conditions, kinematic and isotropic hardening, the material damage, the relaxation and the creep [41,42]. It was initially used to describe the behaviour of metal alloys under loading at elevated temperatures and for a wide range of strain rates [43,44]. The basic equations were developed in order to consider implementations for non-metallic materials [45][46][47], e.g.; polymers [48][49][50], tissues [51], as well as architectural and technical fabrics [52,53]. The model is also used for sophisticated problems associated with the creep and the crack development of composites with a metal matrix [54]. In [55], the B-P model was applied to describe viscoelastoplastic properties and the deformation mechanism of a cement-emulsified asphalt mixture. Numerical simulations of the damage evolution for plastic-bonded explosives subjected to complex stress states are shown in [56]. The determination of viscoplastic parameters of a rubber-toughened plastics using the B-P model is presented in [57]. More detailed information about the B-P model and its application in numerical calculations for different materials is included in [58][59][60][61][62][63].
Due to the assumption of a wide range of strain rates and temperatures, the B-P model might be applied in numerical simulations of the material forming processes. In [64], it was used for modelling the conventional extrusion process. In this paper, for the first time the B-P model is applied in numerical simulations of the KOBO extrusion process. The KOBO method is an unconventional elastic-plastic deformation process classified to Severe Plastic Deformation (SPD) methods changing a plastic deformation by the introducing the die cyclic oscillations with a given frequency and a given angle (approximately 5°-7°) [65]. The die cyclic oscillations cause the change in material structure and lead to an increase in the concentration of lattice defects [66].
The reduction of the extrusion force and the plastic work, as well as the elimination of process annealing in comparison to the conventional extrusion are the main advantages of the KOBO method. The process allows the cold forming of heavily deformed materials and enables the stable processing of their structure to the even nanostructured size. The products with a complex geometry might be produced without significant tools wear [66,67]. More detailed information about the KOBO extrusion process is contained in [66][67][68][69][70][71].
Numerical simulations of the KOBO extrusion can help to optimize the whole process in terms of the reduction of the extrusion force and the total operational costs. Reliable simulations can eliminate possible technical problems associated with the material continuous extrusion. In some laboratory tests material was not extruded at all, or it was extruded only a little, and then it was blocked in the die. Numerical simulations can predict such unwanted behaviour usually caused by a large isotropic hardening (up to saturation) and can help to set-up extrusion process parameters to avoid such technical difficulties.
The modelling of the conventional extrusion process is shown in [72][73][74]. In [75], the numerical simulations of the standard extrusion process using the Eulerian-Lagrangian approach is presented. The results of the coupled Eulerian-Lagrangian (CEL) analysis were compared with the results of axisymmetric Lagrangian numerical analysis (the continuous remeshing was required) and with the experiment. The reasonable convergence between alternative numerical approaches and experimental data were obtained.
Although many papers dedicated to the conventional extrusion are published, there are very few numerical simulations available concerning the KOBO extrusion method. Most of them focus on the change of the material structure during the process. The evolution of the texture in the KOBO extrusion is contained in [76]. In [77] numerical analyses including the generation, interaction and annihilation of point defects in a KOBO method are shown. In [78], the numerical simulation of the KOBO extrusion for Chaboche-Lemaitre elastic-plastic model with isotropic and kinematic hardening is presented. The results show phenomena occurring in the extruded material, e.g. the ratcheting and the mean stress relaxation. The small amount of research dedicated to the modelling of the KOBO extrusion assuming the hardening of a material confirms the purposefulness of the works undertaken.
In simulations of KOBO extrusion process presented in this paper, the coupled thermo-mechanical CEL approach including the heat generation due to the plastic deformation has been used. Since B-P material model is not implemented in commercial FEM programs, the user material procedure VUMAT has to be written, compiled and linked to Simulia Abaqus progam used here in numerical computations.
The Bodner-Partom unified elastic-viscoplastic material model
The KOBO extrusion is a complex process leading to the occurrence of high stresses and strains in a material. Material model appropriate for cyclic loading considering kinematic and isotropic hardening should be used in numerical calculations. Material model also ought to take into account the change of a temperature and its influence on the elastic-viscoplastic plastic response. The Bodner-Partom (B-P) material model applied in simulations is based on three fundamental relationships [41]: (1) The plastic flow rule relates the inelastic strain rate ̇ (ie) ij with the deviatoric stress using the plastic multiplier (Eq. 9).
where s ij is deviatoric stress tensor (Eq. 10) where ij is Kronecker delta function.
(2) The kinetic equation relates the plastic multiplier with the stress invariants using internal state variables. (3) The evolution law defines the changes of the internal state variables ( Ż I and ̇i j ).
The B-P model allows taking into consideration simultaneously elastic and plastic effects, isotropic and kinematic hardening, viscoplasticity, creep, as well as the relaxation for a wide range of temperatures and strain rates [79]. The model is described by the following constitutive equations: (1) The superposition of elastic and inelastic strains (Eq. 11): where ij is the total strain, ij (e) is the elastic strain and ij (ie) is the inelastic (plastic) strain. (2) The elastic stress rate ̇ (e) ij obtained from the time derivative of the generalized Hooke's law (Eq. 12): where ̇ kl and ̇ (ie) kl are the total and inelastic strain rates, respectively and C ijkl is the elastic stiffness tensor according to the Hooke's generalized law.
(3) The inelastic strain rate (Eq. 13): where D 0 and n are the B-P material parameters, Z 0 is the initial value of the isotropic hardening variable, Z is the internal state variable and J 2 is the second invariant of the deviatoric stress. (4) The incompressibility condition for inelastic deformations (Eq. 14): The B-P material behaviour under loading is described using temperature dependent and independent parameters.
The internal state variable Z in Eq. 13 represents the material resistance to the inelastic flow at the current state. It is the superposition of isotropic Z I and directional (kinematic) Z D components in line with Eq. 16: The Z I depends on the load history and its evolution is defined as follows (Eq. 17): The part of Eq. 17 is a softening or the static recovery caused by the low strain rates at relatively low temperatures. If the static recovery does not occur, the Z 1 equals the Z saturated value.
The Z D parameter depends on the load history and the ij tensorial quantity value is related with the directional component of the hardening in the direction of current stress ij (Eq. 18).
where (13) The proper selection of the B-P material parameters enables to obtain an appropriate material response under external load. The detailed procedure for the determination of the B-P material parameters is described in [81,82]. Other research can be found in [83,84]. In last thirty years, the B-P model has been extensively examined and values of material constants for different materials are available in the literature, therefore. The data for AMG-6 alloy considered here for 20, 300 and 400 °C is taken from [63].
From the computational point of view, the KOBO extrusion is a coupled thermo-mechanical process in which the plastic deformation and the friction between tools and extruded material are the heat sources. The heat generation caused by the plastic deformation is calculated using the following equation (Eq. 20): whereq is the heat volume rate and ̇ (ie) is the inelastic strain rate, is the Cauchy stress, x is a backstress associated with the kinematic hardening and is a Taylor-Quinney coefficient. The Taylor-Quinney coefficient might be determined experimentally by processing thermal measurements. The constant averaged value is commonly applied in numerical simulations [85]. The Taylor-Quinney coefficient = 0.9 has been used in this work. The increase of a temperature in a material in a wake of a heat generated during the high strain rate plastic deformation might be calculated using the following formula (Eq. 21): where c p is a specific heat and is density.
It is very important that the user material subroutine written for large displacements and large strains should consider the rotation of the coordinate system. In Simulia ABAQUS, constitutive equations are defined in the corotational frame which enforces the use of objective stress rates namely: Jaumann or Green-Naghdi.
The Jaumann stress rate tensor ∇J is defined as (Eq. 22) [65]: where ̇ is stress rate tensor in a corotational frame, is Cauchy stress tensor and W is spin tensor which comprises both the deformation and the rotation.
In the Jaumann rate, the ̇ term is associated with the material deformation and the next two terms-W and W result from the rotation of the coordinate system. In ABAQUS program, the Jaumann stress rate is used for commercially implemented material models.
The Green-Naghdi rate ∇G of the Cauchy stress is (Eq. 23) [86]: where is the angular velocity matrix resulting from a rigid body rotation. Similarly to the Jaumann stress rate, the Green-Naghdi rate contains a term associated with the deformation ( ̇ ) and two terms which are related to the rotation ( − ). In ABAQUS program, the Green-Naghdi rate of the Cauchy stress is applied as a default for the user-defined materials. Thus, the results obtained for the user-defined material model and material model implemented commercially may differ because of various objective stress rates used. It is worth highlighting that it is possible to apply the Jaumann stress rate for the user-defined material model. The Green-Naghdi stress rate might be expressed by means of the Jaumann rate as follows (Eq. 24): After substitution of Eq. 24 into Eq. 23, one can get Eq. 25.
Enforcing the Jaumann stress rate for the userdefined material model might be helpful in testing the correctness of user material procedures in elastic-plastic benchmark tests involving large displacements and rotations. The explicit integration (the Euler forward method) algorithm for the Bodner-Partom model using Eq. 12-20 is made in the user material procedure. The subscript for time t has been omitted in this paper. The components of the strain increment are the input data. The procedure determines the increment of the inelastic strain and updates the stress at the end of the integration step. The algorithm of the explicit integration for the B-P model is as follows: (1) The determination of Z I state variable for time t + Δt (Eq. 26): where Ẇ p is a plastic work rate-Eq. 27: (2) The calculation of state variable for time t + Δt (Eq. 28): where the norms of tensorial values ‖ ‖ and ‖ ‖ are as follows (Eq. 29): (3) The determination of Z D and Z state variables for time t + Δt (Eq. 30): (4) The computation of inelastic (plastic) strain for time t + Δt (Eq. 31) wheres is a deviatoric stress tensor. (5) The determination of stress for time t + Δt (Eq. 32): (6) At the end, the state variables , ̇ (ie) and Z I are saved.
The coupled Eulerian-Lagrangian method
Solving problems of solid mechanics including large deformations is difficult and complicated with the use of displacement-based finite element method described by the Lagrangian formulation in which the movement of the continuum is specified as a function of the material coordinates and time [87,88]. In the Lagrangian formulation, nodes move together with the material in time (Fig. 1a) [89]. The mesh distortions and contact conditions can lead to the problems with a convergence. In these cases Eulerian methods become more efficient than the Lagrangian ones.
In the Euler method, which is applied usually in fluid mechanics, the movement of a continuum is a function of spatial coordinates and time [65]. The Eulerian reference mesh remains undistorted and is used to track the motion of a material in the Eulerian domain [18,19]. The material can move through the fixed Eulerian mesh and the elements distortion does not occur (Fig. 1b).
The CEL method keeps advantages of Eulerian and Lagrangian approaches and is very effective for a solution of large deformations problems. In [90,91], the coupled Eulerian-Lagrangian method has been applied for the modelling of orthogonal cutting. The application of the CEL method for the prediction of a residual stress in dissimilar friction stir welding of aluminum alloys is presented in [92]. The modelling of defects in a friction stir welding process using the CEL approach is described in [93]. In [94], the usefulness of the Coupled Eulerian-Lagrangian analysis in geotechnics is praised. The CEL method can be also applied in modelling of material forming processes. Numerical results of the extrusion process are contained in [95,96]. The modelling of the KOBO extrusion process using the CEL method and the Chaboche-Lemaitre material model is described in [78].
In the CEL approach, bodies which undergo large deformations (processed material) are meshed with Eulerian elements and the stiff bodies (tools) are meshed with Lagrangian ones. The Eulerian material is tracked as it flows through the mesh by computing its volume fraction (VF) in the CEL analysis. Each Eulerian element is designated a percentage, which represents the portion of that element filled with a material. If the Eulerian element is fully filled, its VF is 1. The VF is 0 for elements which are not filled with the Eulerian material.
The Lagrangian mass, momentum and energy conservation equations in the Eulerian spatial formulation arranged into conservative forms as below (Eq. [33][34][35] [79,97]: The Eulerian governing equations above are written in the common general conservative form as follows (Eq. 36) [79,97,98]: where is the flux function and S is the source term.
The solution of Eq. 36 can be divided into two stages (Eqs. 37 and 38) solved sequentially using the splitting operator. They represent Lagrangian and Eulerian steps, respectively. The Eq. 37, which represents the Lagrangian step, contains the source term and the Eulerian step described using Eq. 38 includes the convective one [97,98].
The graphical scheme of the split operator is shown in Fig. 2. The deformed (Lagrangian) mesh is moved to the Eulerian fixed mesh and the volume of material transported between adjacent elements is computed. The Lagrangian solution variables, such as the mass, energy, momentum, stress and others are then adjusted to account for the flow of the material between adjacent elements [78,79]. The CEL method captures advantages of both the Lagrangian and Eulerian methods. The mesh is not deformed, and therefore the remeshing is not needed. A unique feature of the CEL method is that a single volume can be filled simultaneously with many materials which allows simulations of the extrusion of composites and porous materials [78]. The CEL approach also ensures better interpretation of contact conditions than the Eulerian method. The classical FEM methods based on the Lagrangian approach often cause the contact problems entailed by the distortion of the mesh.
Numerical model
The 3D geometry model for the KOBO extrusion process is shown in Fig. 3. The tools are modelled as rigid bodies The 8-node linear Eulerian hexahedral elements considering thermomechanical coupling are used to mesh the Eulerian domain. The rectangular as well as cylindrical Eulerian domains with different mesh densities have been tested in simulations (Fig. 4). The Eulerian mesh covers the position of the material at the beginning of the process, and its location after the extrusion. Elastic and thermal material properties, as well as the FEM model data are contained in Table 2. The B-P model parameters for a Fig. 2 The graphical interpretation of the split operator Fig. 3 The model of the KOBO extrusion process with a cylindrical Eulerian domain AMG-6 alloy for different temperatures are also listed in Table 2.
Numerical simulations of the KOBO extrusion process have been done in commercial Simulia ABAQUS program. The explicit integration procedure is required by ABAQUS in the CEL analysis. This integration method is conditionally stable-the stable time increment is very small [80]. The analysis of the KOBO extrusion requires hundreds of thousands time increments, and for this reason the problem cannot be solved in real process time. The mass scaling technique and the smooth step loading have been applied in order to speed up simulations and minimize the computation time [86,100]. In order to decrease the computational costs, the time of real extrusion has been shortened in numerical simulations about one order. To avoid very large inertia forces caused by the sudden punch movement, the punch displacement has been smoothed from time t 0 to t 1 as described by Eq. 39 [101]: where is a position parameter, u 0 is the initial punch position and u 1 is the final punch drive. In numerical simulations, = t−t 0 t 1 −t 0 and u 0 = 0 . The use of a smooth step amplitude in ABAQUS enables to apply and suppress the loads gradually.
Benchmark tests and simulations of KOBO extrusion
The proper selection of the B-P material model parameters is necessary to perform reliable numerical calculations of the KOBO extrusion process. Material parameters might be determined on the basis of experimental research, e.g. in tension or compression tests made for various strain rates. The determination of B-P parameters is out-of-scope of this paper. Since the Bodner-Partom material model is quite often used in solutions of various engineering problems, its constitutive parameters are available in literature for many typical materials. For B-P parameters given in Table 2 and for temperatures of 20, 300 and 400 °C. The influence of the strain rate on the yield stress can be clearly seen in Fig. 6 for all considered temperatures. Many simulations of the KOBO extrusion process have been made for various Eulerian meshes (cylindrical, rectangular), various mesh densities, different contact conditions (slip or no slip on die), etc. As the exemplary results, von-Mises stress distribution in the extruded material is shown in Fig. 7. This distribution is heterogeneous and the highest stress occurs near the die hole and next to the walls of the container. The stress decreases in the center of the recipient. Figure 8 shows the equivalent plastic strain in the material subjected to the extrusion. The highest plastic strain values occur in corners of the recipient. One may notice that the material in a middle part of the container is in the elastic state (hydrostatic compression). Unfortunately, due to this fact, the B-P model is not computationally efficient. The elastic and inelastic strains are calculated in the whole domain for each load increment which significantly extends the computational time. On the other hand, elastic-plastic material models involving the yield condition are more effective from the point of computational costs since the elastic response requires much less computations than the plastic one.
It is important that the CEL approach in ABAQUS does not take into account the heat flow from the material to the tools cooled in experiments. Thus, the In numerical simulations material parameters should be defined for the entire temperature range occurring in the analysis. The material softening with the increased temperature reduces the plastic work of the extrusion, and this way reduces the temperature raise. The characteristic shape of plastic zones and streams of the plastic flow obtained numerically were confirmed in experimental investigations (Fig. 9). According to [102], the characteristic shape of lines of plastic flow is associated with the dominant crystal orientation which follows with the stream of the material.
The advantage of using material user subroutine over the material models commercially implemented in the FEM program is the possibility to obtain more information about the material response. Unlike in the commercially implemented material models, in the user procedures all isotropic and directional (kinematic) material hardening parameters are available and the influence of these parameters on the KOBO extrusion might be examined in this way.
Considering the KOBO process as a cyclic loading process, the ratcheting phenomenon should be taken into account. The ratcheting is characterized by the directional progressive accumulation of plastic deformation in a material under the non-symmetrical stress-controlled cyclic loading with non-zero mean stress [103,104] without the load increase. Thus, the ratcheting is a very welcome phenomenon in the KOBO extrusion process. Unfortunately, the ratcheting used to stabilize after a certain number of loads, depending on the increase of the yield stress resulting from the isotropic hardening [105]. A large and dominant isotropic hardening can hamper a further deformation due to the cyclic load making the material response being almost only elastic. Detailed knowledge about the material hardening characteristic is very useful in optimization of the KOBO process in selection of appropriate frequency and the amplitude of the die oscillations in order to avoid the raise the isotropic hardening.
Summary and conclusions
The three-dimensional thermo-mechanical coupled Eulerian-Lagrangian analysis was used in order to simulate the KOBO extrusion process of the AMG-6 alloy. The elastic-viscoplastic unified Bodner-Partom model including the large displacements was applied here. The model parameters were selected on the basis of the literature review.
In numerical calculations, the extruded material was modelled using the Eulerian mesh and the tools (die, recipient and punch) were defined as rigid bodies using the Lagrangian mesh. The explicit integration of constitutive equations was used here. For the Bodner-Partom material model the user procedure was written and later on it was linked to the commercial FEM software. The correctness of the user subroutine was checked on several elastic-plastic benchmark tests.
Different shapes of Eulerian domains and variously tuned meshes were tested. The cylindrical Eulerian space with the mesh thickening to the core was chosen as one which provides the best results. On the basis of results obtained, the following conclusions can be drawn.
(1) The CEL method enables to model the KOBO extrusion process and obtain reliable results. Contrary to the Lagrangian approach, remeshing is not necessary. (2) The proper modelling of the KOBO extrusion applying the user procedure extends the knowledge about strain hardening and temperature sof-tening in extruded material, and enables to set-up experimental conditions of the process. (3) The coupled thermomechanical problem requires definition of the material data for the whole range of temperatures which occur in the process. (4) The knowledge concerning the material characteristic, including the material hardening can optimize the process and select the proper die frequency and amplitude in order to avoid technical problems associated with the material extrusion and the die damage.
Funding The authors did not receive support from any organization for the submitted work.
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The Fig. 9 The comparison of plastic zones in a numerical simulation (a) and its experimental verification (b [102], c) images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 7,344.4 | 2022-08-25T00:00:00.000 | [
"Materials Science"
] |
Majorana bound states in hybrid 2D Josephson junctions with ferromagnetic insulators
We consider a Josephson junction consisting of superconductor/ferromagnetic insulator (S/FI) bilayers as electrodes which proximizes a nearby 2D electron gas. By starting from a generic Josephson hybrid planar setup we present an exhaustive analysis of the the interplay between the superconducting and magnetic proximity effects and the conditions under which the structure undergoes transitions to a non-trivial topological phase. We address the 2D bound state problem using a general transfer matrix approach that reduces the problem to an effective 1D Hamiltonian. This allows for straightforward study of topological properties in different symmetry classes. As an example we consider a narrow channel coupled with multiple ferromagnetic superconducting fingers, and discuss how the Majorana bound states can be spatially controlled by tuning the superconducting phases. Following our approach we also show the energy spectrum, the free energy and finally the multiterminal Josephson current of the setup.
Introduction. Majorana bound states (MBS) [1] have been proposed as a building block for solid-state topological quantum computation [2]. Different setups have been discussed theoretically [3][4][5][6][7][8][9][10] -many of them relying on the combination of materials with strong spin-orbit coupling, superconductors and external magnetic field. Following these suggestions, experimental research has been focused on hybrid structures between semiconducting nanowires [11] and more recently two-dimensional electron systems [12][13][14][15][16][17][18][19] in proximity to superconducting leads. Setups based on 2DEGs are of special interest, as they benefit from the precise control of the 2DEG quantum well technology developed in the last 40 years. Ideally, the external magnetic field should act as a pure homogeneous Zeeman field acting on the conduction electrodes of the semiconductor. In practice, in the presence of superconductors, the situation is more complex due to orbital effects and spatial inhomogeneity due to magnetic focusing [20][21][22].
The aim of the present work is twofold: to propose a setup for hosting and manipulating MBS at zero external magnetic field, and to discuss a general analytical approach for describing the transport and topological properties of 1D boundary systems with generic symmetries. The proposed setup is sketched in Fig. 1 which consists of a 2DEG [23] coupled to ferromagnetic insulator/superconductor (FI/S) electrodes. Related 2D systems have been recently explored in Refs. [6][7][8]24]. The magnetic proximity effect from the FI induces an effective exchange field h in the superconductors breaking timereversal symmetry and resulting in a spin splitting of the density of states. Experimentally, manufacturing S/FI films is well demonstrated. [25,26]. We approach the theoretical problem by developing an exact method that provides a systematic dimensional reduction procedure Schematic of a narrow semiconductor channel (2DEG/nanowire) contacted to ferromagnetic insulatorsuperconductor bilayers. Phase differences ϕj of the superconducting j-th fingers are imposed across the semiconductor channel. Multiple fingers can be used to precisely control the position of the topological bound states in the junction.
based on a continuum transfer matrix approach [27][28][29][30], which in certain aspects is closely related to scattering theory [31]. The effective 1D boundary Hamiltonian obtained provides access to the energy spectrum, the free energy [32][33][34][35], and the multiterminal Josephson currents in the setup. An analytically tractable 1D topological invariant also emerges in a natural manner. The approach also applies to 2DEG strongly coupled to superconductors via transparent interfaces, required for large topological energy gaps. Here, we apply this method for our class D problem [36] and discuss phase-controlled manipulation [3,37] of the MBS with inhomogeneous multiple S/FI "fingers".
Topological order. Hamiltonian (1) has the chargeconjugation symmetry H = −C † H * C, C = −σ y τ y . As H y inherits the symmetry of H, its topological properties can be characterized by the low-energy part of the 1D bulk invariant of class D: [1] where pf is the Pfaffian of a 4 × 4 matrix. Note that χ(y) can change sign only if bound states cross = 0, or when H y has a singularity there. [42] As a consequence, zeroenergy bound states are expected between regions with different χ. This argument can be generalized to other symmetry classes and formally also to 0D invariants in systems of finite size along x. Infinite superconducting leads. For this specific case, we determine the A ± factors from the W matrices in the leads, W ± . We define the projectorsP +(−) = to growing(decreasing) modes with momenta +(−) Im k y < 0 in the upper(lower) lead, where sgn is the matrix sign function. The mode matching conditions for bound states can then be written as P ± u(±L/2) = 0. At energies where the spectrum of the leads is gapped, half of the modes are growing and half are decreasing. SinceP ± are then half-rank matrices, one can generally find R such thatP ± = RP ± where P ± have the structure of Eq. (3). For µ lead → ∞ and no spinorbit interaction in the leads, direct calculation gives [41] 2m lead µ lead a mismatch parameter. Narrow-channel expansion. Consider now a narrow channel with a Hamiltonian constant on |y| < L/2. Then Ψ(y, y ) = e (y−y )W . Expanding to leading orders in L When α y is parallel to the exchange field of the leads, it commutes with A ± and does not contribute. Imperfections in the S/N interfaces may also be included in this model and will affect the precise form of A ± . The above Hamiltonian is obtained via operator manipulations, and we did not need to e.g. select a variational wave function basis.
Within the quasiclassical limit in the leads and by setting for simplicity α y h , Eq. (8) can be written as yielding an effective 1D Hamiltonian with an energy dependent order parameter∆ * , an exchange field h * , a potential shift, and an energy renormalization [see Eqs. (S6) in [41] for explicit expressions]. We have neglected the k x dependence of A ± , by assuming k x k F,lead . For leads with identical |∆|, |h| and phase difference ϕ, we find at → 0, Z −1 * . At low energies, Eq. (9) is similar to widely studied quantum wire models [4], and characterized by the same 1D topological invariant in class D The superconducting self-energy in Eq. (9) in the limit considered here (L → 0, transparent NS interfaces) turns out to be similar in form to weak-coupling tunneling models, [4,5,44,45] derived projecting onto lowest quantum well confined modes in the N-region. The explicit expressions for the prefactors, the shift in the potential, and α y spin-orbit obtained here are not found in typical tunneling approaches. The magnetic proximity effect from ferromagnetic superconductors that affects the energy dependence of both the superconducting and exchange self-energies, on the other hand in principle can be captured also by a tunneling calculation.
Phase diagram and spectrum. Figure 2(a) shows χ and the 1D invariant M = sgn[µ 2 , for a class D narrow Josephson junction, translationally invariant along x, under a phase difference ϕ. This is in agreement with the phase diagram presented in Ref. 8, for the specific value of µ * . [46] The chemical potential dependence is shown in Fig. 2(b), together with the size of the χ = −1 region around ϕ = π. The behavior as a function of µ with constant k S exhibits finite-size k F L oscillations from scattering at the NS interface, which are not present [7,8] in the result (not shown) for the matched case µ = µ lead , m = m lead where µ * = const(µ). The correspondence between χ and the mode spectrum at k x = 0 is shown in Fig. 2(c). The above narrow-channel approximation breaks down when |k F |L 1, and is applicable for the first lobe. This limitation is also visible in Fig. 2(c), where the narrowchannel approximation predicts a zero-energy crossing between 0 <k F L < π, whereas in the exact solution the system is in the nontrivial state for the whole interval. Nevertheless, states with χ = −1 can be achieved also at higher doping and mismatch, but in a narrower parameter region.
It is important to note that S/FI bilayers have restrictions on the magnitude of the exchange field. The S/FI bilayer energy spectrum becomes gapless at h > ∆. Moreover, thin S/FI bilayers at low temperatures support a thermodynamically stable superconducting state only below the limit h < ∆(T = 0)/ √ 2 [48][49][50] above which a first-order transition to normal state occurs at T = 0. As the induced effective order parameter is ∆ * ∝ | cos ϕ 2 |, a change of the 1D invariant can however be achieved for any h at phase differences close enough to ϕ = π.
The propagating mode spectrum of the 1D narrowchannel model is shown in Fig. 3(a) for different values of the phase difference. The behavior is typical to quantum wires [1,4]: the magnetic and superconducting proximity effects open energy gaps at k x = 0 and k x = k F . The energy gap at k x = 0 closes and reopens at the topological transition. The gap at k F closes at ϕ = π where ∆ * vanishes. Finite size. The bound state energies of a system with finite size in the x-direction are given by the zeros of the determinant of the effective 1D model, [32][33][34] Here Ψ( ) is the fundamental matrix connecting the ends of the 1D channel, for the 1D differential operator H eff , defined analogously to Eq. (2) above. Roots w( j ) = 0 of the above 8 × 8 determinant can be found numerically. The bound state wave functions associated with each can be found from the corresponding singular vectors. Figure 3(b) shows the bound state energy spectrum as a function of the phase difference ϕ. When the phase difference crosses the bulk topological transition point, one of the ABS crosses over to form a MBS pinned at ≈ 0 and localized at the ends of the 1D channel [see Fig. 3(c)].
Supercurrent in a multiple-finger setup. Consider now the geometry of Fig. 1 with multiple superconducting fingers with widths j and different order parameter phases ϕ j on one side. For a spatially piecewise constant Hamiltonian, we then have Ψ( ) = j e Wj ( ,ϕj ) j . Supercurrent exiting the jth superconducting finger is given by the corresponding derivative of the grand potential, which has a closed-form expression: The sum runs over the Matsubara frequencies ω n = 2πT (n + 1 2 ). Here, we used the fact that starting from a functional determinant approach [32,33] , with overall proportionality constants independent [41] of ϕ j . Resulting current-phase relations are shown in Fig. 4(b), for (a) Bound-state spectrum in a finite-size "3finger" setup with finger widths j = 100ξ, 4ξ, 100ξ and phases ϕj = 0, ϕ2, 0.8π, (b) Current density-phase relations, at T = 10 −3 ∆. Shown is the result for uniform junction [Eq. (12)], the same within the narrow-channel approximation [Eq. (13)], and IS,2/ 2 in the 3-finger system. Parameters as in Fig. 3. (c) Sweeping the phase ϕ2 = 0.6π → 0.704π → 0.8π, the ≈ 0 MBS localized at x = 2 (red) moves toward x = 0 (orange). Other parameters defined in caption of Fig. 2. Inset: schematic of the structure considered.
infinite-length channels (ln Det → kx ln det) and for a finite-length < ∞ system. The results from (12) and (13) differ somewhat at L = 0.25ξ, but approach each other as the channel width L → 0. The finite-size system result is close to the infinite-size result.
Phase control of the MBS. Consider now a superconducting finger of width 2 between a trivial (ϕ 1 = 0, x < 0) and a nontrivial (x > 2 , ϕ 3 = 0.8π) segment. The bound state spectrum as a function of ϕ 2 is shown in Fig. 4(a). As ϕ 2 crosses the bulk transition point ϕ c ≈ 0.7π of the segment, the MBS initially localized at x = 2 re-localizes to x = 0 (see Fig. 4(c)). We have assumed 2 ∼ ξ so that the energy gap remains large also when sweeping the phase. The result shows that in a multi-finger setup, the MBS location can be controlled, envisioning 2D channel networks with FI/S electrodes as a platform for a phase-controlled braiding of the MBS. [3,6] To drive a segment into the non-trivial state (ϕ → π) one can use a superconducting loop, connected to at least to some of the fingers. By controlling the supercurrent, it is likely possible to fine tune the MBS position.
InAs-2DEG [23] with Al/EuS leads [25,26] provide a topological gap of E g /k B 60 mK. The corresponding coherence length is ξ ∼ 80 nm, making the fabrication of the devices compatible with the modern technologies. Phase biasing can be implemented with superconducting loops and, in combination with current injection, can be used to fine tune the MBS position.
Conclusions. In summary, we have used a transfer matrix approach to obtain an effective 1D boundary Hamiltonian. We have applied it to compute the spectrum of a S/FI-2DEG junction and to show how the topological properties of the structure can be tuned by the superconducting phase differences and the electrostatic gating. This enables the spatial control of the MBS, and 2D topological networks for braiding operations without requiring strong external magnetic fields. Our approach is quite general, not limited to any specific model or symmetry class, and can be extended to other 1D channel problems with continuum Hamiltonians that are polynomials in k y . Moreover, the model can be applied to investigate the properties of Josephson junctions in two and three-dimensional systems, for example surfaces of topological insulators [3,15,51], or graphene [39]
Bound state equation. Let us point out the status of the bound state equation. The result of Refs. [33] can be written as (note that tr W = 0 and det Ψ = 1), where L ± are the lengths of the upper(+) and lower(−) superconducting leads and L that of the normal channel in between. The multiplicative normalization of Det depends on the highest-order derivative term in H.
Consider the diagonalization W ± = Φ ± diag(Λ < ± , Λ > ± )Φ −1 ± with growing (Re Λ > ± > 0) and decaying (Re Λ < ± < 0) modes, and write Φ = φ < φ > w < φ < w > φ > . For L ± → ∞ and neglecting modes in the leads that decay when moving away from the N-region, are the projection matrices in Eq. (3) of the main text, and R = diag(R − , R + ), Green function. The Green function of the original Hamiltonian at |y|, |y | < L/2 can be expressed as The constants C ± such that C + + C Free energy. The free energy can be expressed up to a constant as F = − 1 2 T ωn ln Det G −1 (iω n ). Since det R and the normalization of Det are independent of the order parameter phases, F(ϕ) = − 1 2 T ωn ln Det M(iω n , ϕ) + F 0 . As a consequence, supercurrents I = 2e ∂ ϕ F flowing in the structure are determined only by ln Det M, and by extension, the phase-dependent part of its narrow-channel 1D approximation, F(ϕ) − 1 2 T ωn ln Det[H eff − ] + F 0 .
Supercurrent. Note also that if restricting Eq. (13) of the main text to low energies, and diagonalizing H eff yields a well-known result for the supercurrent I S,j = − e 2 m tanh m 2T ∂ ϕj m where m are the bound state energies.
Quasiclassical approximation. In the limit µ lead → ∞ and α x/y = 0 we can compute the projectorsP ± . Using an integral representation of the matrix sign function, we havẽ where b = τ3 2m , and C +(−) are counter-clockwise semicircles enclosing the upper(lower) complex half-plane. Poles indicating propagating modes are displaced from the real axis by Im = 0. Moreover, G(k x , k y ) = [ −H lead (k x , k y )] −1 is the Green function. Changing the integration variable to ξ = k 2 2m lead − µ lead and taking the limit µ lead → ∞ we find | 3,868.2 | 2017-12-05T00:00:00.000 | [
"Physics"
] |
Quantum-Path-Resolved Attosecond High-Harmonic Spectroscopy
Strong-field ionization of molecules releases electrons which can be accelerated and driven back to recombine with their parent ion, emitting high-order harmonics. This ionization also initiates attosecond electronic and vibrational dynamics in the ion, evolving during the electron travel in the continuum. Revealing this subcycle dynamics from the emitted radiation usually requires advanced theoretical modeling. We show that this can be avoided by resolving the emission from two families of electronic quantum paths in the generation process. The corresponding electrons have the same kinetic energy, and thus the same structural sensitivity, but differ by the travel time between ionization and recombination — the pump-probe delay in this attosecond self-probing scheme. We measure the harmonic amplitude and phase in aligned CO 2 and N 2 molecules and observe a strong influence of laser-induced dynamics on two characteristic spectroscopic features: a shape resonance and multichannel interference. This quantum-path-resolved spectroscopy thus opens wide prospects for the investigation of ultrafast ionic dynamics, such as charge migration. DOI:
Tracking charge migration in molecules is one of the main objectives of attosecond spectroscopy [1,2].To reach this goal, two directions are being pursued.In the ex situ scheme, attosecond extreme ultraviolet (XUV) pulses are produced by high-order harmonic generation in a rare gas [3], and used to trigger or probe ultrafast dynamics in the medium of interest [4,5].The in situ approach, also known as high-order harmonic spectroscopy (HHS), consists in using the high-order harmonic generation (HHG) process to probe the emitting medium, through a three step mechanism [6][7][8][9].First, the strong laser field lowers the potential barrier of the target, enabling electrons to tunnel out from the highest occupied orbitals.Second, the freed electronic wave packet is accelerated by the laser field and driven back to the parent ion.Last, the electrons radiatively recombine to the ground state, emitting a burst of XUV radiation.The recombination can be seen as an interference process between the bound and accelerated parts of the wave function [10,11].It is thus sensitive to the bound wave function's structure, encoding information into the spectrum [11,12], the phase [13,14], and the polarization state [15,16] of the emitted XUV radiation.
In this simple picture of HHS, the molecular ion is frozen between ionization and recombination.However, strong field ionization generally triggers ultrafast dynamics, leading to an attosecond evolution of the ionic core.The three-step mechanism then becomes a pump-probe scheme, in which the dynamics is initiated by tunnel ionization and probed by the recolliding electronic wave packet [17].This configuration was first implemented to reveal the vibrational dynamics occurring in molecular ions [18,19], before being used to track the free evolution of the hole resulting from the ionization of multiple orbitals [20,21], and to reveal the existence of laser-induced hole dynamics in N 2 [22], iodoacetylene [23] or chiral molecules [24].Several theoretical works have recently investigated the importance of subcycle multielectron dynamics in HHS, including correlation during tunneling [25] and just before recombination [26] as well as dynamical exchange [27].
In this scheme, the pump-probe delay corresponds to the time spent by the electron in the continuum.The first technique developed to map the dynamical processes relies on the natural spread of the electron trajectories in the continuum [19]-the electron travel time varies quasilinearly with the harmonic order for the short quantum paths detected in the experiments [28].In this approach, the de Broglie wavelength of the recolliding electron varies together with the pump-probe delay, such that different spectral components of the recombination dipole moment are probed at different times.It can thus be difficult to differentiate between structural [29] and dynamical effects [20], both being sensitive to the de Broglie wavelength but only the latter to the recombination timing.Two strategies have been used to circumvent this issue: varying the laser intensity [20,30] or the laser wavelength [23].Both operations enable changing the electron travel time in the continuum, and thus the pump-probe delay, at a fixed de Broglie wavelength.
While they are suitable to reveal field-free dynamics [20,21], these approaches are problematic when the dynamics is driven by the strong laser field: changing the laser wavelength or intensity modifies the ionic dynamics.In practice, the evolution of the system is thus retrieved by comparing the experimental signal to theoretical calculations [22,23,31].The complete characterization of the harmonic radiation-amplitude and phase, resolved as a function of energy and molecular alignment angle [32], as well as along the different polarization components of the emission [33][34][35]-provides a wealth of comparison points between experiment and theory, but does not permit the direct observation of the dynamics.This would require changing the pump-probe delay in the experiment, while keeping fixed the laser intensity and wavelength, as well as the de Broglie wavelength of the electron.In this Letter, we show that this can be achieved by making use of a wellknown property of HHG: each harmonic is emitted by two electron quantum paths (QPs) labeled short and long, that have spent very different travel times in the continuum [9,36] and thus probe the target with the same de Broglie wavelength, at the same driving laser intensity, but at different delays [37].The contributions from these two QPs can be distinguished in spatially resolved harmonic spectra [38,39].We perform HHG in strongly aligned N 2 and CO 2 molecules using a 800 nm field, and resolve the amplitude and phase of the short and long QPs as a function of molecular alignment.HHG from these molecules are known to present distinctive features-a shape resonance in N 2 [40,41] and destructive interference between multiple channels in CO 2 [20].We find that both effects are reflected very differently in short and long QPs, unambiguously and directly demonstrating the strong influence of the underlying attosecond dynamics on HHG.
The experiment was carried out using two laser beams generated by a phase mask [42], which, at the focus of a lens (f ¼ 50 cm focal length), produces two phase-locked high-harmonic sources in a pulsed Even-Lavie supersonic gas jet [43].A pump laser pulse prealigns the molecules in one of the two sources, inducing a change of the harmonic intensity and phase reflected in the far-field interference pattern.The spatially resolved spectrum presented in Fig. 1(a) shows harmonics from H13 to H29 produced in N 2 .Each of them exhibits a well-collimated, spectrally narrow component, surrounded by a more divergent and spectrally broader ring.The spatial profile of both components is modulated by a well-contrasted interference pattern.
The two spatiospectral components of each harmonic originate from short and long electron QPs in the HHG process.Their ionization and recombination times, calculated within the strong-field approximation (SFA) [9,44], are shown in Fig. 1(b).For H13, the travel time of the electron in the continuum is 900 as for short QPs and 2450 as for long QPs.The short and long QPs can be separated in the spectrospatial domain, as observed in Fig. 1(a) [38,45,46].The short QPs are spatially and spectrally narrow, while the long QPs show broader divergence and spectral widths.
To extract the amplitude and phase evolution of each component of the HHG signal, we filter the contribution of short and long QPs by using the masks depicted in Fig. 1(a), and perform a Fourier analysis of the spectrally integrated spatial fringe pattern.Figures 2(a)-2(d) show the resulting evolution of the harmonic intensity and phase as a function of molecular alignment angle, for the short and long QPs.The intensities are normalized to their value at 90°and the phases are set to zero at this alignment angle.The data are symmetrized relative to the parallel alignment (0°).Note that measuring the complete phase evolution as a function of harmonic order and alignment angle would require the use of photoelectron spectroscopy [32], which prevents the resolution of short and long trajectories.Alternatively, more complex interferometric schemes with an atomic reference could enable resolving the complete phase evolution of the harmonic emission [47].
For short QPs, the modulation of the harmonic emission with molecular alignment shows a very high contrast around H19-23.For instance, the signal from H21 is 14 times stronger when the molecules are parallel to the driving laser polarization than when they are perpendicular.As the harmonic order further increases, the modulation contrast decreases, and the harmonic emission shows local minima around 60°and a local maximum around 0°, except for H27 which presents a local minimum at 0°.The phase measurement reveals an interesting evolution as well.The curvature of the phase variation around 0°reverses when the harmonic order increases, from positive for H13-17 to negative above H19.A phase jump around 60°also gradually appears, maximizing to around 1.5 rad in the cutoff (H23-27).
Turning to the long QPs, we observe a dramatically different behavior.All harmonics show a very similar intensity modulation contrast, around 3. The highest harmonics present a local minimum at 60°.The phase of H13 shows an evolution opposite to all other harmonics (as well as short QPs), with a large 1.5 rad excursion.This strong difference between short and long QPs for this harmonic is consistent with the recent measurements of [48], who observed a π shift in the relative phase of short and long QPs between 0°and 90°for the integrated harmonic emission between H11 and H15.For H15, we find that the phase evolution is remarkably similar to the one of the short QP, and for H17 the phase modulation is more contrasted in the long QP.Harmonics 19-23 only show weak variations.The contributions from short and long QPs merge above H25 due to the proximity of the harmonic cutoff (see Fig. 1).We thus only present H27 on the short QPs plot.
The harmonic emission from the short and long QPs show drastically different amplitude and phase evolutions as a function of molecular alignment angle.The strong intensity modulation of the short QPs around H19-23 (29.5-35.7 eV) is the signature of a shape resonance in the photorecombination cross section of the X channel (associated with the highest occupied molecular orbital HOMO) [41], as shown in the theoretical molecular-frame scattering-wave recombination dipole matrix element [Fig.2(e)] [49,50].The calculated phase of the photorecombination dipole moment in the X channel [Fig.2(f)] shows a remarkable qualitative agreement with the measured phase evolution of the short QPs, with a reversal of curvature at 0°around H19 and sudden phase jumps around 60°for high orders.Theoretical studies have shown that if the ion remains steady between ionization and recombination, shape resonances affect short and long QPs in a similar manner [32,51].Our results show the opposite and thus demonstrate the existence of a subcycle dynamics in the ion.The shape resonance appears when electrons are trapped by the potential barrier in the X channel before recombining.If the laser excites the ion between ionization and recombination, the electrons can recombine to deeper orbitals, e.g., to the HOMO-1 (associated with the A channel) [22].These cross-channels are not sensitive to the shape resonance, which is a single-electron effect with no interchannel couplings [52].Long QPs, probing the system at longer times than short QPs, reveal the higher proportion of excited ionic states characterized by intensity and phase variations that are very different from that dictated by the shape resonance in the X channel.
As a second illustration of the interest of QP-resolved HHS, we analyze the results of measurements in CO 2 at 0.8 × 10 14 W cm −2 , in which characteristic destructive interference between contributions from the HOMO (X channel) and HOMO-2 (B channel) is known to occur when molecules are aligned around 0° [20].Figures 3(a) and 3(b) show the harmonic intensity as a function of the alignment angle, for short and long QPs.For short QPs, the destructive interference appears as a local minimum of the intensity around 25°at H19, and shifts to higher angles as the harmonic order increases [Fig.3(a)].It is associated with a jump of the harmonic phase as a function of molecular alignment, reaching 2.3 rad around 30°for H23 [Fig.3(c)].The signal from long QPs is quite different, showing no such sign of interference minimum and phase jump.
If the ionic populations do not evolve between ionization and recombination, then the relative phase between interfering channels is where Δφ XB 0 is the initial relative phase between the ionic states populated by tunnel ionization, τ is the electron travel time in the continuum, Ip j is the ionization potential of channel j and Δφ XB rec the phase difference between the recombination dipole moments of the two channels, which PHYSICAL REVIEW LETTERS 130, 083201 (2023) 083201-3 is ∼0.5π around H21-27 [20].The dynamical interference minimum in HHG from CO 2 occurs when Δφ XB ¼ ð2n þ 1Þπ with n ∈ Z.Previous work on short QP established that Δφ XB 0 ¼ 0 [20].The condition for destructive interference is then fulfilled for electron travel times τ ≈ 1.2 fs.This is consistent with our measurement of a minimum at H19 and a phase jump on the short QPs.Destructive interference is also expected when τ ≈ 2.15 fs, which is the travel time of long QPs around H19-21.Thus, in the absence of additional ionic dynamics, the destructive interference between channels X and B is expected to occur at the same harmonic order for short and long QPs.This is clearly not the case in our measurements-the long QPs show no destructive interference minimum, and no large phase jump [Figs.3(b) and 3(d)].This demonstrates the influence of the dynamical evolution of the ion in the time window between 1 and 2 fs after ionization.This is consistent with the recent calculations of Shu et al., who demonstrated the importance of a laser-induced coupling between the B and C states of CO 2 molecules aligned parallel to a 800 nm laser field [53].This coupling enables electrons tunneling from the HOMO-2 to recombine to the HOMO-3, opening a new cross channel in the process.Interestingly, Shu et al. found that for short QPs, these dynamics hardly affect the interference minimum between X and B channels.This explains why they had remained unaccounted for until now.In contrast, in the case of long QPs the system has enough time to evolve between ionization and recombination.These QPs thus carry information relative to the HOMO-2/HOMO-3 cross channel.This stresses the importance of QP-resolved highharmonic spectroscopy to track such attosecond dynamics.
In conclusion, using two prototypical cases, we have demonstrated the ability of quantum-path-resolved highharmonic spectroscopy to reveal the attosecond dynamics between ionization and recombination.In CO 2 , the destructive interference and phase jump between X and B channels observed in the short QPs disappears at long electron excursion times, reflecting the coupling between B and C channels [53].In N 2 , the influence of the shape resonance in the X channel, which strongly affects the short QPs response, disappears at long excursion times, suggesting a recombination to a deeper, nonresonant channel.Both features illustrate the important role of laser-induced couplings in high-harmonic spectroscopy.These couplings, predicted by theory [22,53], are often neglected because they are difficult to disentangle from structural effects in conventional high-harmonic spectroscopy experiments, but clearly appear in QP-resolved measurements.Such investigations could also be extended to condensed matter, where short and long QPs have recently been identified [54], to unravel attosecond hole dynamics in semiconductors.
FIG. 1 .
FIG. 1. (a)Spatially resolved harmonic spectrum produced by two sources in N 2 at ∼1.5 × 10 14 W cm −2 .One XUV source is generated in unaligned N 2 while the other is produced from N 2 molecules aligned at 54°from the probe polarization.The gray and red lines mark out the short and long QP contributions.(b) Calculated ionization (solid) and recombination (dashed) times for short (gray, circles) and long (red) QPs within the strong field approximation.The schematic drawing illustrates the resolution of attosecond dynamics by different QPs.An electron tunnels out from an orbital, triggering ultrafast hole dynamics in the ion.The ion has not evolved much during the short travel time of the short QP and the electron recombines to the same orbital.For the long QP, the ionic state has changed and the electron recombines to a different orbital.
FIG. 2 .
FIG. 2. Intensity (a), (b) and phase (c), (d) of the short (a), (c) and long (b), (d) QP contributions to high-harmonic generation in aligned N 2 .(e) Calculated modulus square and (f) phase of the molecular-frame scattering-wave recombination dipole matrix element to the X channel. | 3,746.4 | 2023-02-21T00:00:00.000 | [
"Physics"
] |
A Comparative Study of Criticality Conditions for Anomalous Dimensions using Exact Results in an ${\cal N}=1$ Supersymmetric Gauge Theory
Two of the conditions that have been suggested to determine the lower boundary of the conformal window in asymptotically free gauge theories are the linear condition, $\gamma_{\bar\psi\psi,IR}=1$, and the quadratic condition, $\gamma_{\bar\psi\psi,IR}(2-\gamma_{\bar\psi\psi,IR})=1$, where $\gamma_{\bar\psi\psi,IR}$ is the anomalous dimension of the operator $\bar\psi\psi$ at an infrared fixed point in a theory. We compare these conditions as applied to an ${\cal N}=1$ supersymmetric gauge theory with gauge group $G$ and $N_f$ pairs of massless chiral superfields $\Phi$ and $\tilde \Phi$ transforming according to the respective representations ${\cal R}$ and $\bar {\cal R}$ of $G$. We use the fact that $\gamma_{\bar\psi\psi,IR}$ and the value $N_f = N_{f,cr}$ at the lower boundary of the conformal window are both known exactly for this theory. In contrast to the case with a non-supersymmetric gauge theory, here we find that in higher-order calculations, the linear condition provides a more accurate determination of $N_{f,cr}$ than the quadratic condition when both are calculated to the same finite order of truncation in a scheme-independent expansion.
I. INTRODUCTION
There has been considerable interest in asymptotically free gauge theories that have matter content such that they exhibit renormalization-group flows from the deep ultraviolet (UV) to infrared (IR) fixed points (IRFPs) [1,2].At the infrared fixed point, the beta function vanishes, so the theory is scale-invariant, and is inferred to be conformally invariant [3], whence the term "conformal window".With no loss of generality, one may restrict to massless matter fields, since if a matter field had a nonzero mass m 0 , one would integrate it out of the effective low-energy theory that is relevant for momentum scales below m 0 in the flow to the infrared limit.The properties of a theory at an infrared fixed point in this conformal window are of fundamental interest.Among these are the scaling dimensions D O of various (gauge-invariant) local operators, O, such as ψψ and Tr(F µν F µν ), where ψ and F µν denote fermion and gauge field-strength operators.Owing to the gauge interactions, the scaling dimension of an operator O differs from its free-field value, D O,f ree : D O = D O,f ree − γ O , where γ O is the anomalous dimension of O. Higher-loop calculations of anomalous dimensions at an IR fixed point in the conformal window have been performed in a number of works, including [4]- [16], using both conventional series expansions in powers of the gauge coupling at the IR fixed point and in powers of a scheme-independent expansion variable.Inputs for renormalization-group functions utilized in this work included those in [17]- [20].Extensive measurements of anomalous dimensions have been carried out using lattice simulations; some of these works are [21]- [32].
As one decreases the matter content, the value of the gauge coupling at the IRFP, α IR , increases, and eventually the theory changes qualitatively with the disappearance of this conformal IR fixed point.A commonly studied example is a non-Abelian gauge theory (in d = 4 spacetime dimensions at zero temperature) with gauge group G and N f copies ("flavors") of massless Dirac fermions transforming according to a representation R of G.One arranges that N f is smaller than an upper (u) bound, N f,u , depending on G and R, so that the theory is asymptotically free.As N f decreases below N f,u , the theory exhibits the aforementioned conformal IRFP, and the lower boundary of the conformal window occurs as N f decreases through a critical value denoted N f,cr [33].Generalizations of this with several fermions transforming according to different representations have also been studied [13][14][15][16][28][29][30], but here it will be sufficient for our analysis to restrict our consideration to the case of matter fields transforming according to a single representation of the gauge group.
In addition to its importance in the context of formal quantum field theory, a determination of N f,cr is important for the analysis of gauge theories with N f slightly less than N f,cr , since in choosing such a theory to study, one needs to know at least the approximate value of N f,cr .A theory with N f slightly below N f,cr has a gauge coupling that runs slowly over a large range of momentum scales, due to an approximate IR zero in the beta function, but eventually becomes large enough to produce spontaneous chiral symmetry breaking and associated dynamical breaking of the approximate dilatation invariance.As a result, these theories (often called "walking" or quasi-conformal theories) feature an approximate Nambu-Goldstone boson, the dilaton, as has been confirmed by lattice simulations [24,26,27].Since the mass of a Nambu-Goldstone boson is protected against large radiative corrections, models incorporating this physics thus have the potential to address the Higgs mass hierarchy problem [34].
Two of the conditions that have been suggested to determine the lower boundary of the conformal window in asymptotically free gauge theories are the linear critical condition (γCC), γ ψψ,IR = 1, and the quadratic critical condition, γ ψψ,IR (2 − γ ψψ,IR ) = 1 [35]- [38].As is evident from the fact that the quadratic critical condition can be rewritten equivalently as (γ ψψ,IR − 1) 2 = 0, it has a double root at γ ψψ,IR = 1 and hence is formally identical to the linear γCC.However, these two critical conditions yield different predictions for N f,cr when using, as input, a finite-order series expansion for γ ψψ,IR .In non-supersymmetric gauge theories, the quadratic condition has been found to converge faster as a function of the order to which this series for γ ψψ,IR is computed [14,15].An interesting question concerns how general this difference is; i.e., is it the case that the quadratic critical condition will also yield more rapid convergence than the linear critical condition in other theories?
In this paper we investigate this question, using as our theoretical laboratory an N = 1 supersymmetric gauge theory with gauge group G and N f pairs of massless chiral superfields Φ and Φ transforming according to the respective representations R and R of G.We take advantage of the key fact that for this theory one has exact results for γ ψψ,IR and N f,cr [39][40][41][42].
This paper is organized as follows.In Sect.II we review some relevant background concerning the N = 1 supersymmetric gauge theory and our calculational methods.Section III contains a discussion of the linear and quadratic critical conditions on γ ψψ,IR .Our calculational results on the comparison of these conditions for the supersymmetric theory are presented in Sect.IV.Our conclusions are summarized in Sect.V.
II. BACKGROUND ON THE N = 1 SUPERSYMMETRIC GAUGE THEORY AND CALCULATIONAL METHODS
In this section we briefly review some relevant background and our calculational methods.We consider a vectorial N = 1 supersymmetric gauge theory (in d = 4 spacetime dimensions) with gauge group G and matter content consisting of N f flavors of massless chiral superfields in the fundamental and conjugate fundamental representations, denoted as Φ and Φ (with color and flavor labels implicit here).In terms of component fields, the chiral superfield Φ has the decomposition where ψ is taken as a left-handed Weyl fermion, θ is an anticommuting Grassmann variable, and F is a nondynamical auxiliary field.We denote the running gauge coupling as g = g(µ), where µ is the Euclidean energy/momentum scale at which this coupling is measured, and define α(µ) = g(µ) 2 /(4π).As noted above, we restrict consideration of this theory to the range of N f where it is asymptotically free.Owing to this, its properties can be computed perturbatively in the UV limit at large µ, where α(µ) → 0. The dependence of α(µ) on µ is described by the renormalization-group (RG) beta function, 2) The argument µ will generally be suppressed in the notation.The series expansion of β in powers of α is where and b ℓ is the ℓ-loop coefficient.
We restrict here to mass-independent, supersymmetry-preserving regularization/renormalization schemes and to gaugeindependent scheme transformations.The first two coefficients in (2.3) are [43] b and [44]- [46] b where C A , T f , and C f are group invariants [47].These coefficients b 1 and b 2 are scheme-independent, while the b ℓ with ℓ ≥ 3 are scheme-dependent.With an overall minus sign extracted, as in Eq. (2.3), the condition of asymptotic freedom is that b 1 > 0, and thus N f < N f,u , where the upper bound on N f is Note that if N f = N f,u so that b 1 = 0, then the twoloop coefficient has the negative value b 2 = −12C f C A , so (with the minus sign prefactor in Eq. (2.3)) the theory is not asymptotically free.This is the reason that we require the strict inequality N f < N f,u for asymptotic freedom rather than the condition A number of additional exact results have been established about the IR phase structure of the theory [39][40][41][42].We briefly summarize some relevant properties here.For a general gauge group G and representation R, ) where the theory flows from the UV to an IR fixed point of the renormalization group.(The CW interval is also commonly called the non-Abelian Coulomb phase.) In general, the expressions in Eqs.(2.7) and (2.9) for N f,u and N f,cr are not necessarily integers.In cases where N f,u or N f,cr is not an integer, one implicitly treats it as a formal result applicable in the framework in which one generalizes N f from the non-negative integers to the non-negative real numbers.This will not be important for our present analysis, which focuses on a comparison of the relative accuracies of linear and quadratic γ critical conditions when used with finite-order perturbative anomalous-dimension inputs.However, for reference, we give some illustrative examples for the case , where the upper (lower) sign applies for S 2 and A 2 .
With b 1 > 0 for asymptotic freedom, the condition that this two-loop beta function should have an IR zero is that b 2 < 0, which is that N f > N f,b2z , where As we discussed in [48] (see also [49,50]), N f,b2z may be larger or smaller than N f,cr , depending on the chiral superfield representation R For a general gauge group G, the N = 1 theory under consideration here, with N f flavors of chiral superfields Φ and Φ in the representations R snd R, respectively, is invariant under a classical continuous global (cgb) symmetry where the first and second U(N f ) groups consist of operators acting on Φ j and Φi , respectively, with i, j = 1, ..., N f , and the R-symmetry group U(1) R is defined by the following commutation relations where the Q α and Q † α are the generators of the supersymmetry transformations (with α spinor index here).The U(1) A symmetry is anomalous, due to instantons, so the actual non-anomalous continuous global symmetry of the theory is This symmetry is exact at an IR fixed point in the conformal window.The representations of the matter chiral superfields under the gauge and global symmetry groups
SU(Nc) SU(N
are listed in Table I for the generic case in which the representation R is complex.We will focus on the gauge-invariant quadratic operator products of the "meson" type, where, as above, i and j are flavor indices and the group indices are implicit, with it being understood that they are contracted in such a way as to yield a singlet under the gauge group G.As a holomorphic product of chiral superfields, M j i is again a chiral superfield.The bilinear fermion operator product in M j i is ψi ψ j ≡ ψT i,L Cψ j L , where C is the conjugation Dirac matrix, and we use the convention of writing ψi,L and ψ j L as left-handed Weyl fermions.Because the global symmetry (2.13) is exact in the conformal window, the meson-type quadratic chiral superfields transform according to (irreducible) representations of the group G gb .The anomalous dimension of this operator is independent of the flavor indices i and j [51], so in [52] and here, we denote its value at the superconformal IRFP simply as γ M,IR .Using the fact that ψi,L = (ψ i R ) c , the fermion bilinear in Φi Φ i can be rewritten in the standard form ψi ψ i of a mass term in a non-supersymmetric theory.Denoting γ ψψ,IR as the anomalous dimension of the latter bilinear, it follows that γ M,IR = γ ψψ,IR . (2.15) A closed-form expression for the beta function of this theory was derived by Novikov, Shifman, Vainshtein, and Zakharov (NSVZ) [39]: It is convenient to introduce the notation and Thus, the conformal window is the interval One can express the anomalous dimension of an operator such as a fermion bilinear ψψ in a gauge theory as a series expansion in the squared gauge coupling, where c ℓ is the ℓ-loop coefficient.As noted above, the value of this anomalous dimension at an IRFP is written as γ ψψ,IR .The one-loop coefficient c 1 is schemeindependent, while the c ℓ with ℓ ≥ 2 are schemedependent.Physical quantities such as anomalous dimensions at an IRFP clearly must be scheme-independent.In conventional computations of these quantities, one first writes them as series expansions in powers of the coupling, as in (2.20), and then evaluates these series expansions with α set equal to α IR , calculated to a given loop order.However, a (finite-order) series expansion of this type is scheme-dependent beyond the leading terms.Scheme dependence is also present in higher-order perturbative calculations in quantum chromodynamics (QCD), and its effects have been routinely addressed in studies comparing perturbative QCD predictions with experimental data.Formally speaking, these studies were on scheme dependence in the vicinity of the UV fixed point at zero coupling in QCD.Studies of scheme dependence in the different context of an IR fixed point located away from zero coupling have been carried out in [50], [53]- [58].For perturbative series calculations of anomalous dimensions, it is desirable to use a formalism in which results calculated to each order are scheme-independent.
Since α IR → 0 as b 1 → 0 at the upper end of the conformal window, as it follows that one can reexpress the series expansion for γ ψψ,IR in terms of a variable that is proportional to b 1 , namely the scheme-independent variable [2,59] (2.21) In the present theory, Scheme-independent calculations of anomalous dimensions of various operators at an IRFP were carried out in [8]- [12] for non-supersymmetric gauge theories, and results were compared with measured values from lattice simulations.In [52] we carried out corresponding scheme-independent calculations of anomalous dimensions of several composite superfield operator products in the present N = 1 supersymmetric theory.In general, the scheme-independent series expansion for a (gaugeinvariant) operator O at an IRFP in the conformal window can be written as The truncation of this series to Thus, for the operator M we write It is convenient to define the reduced schemeindependent expansion variable and we denote y cr = 1 − x cr = 1/2 at the lower end of the conformal window.
In the conformal window, the anomalous dimension at the IRFP in the conformal window, the exact expression for γ ψψ,IR = γ M,IR , is This can be seen, for example, by solving for γ M at the IR zero of the NSVZ beta function in Eq. (2.16), which is thus γ M,IR .(Another derivation makes use of the R charges of the Φ and Φ chiral superfields, as discussed in [52].)This anomalous dimension γ M,IR can be expressed in terms of y as follows: Thus, the coefficient κ M,j in Eq. (2.25) has the value (2.31) The finite sum (2.25) was evaluated in our previous work [52], yielding Note that the numerator of the expression on the righthand side of Eq. (2.32) contains a factor (y − 1) which cancels the denominator in Eq. (2.32), so that the resulting expression is a polynomial, as is clear from its definition (2.25) or from Eq. (2.30).
In [11] we showed that, for a given N f in the conformal window, γ M,IR,∆ p f approaches the exact result in Eqs.(2.29) and (2.30) exponentially rapidly (see Eqs. (2.37)-(2.41) in [11]).We recall this result, since it is relevant here.As in [11], we define the fractional difference (2.34) Since y p = e −p ln(1/y) and 0 < y ≤ 1/2 in the conformal window, this fractional difference evidently approaches zero exponentially rapidly as a function of the truncation order, p.This is true for any value of y in the conformal window, and, as a special case, it is true in the limit y → y cr = 1/2.
III. ANOMALOUS DIMENSION CONDITIONS IN CONFORMAL WINDOW
From analyses of the Schwinger-Dyson equation for the fermion propagator, of operator product expansions, and other arguments [35][36][37][38], it has been suggested that the upper bound applies for an IRFP in the conformal window.Since γ ψψ,IR increases as one decreases N f throughout the conformal window, it follows that the lower end of this conformal regime occurs when the inequality (3.1) is saturated, i.e., when the following condition holds: That is, Eq. (3.2) determines the value of N f,cr demarcating the lower end of the conformal window.We denote Eq. (3.2) as the linear γ critical condition, denoted as LγCC.Note that this condition is in accord with the exactly known value of γ ψψ,IR = γ M,IR in the present N = 1 supersymmetric gauge theory, as is clear from the exact result (2.29).
The quadratic condition was discussed as a critical condition for fermion condensation, and its connection with the condition (3.2) was noted in [35] (see also [60]).We denote Eq. (3.3) as the quadratic γ critical condition, QγCC.As is obvious from the fact that Eq. ( 3.3) can be rewritten as (γ ψψ,IR − 1) 2 = 0, it has a double root at γ ψψ,IR = 1.
Hence, an exact solution of the quadratic equation (3.3) yields the same result as the linear condition (3.2).However, when applied in the context of series expansions such as Eq.(2.23), as calculated to finite order, the results differ from those obtained with the linear condition (3.2).This difference arises because the quadratic condition (3.3) generates higher-order terms in powers of the scheme-independent expansion variable, and leads to different coefficients of lower-order terms [14,15].In a nonsupersymmetric gauge theory with N f fermions transforming according to a single representation of the gauge group, the use of the quadratic condition (3.3) was found [14,15] to (i) show better convergence as a function of increasing order of truncation of the series (2.23) than the linear condition (3.2) and (ii) predict a larger value of N f,cr than the linear γCC.This work in [14,15] used the general results [9,10] for γ ψψ,IR,∆ p f to the highest order that we had calculated them, namely p 4.
As noted in the introduction, an interesting question that we will investigate here is whether the quadratic γCC also converges more rapidly than the linear γCC in the above-mentioned N = 1 supersymmetric gauge theory.An additional question that we will also investigate concerns whether the values of N f,cr obtained from the LγCC and QγCC approach the exact value N f,cr = 3N c from above or below.Equivalently, we will determine whether the corresponding values of x cr approach the exact value x cr = 1/2 from above or from below.It is worthwhile to mention that a rigorous upper bound on γ ψψ,IR in a conformal field theory is that [61][62][63] γ ψψ,IR ≤ 2 .
IV. CALCULATIONAL RESULTS
The linear γCC equation γ ψψ,IR − 1 = 0 with γ ψψ,IR calculated to order O(∆ p f ) inclusive is Eq.(3.2).Substituting Eq. (2.32), this becomes or equivalently, LγCC p : This LγCC p condition is a polynomial equation of degree p in the variable y, or equivalently in the variable x = 1−y.We denote the (physical) solution of the LγCC equation (4.2), expressed in terms of the variable x, as x cr,L,p .For the 1 ≤ p ≤ 3, we give the analytic solutions below, with floating-point values displayed to the indicated number of significant figures: x cr,L,1 = 0 (4.3) and Although the LγCC p condition (4.2) has p formal solutions, in each case, there is no ambiguity concerning which of these is the physical solution.For example, for p = 2, the other solution, namely x = (1/2)(3 + √ 5) = 2.618 is outside the conformal-window range, 1/2 ≤ x < 1; for p = 3, the other two solutions form an unphysical complex-conjugate pair, and so forth for higher p.
The quadratic γCC condition (3.3) with γ ψψ,IR calculated to O(∆ p f ) is (γ ψψ,IR,∆ p f − 1) 2 = 0.If one takes the square root of this equation to begin with, one simply recovers the linear γCC equation.If, instead, one evaluates terms at O(∆ p f ) resulting from the quadratic expression, then one obtains the equation where the sum S has the form where the coefficients λ j will be discussed shortly.Given an input for γ ψψ,IR calculated to O(∆ p f ), the quadratic γCC generates terms up to O(∆ 2p f ); however, for selfconsistency, one performs the corresponding truncation of terms to O(∆ p f ), since this is the accuracy of the input expressions for γ M,IR,∆ p f .For the coefficients λ j we calculate that and so forth for higher j.In general, we find that λ j contains a term 2κ j and then (a) if j is odd, a sum of terms of the form −2κ r κ j−r where 1 ≤ r ≤ (j − 1)/2, and (b) if j is even, a sum of terms of the form −2κ r κ j−r with 1 ≤ r ≤ (j/2) − 1, together with a term −κ 2 j/2 .Substituting the expression κ j = 1/(N f,u ) j from Eq. (2.31), we find and hence Calculating this sum in closed form, we obtain The numerator of the expression on the right-hand side of Eq. (4.15) contains a factor of (1 − y) 2 which cancels the factor in the denominator, so that the result is a polynomial in y, as is obvious from its definition, Eq. (4.7), or from Eq. (4.14).The resultant quadratic γCC condition, evaluated to O(∆ p f ), is Since S is a polynomial in y, it follows that S − 1 is also, and hence the expression in square brackets in Eq. (4.16) contains a factor of (1 − y) 2 , which cancels with the (1 − y) 2 in the denominator.We denote the (physical) solution of the QγCC eqution (4.16), expressed in terms of the variable x, as x cr,Q,p .As is clear from Eq.
(4.14), if p = 3, then the QγCC p condition is a polynomial equation of degree p in the variable y, or equivalently in the variable x, while if p = 3, then the coefficient of the highest-power term vanishes, so the resultant equation is of degree 2 in y.Indeed, with this cancellation, the QγCC 3 equation is identical to the QγCC 2 equation.As was the case with the LγCC p condition, although for p ≥ 2, there are several solutions, there is no ambiguity concerning which is the physical solution; for example, for p = 2, the other solution is x = 2 + √ 2 = 3.414, which is outside the conformal-window range of x.The analytic solutions to the lowest cases are 3. the fractional difference with respect to the exact value, 4. the fractional difference with respect to the next lower-order value, We see that in this theory, (i) for a given order O(∆ p f ) with p ≥ 3, the linear γCC yields a value of x cr,L,p that is closer to the exact value x cr = 1/2 than the value x cr,Q,p obtained from the quadratic γCC, so that the linear γCC yields an estimate of x cr that approaches the exact value more rapidly than the the estimate from the quadratic γCC.This is our main result.Furthermore, while the linear γCC yields a value of x cr,L,p that approaches the exact value from below, the quadratic γCC at order p ≥ 2 yields a value of x cr,Q,p that approaches the exact value from above.These findings are evident in Tables II and III.We have checked that these properties also hold at higher truncation order beyond the highest order, p = 10, shown in these tables.
Contrasting these results with those in the corresponding non-supersymmetric gauge theory, one must first recall that the value of N f,cr (depending on the gauge group G and the fermion representation R) is not known exactly, so that one cannot make a precise comparison with it.However, one can, at least, determine the fractional changes in the values of the solutions for x cr,L,p and x cr,Q,p as functions of the order O(∆ p f ) to which one has calculated γ ψψ,IR .At an IRFP in a non-supersymmetric gauge theory with fermions in one representation, the maximum order to which the scheme-independent calculations have been performed is p = 4, with results given in our Refs.[9,10].It was found in [14,15] (and confirmed in [16]), using these results for γ ψψ,∆ p f from [9,10], that the quadratic γCC converges more rapidly than the linear γCC.Thus, for p ≥ 3, the relative accuracies and convergence rates of the linear versus the quadratic γCC that we find for this N = 1 supersymmetric theory are opposite to the behavior that was found in the non-supersymmetric theory.Moreover, in the nonsupersymmetric gauge theory, the linear and quadratic γCC conditions yield estimates of N f,cr that increase as a function of the truncation order, p [14,15].This is also true for the values of N f,cr and thus x cr,L,p obtained from the LγCC p equation in the supersymmetric gauge theory studied here, i.e., x cr,L,p approaches the exact value x cr = 1/2 from below.In contrast, in this supersymmetric theory, for p ≥ 2 the value of x cr,Q,p calculated from the QγCC p equation approaches the exact value of x cr from above.
V. CONCLUSIONS
In conclusion, in this paper we have performed a comparison of the linear and quadratic critical conditions γ ψψ,IR = 1 and γ ψψ,IR (2 − γ ψψ.IR ) = 1, where γ ψψ,IR is the anomalous dimension of the fermion bilinear ψψ at an infrared fixed point in the conformal window in an N = 1 supersymmetric gauge theory with N f pairs of chiral superfields Φ i and Φi transforming according to the R and R representations of the gauge group G, respectively.This theory has the appeal that both γ ψψ,IR and the value N f,cr at the lower boundary of the conformal window are known exactly.We find that, as a function of the order O(∆ p f ) to which one uses the truncated calculation of γ ψψ,IR as input, for p ≥ 3, the linear critical condition yields an estimate of x cr = N f,cr /N f,u that is more accurate than the quadratic critical condition.This behavior is opposite to what was found for non-supersymmetric gauge theories.It should be emphasized that the use of both the linear and quadratic critical conditions with finite-order inputs for γ ψψ,IR,∆ p f are approximate perturbative methods.Thus, differences between predictions for the lower end of the conformal window obtained with these methods provide one measure of the importance of higher-order terms in the inputs, γ ψψ,IR,∆ p ′ f with p ′ > p.Studies that elucidate the properties of IR-conformal gauge theories and, in particular, the location of the lower boundary of the conformal window in these theories, are of continuing interest, both for basic quantum field theory and for possible phenomenological applications.The comparative analysis reported herein provides some further insight into predictions from different critical conditions for the lower boundary of the conformal window.
TABLE I :
Matter content of a vectorial N = 1 supersymmetric gauge theory with gauge group G and matter content consisting of N f massless chiral superfields Φ and Φ transforming according to the representations R and R, respectively.The symmetry groups correspond to those in Eq. (2.13).
It happens that the lowest-order result x cr,Q,1 is exact, but this is not generic; for p ≥ 2, the QγCC p equation yields a value of x cr,Q,p > 1/2.In TableII, we list the results of the calculations with the linear γCC with the input value of γ ψψ,IR,∆ p f for 1 ≤ p ≤ 10, yielding the LγCC p condition.TableII includes:1.the value of x cr,L,p , 2. the ratio of x cr,L,p to the exact value x cr = 1/2,
TABLE II :
(1)this table, the columns list(1)the value p specifying the order O(∆ p f ) to which the linear (L) criticality condition LγCC is evaluated, yielding the LγCCp condition (4.2); (2) the value of N f,cr /Nc calculated from this LγCCp condition, denoted x cr,L,p ;(3) the ratio r cr,L,p in Eq. (4.19); (4) the fractional difference with respect to the exact value, Diff cr,L,p in Eq. (4.20); and (5) the fractional difference with respect to the next lower-order value, Diff cr,L,p,p−1 in Eq. (4.21).The abbreviation NA means "not applicable", and the notation 0.91197e-2 means 0.91197 × 10 −2 .
TABLE III :
(1)this table, the columns list(1)the value p specifying the order O(∆ p f ) to which the quadratic criticality condition QγCC is evaluated, yielding the QγCCp condition (4.16); (2) the value of N f,cr /Nc calculated from this QγCCp condition, denoted x cr,Q,p ; (3) the ratio r cr,Q,p in Eq. (4.19); (4) the fractional difference with respect to the exact value, Diff cr,Q,p in Eq. (4.20); and (5) the fractional difference with respect to the next lower-order value, Diff cr,Q,p,p−1 in Eq. (4.21).Other notation is as in TableII. | 7,009.8 | 2023-11-09T00:00:00.000 | [
"Physics"
] |
Modulation format dependence of digital nonlinearity compensation performance in optical fibre communication systems
: The relationship between modulation format and the performance of multi-channel digital back-propagation (MC-DBP) in ideal Nyquist-spaced optical communication systems is investigated. It is found that the nonlinear distortions behave independent of modulation format in the case of full-field DBP, in contrast to the cases of electronic dispersion compensation and partial-bandwidth DBP. It is shown that the minimum number of steps per span required for MC-DBP depends on the chosen modulation format. For any given target information rate, there exists a possible trade-off between modulation format and back-propagated bandwidth, which could be used to reduce the computational complexity requirement of MC-DBP.
Introduction
Currently, over 95% of digital data traffic is carried over optical fibre networks, which form a substantial part of the national and international communication infrastructure.The achievable information rates (AIRs), a natural figure of merit in coded communication systems for demonstrating the net data rates based on soft-decision decoding [1,2], of optical fibre networks have increased greatly over the past four decades with the introduction and development of wavelength division multiplexing (WDM), advanced modulation formats, digital signal processing (DSP), improved optical fibres and amplifier technologies, which together have facilitated the communication revolution and the growth of the Internet [3,4].However, the AIRs of optical communication systems are currently limited by the nonlinear distortions inherent to transmission using optical fibres.These signal degradations are more significant in systems using larger transmission bandwidths, closer channel spacing and higher order modulation formats.Optical fibre nonlinearities have been suggested as the major bottleneck to optical fibre transmission performance [5,6].
Much research work is focused on compensating fibre nonlinearities both electronically and optically to enhance the AIRs of optical communication systems.A number of nonlinear compensation (NLC) techniques have been investigated, such as digital back-propagation (DBP), nonlinear pre-distortion, Volterra equalisation, optical phase conjugation, nonlinear Fourier transform, twin-wave phase conjugation, etc [7][8][9][10][11][12][13].Among these, multi-channel DBP (MC-DBP) has been validated as a promising approach for compensating both intrachannel and inter-channel fibre nonlinearities in WDM optical communication systems.The performance and optimisation of MC-DBP have been investigated separately in both dualpolarisation 16-ary quadrature amplitude modulation (DP-16QAM) and DP-64QAM transmission systems [14][15][16][17][18][19], where the minimum required number of steps per span (MRNSPS), an important complexity index in MC-DBP, to achieve the greatest Q 2 factors/lowest bit-error-rates (BERs) at different back-propagated bandwidths was studied.In addition, the performance of single-channel and full-field DBP with regard to the AIRs has been studied for 16-QAM and 64-QAM systems [20].However, to the best of our knowledge, no optimisations and analytical predictions for partial-bandwidth DBP were comprehensively investigated with respect to AIRs over different signal modulation formats.
In this paper, the performance and the optimisation of MC-DBP is studied with respect to both signal-to-noise ratios (SNRs) and AIRs of the compensated optical fibre communication systems, where different modulation formats, including dual-polarisation quadrature phase shift keying (DP-QPSK), DP-16QAM, DP-64QAM, DP-256QAM, are applied.Numerical simulations and analytical modelling have been carried out in a reference system of a 9channel 32-Gbaud Nyquist-spaced WDM optical communication system, with a transmission link of standard single-mode fibre (SSMF).Additionally, the MRNSPS, which is an important index for assessing the complexity of MC-DBP algorithm [7,13,14], has been studied with respect to the AIRs of the compensated communication systems.Finally, an analytical model for predicting the performance of EDC, partial-bandwidth DBP, and fullfield DBP in the optical transmission system has been implemented by considering the contribution of modulation format dependent nonlinear distortions.
It is found that the nonlinear distortions in the systems with applying full-field DBP (due to signal-noise interactions) become independent of modulation format, while the nonlinear distortions in both the electronic dispersion compensation (EDC) and partial-bandwidth DBP (mainly from signal-signal interactions) display a considerable dependence on modulation format.In the optimisation with respect to AIRs, the MRNSPS at different back-propagated bandwidths demonstrates a strong dependence on the modulation format, in comparison with the optimisation of MC-DBP based on SNRs.Also, for any given target information rate, there exists a possible trade-off between modulation format and back-propagated bandwidth which could be used to reduce the computational complexity requirement of MC-DBP.
This paper is arranged as follows.The transmission system under investigation and numerical simulation setup are described in Section 2. The analytical transmission model and mutual information theory is outlined in Section 3. Section 4 details the simulation and analytical results, and further analyses the MRNSPS and the AIRs for different modulation formats in several scenarios of interest.Finally, conclusions are drawn in Section 5.
Transmission system
Figure 1 illustrates the simulation setup of the 9-channel 32-Gbaud Nyquist-spaced superchannel optical transmission system using the following modulation formats: DP-QPSK, DP-16QAM, DP-64QAM and DP-256QAM.In the transmitter, a 9-line 32-GHz spaced laser comb is employed as the phase-locked optical carrier, and the comb lines are optically demultiplexed before the I-Q modulators [8,21].The transmitted symbol sequence in each optical channel is independent and random, and the symbol sequences in each polarisation are de-correlated with a delay of half the sequence length.A root-raised cosine (RRC) filter with a roll-off of 0.1% is used for the Nyquist pulse shaping (NPS).The SSMF is simulated based on the split-step Fourier solution of the Manakov equation with a logarithmic step-size distribution, of which the mathematical expression can be found in Eq. (6) in Ref [22].The erbium-doped optical fibre amplifier (EDFA) is employed in the loop to compensate for the loss in the optical fibre.At the receiver, the signal is mixed with an ideal free-running local oscillator (LO) laser to realise an ideal coherent detection of all in-phase and quadrature signal components in each polarisation.
In the DSP modules, the EDC is implemented using a frequency domain equaliser [23,24], and the MC-DBP is realised using the reverse split-step Fourier solution of Manakov equation with the nonlinearity compensation implemented in the middle of each segment [7,16,25].The ideal RRC filter is applied to select the desired back-propagated bandwidth for the MC-DBP, and also to remove the unwanted out-of-band amplified spontaneous emission (ASE) noise.The back-propagated bandwidths considered are from 32-GHz for the 1-channel DBP, increasing to 288-GHz for 9-channel (hereafter, full-field) DBP.The matched filter is employed to select the central channel and to remove cross talk from neighbouring channels.Finally, the SNR is estimated over 2 18 symbols to assess the performance of the central channel, and the mutual information (MI) is further computed from the SNR, as discussed in Section 3.2.All numerical simulations are implemented with a digital resolution of 18 sample/symbol.The phase noise from the transmitter and LO lasers, the frequency offset of the transmitter and LO lasers, as well as the differential group delay (DGD) between the two polarisations in the fibre are all neglected.Detailed parameters of the optical transmission system are shown in Table 1.
Nonlinear distortions in the optical communication system
Considering the contributions from ASE noise and fibre nonlinearities (optical Kerr effect), the performance of a dispersion-unmanaged optical communication system can be described using a so-called effective SNR [26,27], which after fibre propagation can be described as For dual-polarisation multi-span EDFA amplified Nyquist-spaced WDM transmission systems, the overall ASE noise exhibits as an additive white Gaussian noise coming from EDFAs at the end of each transmission span and can be expressed as [4] ( ) where N s is the total number of spans in the link, G is the EDFA gain, F n is the EDFA noise figure, hν 0 is the average photon energy at the optical carrier frequency ν 0 , and R S denotes the symbol rate of the transmitted signal.The contribution of the nonlinear distortions due to nonlinear signal-signal interaction can be described as follows [26-28] where η is the nonlinear distortion coefficient, ε is the coherence factor, which is responsible for the increasing signal-signal interaction with transmission distance.This factor lies between 0 and 1 depending on the decorrelation of the nonlinear distortions between each fibre span [26].In contrast to the 2 S S σ − term, the corresponding noise contribution due to signal-ASE interaction grows quadratically with launched power [29,30], and can be modelled as: with the distance dependent pre-factor of ς, which accounts for the accumulation of signal-ASE distortions with the transmission distance, and can be effectively truncated as [31]: In the framework of the first-order perturbation theory, the coefficient η can be defined depending on the model assumptions.In the conventional approaches, the impact of optical fibre nonlinearities on signal propagation is typically treated as additive circularly symmetric Gaussian noise [6,26,27].In particular, the effective variance of nonlinear distortions is clearly supposed to be entirely independent of the signal modulation format.However, it has been theoretically shown that the nonlinear distortions depend on the channel input symbols, and the Gaussian assumption of optical nonlinear distortions cannot, therefore, be sufficiently accurate [1,[32][33][34][35][36][37].Furthermore, significant discrepancies have been recently observed, especially for low-order signal modulation formats [33][34][35][36].Assuming that all WDM channels are equally-spaced, the modulation format dependent nonlinear distortion coefficient η over a single fibre span can be expressed in closed-form using the following approximation [38] ( ) ( ) where β 2 is the group velocity dispersion coefficient, γ is the fibre nonlinear coefficient, L eff is the effective fibre span length, L s is the fibre span length, N ch is the total number of WDM channels, ψ(x) is the digamma function, C≈0.577 is the Euler-Mascheroni constant.The constant κ is related to the fourth standardised moment (kurtosis) of the input signal constellation.For Gaussian, QPSK, 16-QAM, 64-QAM, and 256-QAM, its values are {0, 1, 17/25, 13/21, 121/200}, respectively [36,38].Finally, the coefficient η 0 quantifies the influence of optical fibre nonlinearity per fibre span under the Gaussian assumption, and can be analytically approximated as [26][27][28] ( ) where α is the fibre attenuation parameter.
It should be noted that the signal-signal interaction term in Eq. ( 1) can be completely suppressed via full-field nonlinearity compensation.However, similar to [39], when the nonlinearity compensation is partially applied by means of operating the MC-DBP over a certain bandwidth, the residual signal-signal interaction term in Eq. ( 1) can be effectively modelled as follows ( ) ( ) where (DBP) ch N denotes the number of back-propagated channels.Parameters used in the analytical model are specified in Table 2.The effectiveness of the modulation-dependent analytical model for predicting the performance of EDC and MC-DBP has been verified in previous work [36,37,[40][41][42], where detailed comparisons between the analytical and simulation results in terms of SNR have been investigated under various transmission schemes.
Mutual information and achievable information rate estimation
The discrete-time memoryless additive white Gaussian noise (AWGN) channel with complex-valued input random symbols X and output random symbols Y in each state of polarisation is given by the well-known input-output relationship: Y = X + Z, where Z denotes a zero-mean, circularly symmetric, complex-valued, Gaussian random variable (independent of X) with power 2 where the pre-factor of 2 accounts the use of two polarisation states, [ ] 1 ,..., M X X X = denotes the set of possible transmitted random symbols, C denotes the set of complex numbers, is the cardinality of the M-QAM constellation with the number of bits per symbol m.Owing to the AWGN channel assumptions, the conditional probability density function of the channel output Y given the channel input X (the channel law) is given by [1,20,43]: Y X z z y x P y x πσ σ The relationship between the symbol-wise soft-decision MI and the SNR for the AWGN channel model is illustrated in Fig. 2, where the MI is numerically computed using Gauss-Hermite quadrature [43].For a dual-polarisation Nyquist-spaced communication system, the AIR (in bit/s) can be straightforwardly evaluated from the MI as follows: ch AIR MI.
S N R = (11) Since the "true" channel of optical fibre channel is unknown, we follow the concept of mismatched decoding as described in [44].Thus, we obtain a lower bound on the MI in Eq. ( 9), assuming that the auxiliary channel is given by Eq. ( 10), leading to the following approximations for the effective variance in Eq. ( 1
Results and discussions
In this section, the performance and optimisation of MC-DBP are investigated in terms of both SNR and AIR.The transmission link is 2000 km (25 × 80 km) SSMF, unless otherwise noted.The nonlinear coefficient in the MC-DBP algorithm is always set to 1.2 /W/km, which is the same as in the optical fibre.The simulation results of SNR versus optical signal power per channel at different back-propagated bandwidths is shown in Fig. 3, where different modulation are used.All the MC-DBP algorithms are operated with 800 step/span to ensure optimal performance.It is found that the DP-16QAM, DP-64QAM, and DP-256QAM transmission systems behave almost the same, while the DP-QPSK system outperforms the other three modulation formats in the cases of EDC and up to 7-channel DBP.This shows the nonlinear distortions in the EDC and partial-bandwidth DBP case (mainly from signal-signal interactions) depend on modulation format, and the DP-QPSK system has less nonlinear distortions than other modulation formats due to less "Gaussianity" in the QPSK constellation.This is consistent with earlier work [35,36].However, in the case of full-field (9-channel) DBP, the transmission systems using all modulation formats behave the equivalently, and this demonstrates that the nonlinear distortions (now mainly from signalnoise interactions) become independent of modulation formats in the case of full-field DBP, where the signal-signal nonlinear interactions have been fully compensated.Figure 4 shows the simulation results of SNR versus number of steps per span in the MC-DBP algorithm for different back-propagated bandwidths, where different modulation formats are applied.Here the MRNSPS is defined as the minimum number of steps per fibre span to achieve the best system performance (Q 2 factor, SNR or AIR) in MC-DBP for different numbers of back-propagated channels, which is a significant indicator for evaluating the complexity of the MC-DBP algorithm [7,13,14].It can be seen that, for the same backpropagated bandwidth, the MRNSPS is the same for all modulation formats when the MC-DBP algorithm is optimised in terms of SNR performance, which is 5 steps for 1-channel DBP, 25 for 3-channel DBP, 75 for 5-channel DBP, 150 for 7-channel DBP, and 500 for 9channel (full-field) DBP.Again, when the MC-DBP is optimised in terms of SNR, the DP-QPSK system outperforms the DP-16QAM, DP-64QAM and DP-256QAM transmission systems in the cases of (linear) EDC and up to 7-channel DBP, and the systems behave the same for all modulation formats in the case of full-field (9-channel) DBP.The above discussions are based on the evaluation and optimisation of the MC-DBP algorithm in terms of SNR performance.However, in the coded transmission systems, the AIR is a more useful indicator and can demonstrate the net data rates that can be achieved by a transceiver based on soft-decision decoding.In the following work the MC-DBP algorithm will be investigated and optimised in terms of AIR.
In Fig. 5, the MRNSPS at different back-propagated bandwidths is studied with respect to the AIRs of the 9-channel 32-Gbaud Nyquist-spaced optical communication system, where different modulation formats are applied.It can be seen that for different back-propagated bandwidths, the MRNSPS in the MC-DBP algorithm depends on the order of the modulation formats, to achieve the best possible AIR.This demonstrates, for the first time to our knowledge, that the MRNSPS depends on the modulation format if MC-DBP is optimised with respect to AIR.The MRNSPS at different back-propagated bandwidths for different modulation formats is shown in Table 3.Compared to the optimisation of MC-DBP with respect to SNR, the DP-256QAM system has the same MRNSPS to achieve the best AIR for different back-propagated bandwidths.However, the MRNSPS in the MC-DBP algorithm for DP-QPSK, DP-16QAM, and DP-64QAM systems is significantly reduced with the decrease in modulation format order.
The MRNSPS in the MC-DBP optimisation with respect to SNR is shown at the bottom of Table 3.It is found that for the DP-QPSK transmission system, the MRNSPS is reduced to (or less than) 1/5 of the MRNSPS in the MC-DBP optimisation in terms of SNR, which means the conventional optimisation actually overestimates the system requirements.This has an implication for practical optimisation of the MC-DBP algorithm that the complexity of MC-DBP may be reduced considerably if it is re-optimised with respect to AIR, depending on the modulation format applied in the communication system.Corresponding to Fig. 2, Fig. 6 shows the simulation results of AIRs versus optical signal power in the 9-channel 32-Gbaud Nyquist-spaced optical fibre communication system using different modulation formats at different back-propagated bandwidths, where all the MC-DBP algorithms have been operated with 800 step/span to ensure optimal performance.Firstly, it can be found that the highest gain in terms of AIRs for the full-field (9-channel) DBP comes from the highest order of modulation formats (DP-256QAM), which is 1.34 Tbit/s from 2.86 Tbit/s at −2 dBm in EDC case to 4.20 Tbit/s at 6.5 dBm in the 9-channel DBP case.It is also found in Fig. 6 that the AIR of the DP-256QAM system using 7-channel DBP exceeds the DP-64QAM system using full-field (9-channel) DBP when the optical power is less than 3.5 dBm, and also exceeds the AIR of the DP-16QAM and DP-QPSK systems using full-field DBP.This implies that, to achieve a given target AIR, there could be a compromise in the selection between the modulation format and the back-propagated bandwidth, depending on the permissible complexity of the system implementation.This effect will depend on the transmission distance and an analytical prediction based on the modulation format dependent noise model is implemented to assess the system performance at different transmission distances.
As discussed in Section 3, the analytical model for predicting the AIRs of the 9-channel 32-Gbaud Nyquist-spaced optical transmission system using EDC, partial-bandwidth DBP and full-field DBP has been implemented based on the modulation format dependent noise model, where different modulation formats including DP-QPSK, DP-16QAM, DP-64QAM and DP-256QAM are applied.The prediction of AIR at the optimum launch power for different back-propagated bandwidths according to this analytical model and the simulation results are both shown in Fig. 7.The '0' value the number of back-propagated channels refers to the EDC-only case, and the dots inside the hollow markers of all EDC cases are the analytical predictions.It is found in Fig. 7 that a good agreement has been achieved between the analytical model and the numerical simulations for all modulation formats, in the cases of both EDC and MC-DBP.The analytical model is further applied to predict the AIRs of the 9-channel 32-Gbaud Nyquist-spaced optical communication systems with different transmission distances, where the use of EDC, partial-bandwidth DBP and full-field DBP has been taken into consideration.The system is estimated under transmission over SSMF with a uniform span length of 80 km.The prediction of AIRs versus transmission distances is illustrated in Fig. 8.It can be seen that, at the transmission distance of 2000 km the DP-256QAM system using 7-channel DBP outperforms the DP-64QAM using full-field (9-channel) DBP, and at the transmission distance of 1200 km the DP-256QAM system using 3-channel, 5-channel and 7-channel DBP all outperforms the DP-64QAM system using full-field (9-channel) DBP.This offers a tradeoff on the selection of modulation formats and back-propagated bandwidths to achieve a target AIR.For longer transmission distances, e.g., over 2500 km, only the full-field DBP of DP-256QAM system can outperform the AIR of DP-64QAM system using full-field DBP.In addition, it can be found that for the DP-QPSK modulation format, the system using EDC, partial-bandwidth DBP and full-field DBP shows the same AIR, with the transmission distance up to 6000 km.This means that in an ideal system (no transceiver SNR limitations) the application of DBP is not necessary for DP-QPSK transmission to compensate fibre nonlinearities for enhancing the AIR, when the distance is less than 6000 km.
Although the above discussions are analysed based on ideal transmission systems, where the transceiver SNR penalty, DGD, and laser phase noise have been neglected, this paper gives an insight into the optimisation of MC-DBP, as well as in the selection of modulation formats and back-propagated bandwidths when designing practical optical communication systems using digital nonlinearity compensation techniques.The ideal implementations of both optical transmission systems and MC-DBP have been applied in order to investigate the dependence on modulation formats, when an optimum system performance is obtained.The use of 800 step/span in MC-DBP is to ensure the optimal performance of nonlinearity compensation.However, in commercial communication systems, a more feasible and promising value for the number of step per span in the application of MC-DBP is generally considered as 1 step/span [13].From Fig. 4, it can be found that in such case only 1-channel DBP can give improvement in the system performance compared to the EDC case, which is consistent with our previous work [16].The performance of optical communication system using the 1 step/span 1-channel DBP is further investigated in terms of SNRs (in Fig. 9) and AIRs (in Fig. 10).Modulation formats of DP-QPSK, DP-16QAM, DP-64QAM, and DP-256QAM are applied, and the transmission distance is again 2000 km.It can be found in Fig. 9 that, similar to the ideal MC-DBP, the DP-16QAM, DP-64QAM, and DP-256QAM systems still behave almost the same, while the DP-QPSK system outperforms the other modulation formats.In Fig. 10, the use of the 1 step/span 1-channel DBP can only produce a small improvement (~0.05 Tbit/s) on the optimum AIRs in the DP-64QAM and DP-256QAM systems, which is from 2.83 Tbit/s at −2 dBm (EDC) to 2.88 Tbit/s at −1.5 dBm (1-channel DBP) in DP-64QAM system, and is from 2.86 Tbit/s at −2 dBm (EDC) to 2.91 Tbit/s at −1.5 dBm (1-channel DBP) in DP-256QAM system.Some research was also carried out to further reduce the complexity of DBP to 1 step per whole link, which is called single-step DBP.The performance of single-step DBP has been investigated in the DP-QPSK optical transmission system using an enhanced split-step Fourier method [45,46].The performance dependence of modulation format for this singlestep DBP approach will be investigated in our future work.
Conclusions
In this paper, the performance and the optimisation of MC-DBP was investigated with respect to both SNR and AIR of the optical fibre communication systems using different modulation formats, such as DP-QPSK, DP-16QAM, DP-64QAM, and DP-256QAM.Numerical simulations and analytical modelling have been carried out in a 9-channel 32-Gbaud Nyquistspaced SSMF optical communication system.Simulation results show that the nonlinear distortions in the system using full-field DBP (from signal-noise interactions) behave independent of modulation formats, while the nonlinear distortions in the cases of EDC and partial-bandwidth DBP (mainly from signal-signal interactions) still show considerable dependence on modulation format.The minimum required number of steps per span in the MC-DBP algorithm has been studied with respect to the AIR of the compensated communication systems.It has been found that the MRNSPS at different back-propagated bandwidths shows a strong dependence on the modulation format, in contrast to the analysis that the MRNSPS at different back-propagated bandwidths is independent of modulation format when the MC-DBP is optimised in terms of SNR.In addition, the analytical model for predicting the performance of EDC and MC-DBP in the optical transmission system has been carried out based on the analysis of modulation format dependent nonlinear distortions.Good agreement has been achieved between the analytical predictions and numerical simulations.According to the analytical prediction, to achieve a given AIR, there would exist a possible trade-off between the modulation format and back-propagated bandwidth, according to practical limitations.Also in an ideal system, DBP is not necessary to compensate the fibre nonlinearities for enhancing the AIR in DP-QPSK transmission, when the distance is less than 6000 km.
Although the analysis and discussions were carried out based on the scheme of ideal transmitter and DSP to study the performance dependence on modulation formats, this paper gives an insight into the optimisation of MC-DBP, as well as in the selection of modulation formats and back-propagated bandwidths for designing practical optical communication systems using digital nonlinearity compensation techniques.
In addition, to analyse a more practical case, the application of 1 step/span in the DBP has also been studied in the Nyquist-spaced optical communication system.It is found that only 1-channel DBP can give improvement in the system performance compared to the EDC case.At the transmission distance of 2000 km, only a small improvement (~0.05 Tbit/s) on the optimum AIRs can be obtained in the DP-64QAM and DP-256QAM systems.
Fig. 1 .
Fig. 1.Schematic of the 9-channel Nyquist-spaced optical fibre communication system using EDC and MC-DBP.(NPS: Nyquist pulse shaping, PBS: polarisation beam splitter, PBC: polarisation beam combiner, LO: local oscillator, ADC: analogue-to-digital convertor, MI: mutual information) where P is the average optical power per channel, 2 ASE σ represents the total power of ASE noise within the examined channel due to the optical amplification process, 2 S S σ − represents the distortions due to signal-signal nonlinear interactions, and 2 S N σ − represents the distortions from signal-ASE noise interactions.
where [ ] Ε ⋅ stands for the mathematical expectation operator.Then, the SNR for the AWGN channel is defined as represents the average transmit power.The dual-polarisation symbolwise soft-decision mutual information (MI) for a discrete QAM signal input distribution can be numerically computed by its definition as
Fig. 2 .
Fig. 2. Theoretical symbol-wise soft-decision MI versus SNR per symbol for dual-polarisation systems using different modulation formats under the assumption of a Gaussian channel law.
Fig. 3 .
Fig. 3. Simulation results of SNR versus optical launch power at different back-propagated bandwidths for different modulation formats.
Fig. 4 .
Fig. 4. Simulation results of SNR versus number of steps per span in MC-DBP at different back-propagated bandwidths for different modulation formats.
Table 3 .MRNSPSFig. 6 .
Fig. 6.Simulation results of AIR versus optical launch power at different back-propagated bandwidths for different modulation formats.The transmission distance is 2000 km.
Figure 7 also indicates that DBP provides no gain in terms of AIR at the optimum power in the DP-QPSK and DP-16QAM optical communication systems at the transmission distance of 2000 km.This is because the MIs at the optimum SNR in the case of EDC have already reached their maximum values in Fig. 2, for the 2000 km DP-QPSK and DP-16QAM transmission systems.
Fig. 7 .
Fig. 7. Analytical and simulation results of AIRs (at optimum optical launch power) versus back-propagated number of channels for different modulation formats.Transmission distance is 2000 km.S: simulation results, T: theoretical model.
Fig. 8 .
Fig. 8. Analytical prediction of AIRs versus transmission distances at different backpropagated bandwidths for different modulation formats.
Fig. 9 .
Fig. 9. Simulation results of SNR versus optical launch power using 1-channel DBP for different modulation formats.(The transmission distance is 2000 km, and the 1-channel DBP is applied with 1 step/span).
Fig. 10 .
Fig. 10.Simulation results of AIR versus optical launch power using 1-channel DBP for different modulation formats.(The transmission distance is 2000 km, and the 1-channel DBP is applied with 1 step/span). | 6,197.8 | 2017-02-20T00:00:00.000 | [
"Engineering",
"Physics"
] |
Fast ionic conduction in semiconductor CeO2-δ electrolyte fuel cells
Producing electrolytes with high ionic conductivity has been a critical challenge in the progressive development of solid oxide fuel cells (SOFCs) for practical applications. The conventional methodology uses the ion doping method to develop electrolyte materials, e.g., samarium-doped ceria (SDC) and yttrium-stabilized zirconia (YSZ), but challenges remain. In the present work, we introduce a logical design of non-stoichiometric CeO2-δ based on non-doped ceria with a focus on the surface properties of the particles. The CeO2−δ reached an ionic conductivity of 0.1 S/cm and was used as the electrolyte in a fuel cell, resulting in a remarkable power output of 660 mW/cm2 at 550 °C. Scanning transmission electron microscopy (STEM) combined with electron energy-loss spectroscopy (EELS) clearly clarified that a surface buried layer on the order of a few nanometers was composed of Ce3+ on ceria particles to form a CeO2−δ@CeO2 core–shell heterostructure. The oxygen deficient layer on the surface provided ionic transport pathways. Simultaneously, band energy alignment is proposed to address the short circuiting issue. This work provides a simple and feasible methodology beyond common structural (bulk) doping to produce sufficient ionic conductivity. This work also demonstrates a new approach to progress from material fundamentals to an advanced low-temperature SOFC technology. The performance of non-doping ceria used in solid oxide fuel cells for generating electricity has been improved by modifying its surface. A logical design of non-stoichiometric CeO2-δ was in-situ formed by fuel cell to make ions, e.g., the oxygen ion conducted through the pathway built in ceria surface for the electrolyte to realize the fuel cell reactions, enabling an electrical current to flow. Optimizing the properties of the electrolyte is vital for maximizing the efficiency of the fuel cell. Baoyuan Wang and Bin Zhu from Hubei University, Wuhan, China and coworkers from China, Germany and Sweden set out to improve the electrical conductivity of the surface on non-doping ceria, an oxide of the rare earth metal cerium succeeded in excellent electrolyte functions. The modified surface states created new electrical pathways useful for fuel cell applications. This study highlights a new methodology to develop electrical property of CeO2 without doping based on characteristic surface defects. The CeO2 surface approach presented in this work addresses the electrolyte material challenge faced by solid state oxide fuel cells (SOFCs) over 100 years. In our approach, we take advantage of the energy band structure and surface defect to develop new functional electrolyte material based on non-doped ceria. The oxygen vacancies and defects in surface state of the CeO2 result in new electrical and band properties, thus giving rise in superionic conduction for successful SOFCs application.
Introduction
Surface/interface structures are found to play a vital role in producing exceptional material properties. For example, topological insulators with an insulating core and electron conducting surface 1-3 displayed unique electrical conducting properties. The interface between two insulating oxides can produce superconductivity 4,5 . In addition, semiconductor/ion conductor heterointerfaces, such as YSZ/SrTiO 3 6,8,10 and Ce 0.8 Gd 0.2 O 2-δ -CoFe 2 O 4 11 composites, can enhance the ionic conductivity through two material interfaces by several orders of magnitude [6][7][8][9][10] .
These extraordinary properties on surfaces or at interfaces indicate a new strategy to develop material functionality. Thus, a new emerging approach for oxide interfaces was established 12,13 . By tuning the electronic states, oxygen ion conducting properties can be modified at interfaces 14 .
Ceria (CeO 2 ) has attracted extensive interest and demonstrated multifunctionality in many fields, such as catalytic applications [15][16][17] , solar cells and photoelectrochemistry [18][19][20] , lithium batteries 21,22 , fuel cells [23][24][25][26] and a variety of other energy-related applications 25,26 . The most important characteristic of ceria is the capacity to store and release oxygen via facile Ce 4+ /Ce 3+ redox cycles, which largely depends on the concentration and types of oxygen vacancies in the lattice as well as surface structures and states. Unique physical properties are associated with Ce 3+ ions and oxygen vacancies. Especially from the nanoscale perspective, non-stoichiometric oxygen atoms are present at the grain boundaries or surface, and these concomitant vacancies play an important role in determining the various chemical and physical properties of ceria. The surface state is fundamental 27,28 and demonstrates significantly different physical and chemical properties when compared to those of the bulk matrix. The role of vacancy dynamics may be very important at interfaces and on surfaces because of the high mobility and redistribution of charged vacancies 29 . Ceria can be easily reduced from CeO 2 to CeO 2−δ through surface reduction at low oxygen partial pressures. The changes in surface oxygen vacancies often dramatically alter material physical and electrochemical properties, especially when the ceria particle size is less than 100 nm.
It is well known that CeO 2 itself is an insulator. To improve the ionic conductivity of cerium-based oxides, aliovalent doping with rare earths and alkaline cations, such as Gd, Sm, Ca and La, introduces oxygen vacancies in the lattice as charge compensating defects and increases the ionic conductivity, where the highest level of oxide-ion conductivity was reported for Gd-and Smdoped Ce 1-x M x O 2-δ (M = Gd and Sm) 30,31 . Although extensive efforts have been made to utilize doped ceria as an alternative electrolyte in solid oxide fuel cells (SOFCs), several critical challenges have hindered practical application of this material as reported extensively in the literature. (i) Ceria-based electrolytes under fuel cell conditions are reduced by H 2 , which can be accompanied by significant electronic conductivity to futher deteriorate the open-circuit voltage (OCV) and power output 32 . (ii) Once the ceria size is in nanometer scale, the electronic conduction is dominant; e.g., an enhancement of four orders of magnitude in the electronic conductivity was observed for CeO 2 when the particle size transitioned from the micro-to nanoscale 33 . There were two approaches published in Nature in 2000 for high ionic conduction that were based on structural doping 34 and surface mechanisms 35 . Doping to create bulk ionic conduction in a material is a central methodology in SOFC material research and development. However, alternative materials that can replace YSZ have not yet been successful; on the other hand, the surface approach has not been seriously developed in the current SOFC framework. This study highlights a new conceptual method to develop high electrical conductivity in CeO 2 without doping based on characteristic surface defects (Ce 3+ , oxygen vacancies and superoxide radicals) combined with band energy alignment to avoid the formation of short circuits.
The CeO 2 surface approach presented in this work addresses challenges based on recent scientific understanding and results achieved on this material. In our approach, we take advantage of the ceria electronic conduction and surface defects for the successful demonstration of new advanced SOFC materials and technologies. Through simple heat treatment processes, we created different surface defects and electrical properties to investigate the correlation between the conductivities and surface state of the CeO 2 . The presence of oxygen vacancies and defects on the CeO 2 surface resulted in new electrical and band gap properties and successful SOFC application. Our study presents a new design concept for both materials and devices that will have a great impact on the next generation of advanced SOFCs.
Experimental section
Synthesis of CeO 2 powder CeO 2 powders were prepared using the wet chemical precipitation method. In a typical synthesis procedure, 5.43 g cerium nitrate hexahydrate (Ce (NO 3 ) 3 ·6H 2 O) and 1.98 g ammonium bicarbonate (NH 4 HCO 3 ) were separately dissolved in 200 ml deionized water under magnetic stirring. Then, the NH 4 HCO 3 solution was used as the deposition agent and poured slowly (10 ml min −1 ) into the Ce(NO 3 ) 3 ·6H 2 O solution, which was stirred for 2 h and statically aged for 12 h at room temperature. Following filtration, the material was washed with deionized water to remove any possible ionic remnants and then a pure CeO 2 precursor was obtained. The CeO 2 precursor was dried at 120°C for 24 h and calcined in air at 900°C for 4 h to obtain CeO 2 powder.
Characterization
The X-ray diffraction (XRD) patterns of the as-prepared CeO 2 samples were analyzed to determine the crystallographic phases via a Bruker D8 X-ray diffractometer (XRD, Germany, Bruker Corporation) operating at 45 kV and 40 mA with Cu Kα radiation (λ = 1.54060 Å). The morphology of the samples were investigated using a JSM7100F field emission scanning electron microscope (FESEM, Japan) operating at 15 kV. To further characterize the microstructures, scanning transmission electron microscope (STEM) was performed on a JEOL ARM-200CF field emission microscope with a probe corrector and Gatan imaging filter (GIF) electron energyloss spectrometer (EELS) operating with an accelerating voltage of 200 kV. Collection semi angle of 57.1 mrad was used to record the EELS line scan. The high angle annular dark field (HAADF) image was simulated using a multislice method implemented in QSTEM image simulation software. Ultraviolet photoelectron spectroscopy (UPS) measurements were performed to obtain the valence band level. The UV-vis diffused reflection spectra of the materials were tested on a UV3600 spectrometer (MIOSTECHPTY Ltd.).
Cell construction and measurement
The devices used for measurements were constructed using 0.2 g CeO 2 powder sandwiched between two thin layers of LiNi 0.8 Co 0.15 Al 0.05 O 2 semiconductor pasted on nickel foam and pelletized at room temperature under a hydraulic press pressure of 200 MPa to obtain a simple symmetric configuration device of (Ni)NCAL/CeO 2 / NCAL(Ni). Two nickel foam acted as current collectors. The device was shaped as 13 mm in diameter, around 1.0 mm in thickness with an effective area of 0.64 cm 2 . Pure hydrogen and ambient air were supplied to each side of the cells as fuel and oxidant, respectively. The flow rates were controlled in the range of 80-120 ml min −1 for H 2 and 150-200 ml min −1 for air under 1 atm. To analyze the cell performance, the voltage and current readings were collected using a programmable electronic load (IT8511, ITECH Electrical Co., Ltd.) to plot the I-V and I-P characteristics.
Electrochemical impedance spectroscopy (EIS) was carried out by using an electrochemical workstation (Gamry Reference 3000, USA) in both air and fuel cell operation atmospheres, and the frequency ranged from 0.1 Hz to 1 MHz with an amplitude of 10 mV.
Results and discussion Figure 1 shows the XRD pattern of the CeO 2 powder synthesized at 900°C for 4 h compared with that of CeO 2 reduced in H 2 at 550°C for 1 h (R-CeO 2 ). The patterns exhibit the same fluorite structure. However, a shift towards lower angle is observed for R-CeO 2 sample in the expanded XRD pattern, as shown in the inset. The lattice parameters for the CeO 2 powder and R-CeO 2 calculated by Scherrer equation were 5.403 Å and 5.452 Å, respectively, suggesting a slight CeO 2 local lattice expansion. The XRD analysis indicates that i) the CeO 2 obtained at 900°C had a normal lattice structure that agreed with the standard lattice parameter of 0.5410 nm indicated in the JCPDS card; and ii) the hydrogen treatment led to a reduction of Ce 4+ to Ce 3+ , thereby causing lattice structural changes, i.e., the lattice expanded significantly from 5.403 to 5.452 A. The large Ce 3+ radius can bring about lattice expansion by forming non-stoichiometric CeO 2-δ in the CeO 2 fluorite structure within tolerance limitations. The effect is similar to that of large Sm 3+ and Gd 3+ rare earth ions doping into CeO 2 and cause corresponding lattice expansion.
Along with the production of Ce 3+ , oxygen vacancies are also created in the CeO 2 lattice. This process can be described by: This is a fundamental way to improve CeO 2 electrical properties. CeO 2 has stoichiometry valence of Ce 4+ which is located in the grain interior 36 . Hydrogen treatment leads to chemical defects on the CeO 2 particle surfaces. This process can be deemed as a surface doping process due to the replacement of Ce 4+ by Ce 3+ . The introduction of oxygen vacancies and accompanying large-sized Ce 3+ ions leads to a distortion of the local symmetry and results in an increased lattice expansion, thus causing strain and surface stresses. This speculation can be directly observed in the high-resolution STEM images and is discussed in the next section. The hydrogen reduction process, e.g., during fuel cell operation or at a low oxygen partial pressure, starts from CeO 2 particle surfaces and approaches to the bulk, it is reasonable to consider a different surface state from the bulk to be further characterized in the following sections. The production of Ce 3+ in CeO 2 can have the same effect as trivalent rare earth ions, e.g., replacing Ce 4+ with Sm 3+ or Gd 3+ , that are doped in CeO 2 to cause CeO 2 a lattice expansion. We noticed that Ce 3+ ions have an ionic radius of 1.03 Å that is larger than the value of Ce 4+ (0.92 Å), Gd 3+ (1.05 Å) and Sm 3 + (1.08 Å), respectively. Therefore, producing Ce 3+ in CeO 2 may result in doping effects, similar to Sm 3+ and Gd 3+ , and impact not only the lattice but also the ionic conductivity. However, it should be noted that Ce 3+ is on the surface, while Sm 3+ and Gd 3+ are doped in the bulk 37 .
We adjusted the synthesis conditions and found that the sintering temperature can play a role in determining the microstructure and electrical properties of as-prepared ceria. The detailed work with regard to sintering temperature was added in supplementary information (SI) and can be described as follows. The XRD patterns of the CeO 2 powder sintered at various temperatures are presented as Fig. S1 in the SI. The results can be summarized: i) different temperatures led to the same fluorite structure; ii) the CeO 2 crystallinity was enhanced with the sintering temperature; iii) the lattice constant decreased as the sintering temperature increased, indicating a change in the Ce 3+ /Ce 4+ ratio. Upon increasing the sintering temperature from 500 to 900°C, the lattice parameters decreased correspondingly from 5.416 to 5.403 Å, which was deduced from the XRD patterns. This may be due to Ce ions not being fully oxidized at low sintering temperature, i.e., some Ce 3+ coexisted with Ce 4+ . The large Ce 3+ can expand the ceria lattice, while sintering at increased temperatures can fully oxidize the Ce ions, converting Ce 3+ to Ce 4+ , and lead to a normal lattice constant that agrees with standard JCDPS data. Figure S2 shows the morphology evolution of the CeO 2 powder with sintering temperature through SEM characterization. A clear trend is discernible indicating that the grain size increased with the sintering temperature from several nanometers (500°C) to 200-300 nm (1000°C), which is closely related to the electrical conductivity and activation energy of ceria. The particular sintering temperature resulted in the formation of nanoscale CeO 2 , and the size effect possibly extended the interfacial area accompanying with a reduced enthalpy of defect formation on the CeO 2 crystallizes and caused a high oxygen deficiency on the ceria particle surfaces, significantly enhancing the electrochemical performance of the cells. While focusing on low-temperature (<600°C) SOFC electrolyte applications, we carefully optimized the synthesis conditions and fixed the sintering temperature at a sufficiently high temperature of 900°C for 4 h to address the material stability and produce excellent electrochemical performance. Figures 2a, b display the morphological change of the CeO 2 particles before and after fuel cell measurements. The original CeO 2 particles displayed a spherical shape with a 20-200 nm size distribution, and some pores were observed in the electrolyte layer, but the pores were enclosed without penetrating through the CeO 2 -electrolyte membrane. After the FC measurements, the gaps between the particles were filled, and the CeO 2 electrolyte layer presented a fair density and good gas-tightness, thus ensuring that the assembled cells possessed high OCVs (above 1 V) and excellent power outputs (see the cell performance section below) compared with conventional cells based on a dense doped ceria electrolyte. Figure 3a shows the HAADF-STEM image for individual CeO 2 particles reduced by H 2 for 2 h. The particle was an irregular sphere with a diameter of 190 nm. The energy dispersive X-ray spectrometer (EDXS) mapping of the main elements using O-k and Ce-L 3,2 lines for the CeO 2 particle is shown in Fig. 3b, c, and Fig. 3d is the survey image, which indicates an almost uniform element distribution throughout the entire particle. Figure 3e and f shows the atomically resolved HAADF-STEM image for the reduced CeO 2 particles. A high-resolution image is displayed in Fig. 3d, showing the atomic arrangement. An atomic structure model of the cubic phase of the CeO 2 along the [211] projection and a simulated HAADF image are superimposed on the HAADF image.
To further investigate the surface state of the reduced CeO 2 particles, the high spatial resolution of aberrationcorrected STEM combined with EELS analysis were allowed to detected the valence variations of superficial Ce at the atomic scale. Figure 4a displays the particle area for EELS analysis and the blue arrow indicates the line scan direction. The plot of EELS scan signal from surface (point A) to grain interior (point B) was presented in Fiure 4b. The Ce M 5 /M 4 ratio is sensitive to the chemical state of Ce; therefore, the oxidation state of Ce can be determined quantitatively from the M 5 /M 4 ratio using the positive part of the second derivative of the experimental spectra. Figure 4c gives the Ce M5,4 edges extracted from particle surface and 20 nm away from the surface. The resultant intensity ratios are listed in the inserted table. It can be found that the M 5 /M 4 ratio on the surface is higher than that of the interior grain. As reported, a small M 5 /M 4 ratio corresponds to Ce 4+ and a large ratio is associated with Ce 3+ . Therefore, Ce 3+ was produced on the surface of the CeO 2 particle, indicating the formation of a thin layer of oxygen-deficient CeO 2-δ on the surface. EELS measurements confirmed the presence of oxygen vacancies on the particle surface. When a neutral oxygen vacancy is formed, two electrons are left behind. It is generally accepted that these electrons are localized in the f-state of the nearest Ce atoms 38 , which changes their valence state from + 4 to + 3. In other words, the presence of Ce 3+ could be evidence of oxygen vacancy formation, which significantly improves ion conductivity on the surface. Therefore, the stoichiometric CeO 2 in the interior grains is an insulator, and the oxygen-deficient CeO 2-δ on the surface possesses promising electrical conducting properties. Therefore, a novel CeO 2-δ @CeO 2 structure with a topological configuration, i.e., an insulating core and high conducting shell, was formed as illustrated in Fig. 4d. This evidence clearly indicates that there is strong ionic conductivity for the reduced CeO 2 , which was reflected by the great power output of the fuel cells assembled from pure CeO 2 . In fact, it was reported that a surface layer of CeO 1.5 is formed on the nano-CeO 2 particles, and the CeO 1.5 fraction presented a significant increase when the particle sizes were below 15 nm and reached up to 90% at 3 nm 39 .
The electron core level XPS spectra of the reduced and as-prepared CeO 2 were obtained to peer the chemical composition and valance states of the elements. Figure 5a shows the Ce 3d spectra collected from the as-prepared CeO 2 . The spectrum is composed of two multiplets identified as V and U. These multiplets correspond to the spin-orbit split 3d 5/2 and 3d 3/2 core holes. The u″′ and v″′ peaks with a high binding energy indicate the final state of Ce 3d 9 4f 0 O2P 6 , and the peaks labeled as u, v, u″ and v″ with a low binding energy are attributed to the Ce 3d 9 4f 2 O2P 4 and Ce 3d 9 4f 1 O2P 5 final states. The six characteristic peaks can be indexed as the Ce 3d spectrum for Ce 4+ , which is consistent with previous reports 40,41 . Besides the six characteristic peaks of Ce 4+ , three extra peaks marked as u 0 , u′ and v′ appeared in the Ce 3d XPS of the R-CeO 2 sample as Fig. 5b shown, demonstrating the existence of the Ce 3+ oxidation state. The energy split between the v and v′ peaks is~3.0 eV, which is close to the value observed for the Ce 3+ compound 42 . Figure 5c, d presents the O1s XPS spectrum of the as-prepared and R-CeO 2 , which delivered an asymmetric feature that could be deconvoluted into different symmetrical signal. The spectrum for the as-prepared CeO 2 sample is fitted by two peaks centered at 529.5 eV and 530.5 eV, which are attributed to the lattice oxygen (marked as O I ) and surface adsorbed oxygen (marked as O II ), respectively. For the R-CeO 2 sample, the asymmetric O1s spectrum is deconvoluted into three peaks denoted as O I , O II and O III . The new peak (O III ) with a higher binding energy is related to the presence of the oxygen vacancies, possibly due to the existence of Ce 3+ produced by H 2 reduction, which is crucial for ionic conduction and dominated the electrochemical performance of the assembled fuel cell.
This surface layer of the core-shell structure was further characterized to understand the boundaries and buried interface effects on the ionic conduction origin and enhancement. We carried out more careful characterization to identify and determine the tension of the grain boundaries with agglomerated CeO 2-δ particles through STEM in combination with the EELS, as shown in Fig. 6. The HAADF images a and b show that the particle size was in the range of 10-200 nm. It is clear that all of the particles closely contacted each other, and the color of the interface region between the particles is different from the interior of the particles, indicating stress accumulation at the interface. As Fig. 6c shows, neither disordered nor amorphous structures are present at the grain boundaries, indicating that the boundaries are successfully joined at the atomic level.
In Fig. 6, there are two direct pieces of evidence to support the interfacial conduction mechanism. (i) First, an analysis of the stress was carried out at the interfaces. As shown in Fig. 6a, b for the HAADF images, the contrast is bright at the interfaces, which indicates that there was an accumulation of stress. (ii) An emphasis is paid on analysis of the valency state at the interfaces. The Ce valence state changes were extracted using Ce-M 5,4 edges in atomic resolution, as shown in Fig. 6e. Both the chemical shift and white-line ratio methods and analyses prove that there was an~1.5 nm buried interface, where Ce was in the 3 + valence state and is highlighted in red in Fig. 6d. This implies that oxygen vacancies were created in the buried interfaces. Because accompanying with the Ce 3+ formation, the oxygen vacancy generation process can be described by equation (1). The experimental evidence indicates that the surface and grain boundaries play a dominant role in ionic transport; it is well understood and reported in the literature 43-45 that stress and tension generate vacancies at interfaces to promote ion transport.
The electrical behavior of the CeO 2 pellets was examined by EIS analysis under air and H 2 /air FC environment at a device measuring temperature of 550°C. The EIS results are shown in Fig. 7. To understand the EIS behavior in more detail, a simulation was carried out using the equivalent circuit models of R o (R 1 -QPE 1 )(R 2 -QPE 2 ) (Insets of Fig. 7), where R is the resistance and QPE represents the constant phase element. The highfrequency intercept on the Z'-axis, as shown in the enlarged inset, reflects the entire ohmic resistance of the device, including the resistance of CeO 2 bulk, electrodes and connecting wires. Both EIS results were characterized as a semicircle followed by an inclined line, and the flat semicircle in the medium-frequency region could be superimposed by two standard semicircles. One is attributed to the grain boundary/surface effect in the middle-frequency range, and the other is due to the charge-transfer impedance on the electrode/CeO 2 interface. In addition, the inclined line in the low-frequency region corresponds to the ion-diffusion process in the electrode process. These processes also commonly exist in fuel cells based on doped ceria electrolytes 46 . It can be seen clearly that the CeO 2 based device under an air atmosphere exhibited a typical ion conducting nature and low conductivity, which are reflected by a large semicircle due to grain boundary and charge transfer processes. In the FC condition, the device immediately shows a mixed electron-ion conducting behavior and a rapid decrease in resistance of more than two orders of magnitude (see the inset of Fig. 7). The diameter of the semicircle in the medium-frequency region for the CeO 2 under FC conditions is much smaller than that of the CeO 2 device in air, indicating much lower grain boundary and chargetransfer resistances. The result suggests that the The fitting results show that the total electrical conductivity of the CeO 2 in air was low,~10 −4 s/cm, as estimated from the EIS result in air; in contrast, a drastic change occurred to bring about a high conductivity state under FC conditions, and the conductivity exceeded 10 −1 s/cm. In addition, the obtained capacitances displayed low values, revealing the ratedetermining processes in the fuel cell. This result is consistent with the reduction of the Ce 4+ to Ce 3+ in the H 2 /air environment. This occurred because H 2 can reduce CeO 2 and form large oxygen vacancies, further resulting in significant enhancement of both oxygen ion and electronic conduction.
The oxygen deficient layer on the surface can function as an oxygen ion transport pathway and significantly dominate the charge conduction, especially the grain boundary conductivity, which was deduced from the EIS plot of the pellet. To further verify the surface conduction, we specifically separated the grain boundary resistance from the EIS results and converted the resistance to conductivity by using the pellet dimensions. Figure 8 shows the grain boundary conductivity (σ gb ) of CeO 2 as a function of temperature obtained in air and H 2 /air atmospheres. The noteworthy point is that the σ gb obtained in the H 2 /air atmosphere was significantly higher than that in the air, possibly due to the formation of an oxygen-deficient layer on the particle surface under the H 2 /air atmosphere, which provided a pathway for oxygen transport to significantly enhance σ gb .
Based on the excellent electrical properties, the CeO 2 samples were used as the electrolytes for fuel cells, and the cell performances are shown in Fig. 9. It can be seen clearly that high OCV values (1.0 to 1.12 V) and power outputs (140-660 mW/cm 2 ) was achieved at operational temperatures of 400-550°C. To verify the reproducibility of the performance, we fabricated 8 cells from non-doped CeO 2 and evaluated their electrochemical performance. A box plot diagram was chosen to present the power maximum of the measured 8 cells at various testing temperatures, as shown in Fig. 10. The horizontal lines in the box denote the 25 th , 50 th and 75 th percentile values. Obviously, the performance presented in Fig. 9 is close to the mean value; therefore, the value is representative. Although there was high electronic conduction, as discussed above, the CeO 2 electrolyte exhibited no any electronic short-circuiting problem. These results obtained from non-doped CeO 2 surface conduction demonstrated significant advantages over doped ceria This indicates a very different ionic conduction mechanism and fuel cell principles between the doped bulk conducting SDC and non-doped surface conducting CeO 2 , which deserves further study.
It has been reported that nanoscale CeO 2 shows strong or dominant electronic conductivity, giving nanosized CeO 2 a mixed ionic and electronic conductivity (MIEC) state 33 . The grain boundary-enhanced electron concentration corresponding to depression in the positively charged ionic (oxygen vacancy) species is expected from space charge theory. It should be pointed out if the fuel cell electrolyte has significant electronic conductivity, i.e., a typical MIEC electrolyte, which will make significant How can the MIEC type CeO 2 be used as a fuel cell electrolyte and cause no additional losses in the OCV and power output? These conflict with conventional MIEC theory and SOFC devices for a doped ceria electrolyte fuel cell 33 . We propose a new scientific principle for a semiconductor junction combined with energy band alignment, which has been reported in other semiconductorionic membrane fuel cell systems 47,48 . In this case, the contacted CeO 2 on the anode side was reduced by H 2 to form Ce 3+ and released free electrons. The surface conduction was formed, and the extra electrons simultaneously brought about n-type conduction for the CeO 2 on the anode side. Martin and Duprez determined the oxygen and hydrogen surface diffusion on the oxide surfaces, and pointed out that both oxygen and hydrogen can transport rapidly on the CeO 2 surface 49,50 . Lai et al. reported that a Sm-doped CeO 2 thin film exhibited mixed ionic and electronic conductivity with a bulk ionic conductivity of 7mScm −1 and an electronic conductivity of 6mScm −1 under open-circuit conditions at 500°C 51 . These data agree well with our fuel cell results, although we used the pure CeO 2 phase, which possessed sufficient surface electron and ionic conductivities.
On the other hand, CeO 2 on the air side showed hole conduction 52 , i.e., p-type conduction, while the CeO 2 on the anode side reduced by H 2 turns to electron (n-type) conduction. Naturally, a p-n junction was formed between two parts of the CeO 2 electrolyte. In this case, we propose a double-layer electrolyte model for the fuel cell, as shown in Fig. 11f. Band energy alignment between the CeO 2 and R-CeO 2 is proposed to clarify the charge separation and barrier to block the electron passing through the CeO 2 electrolyte, even though it is an MIEC-type electrolyte. An oxygen vacancy is associated with the formation of two Ce 3+ ions and is a two-electron donor center, which can The electrons formed during reduction were treated as being localized on the cerium, thereby converting Ce 4+ to Ce 3+ ions. To verify this assumption, the accurate band energy of the CeO 2 /R-CeO 2 was determined by UPS combined with UV-vis diffused reflection. UPS of the CeO 2 and R-CeO 2 were carried out to determine their valence band. In the UPS spectra presented in Fig. 11b, the energy was calibrated with respect to He I photon energy (21.21 eV). As Fig. 11c, d shows, by defining the Fig. 11 a The diffused reflection spectra and b the UPS of as-prepared CeO 2 and reduced CeO 2 . c UPS plots of CeO 2 and R-CeO 2 with magnified views of the low binding energy cutoff and d the high binding energy cutoff regions. e The energy alignment diagram and f the configuration schematic for the double-layer fuel cell using CeO 2 as the electrolyte low-binding and high-energy cutoff, the valence band maximum below vacuum level was obtained to be −5.47 eV for CeO 2 and −5.74 eV for the R-CeO 2 sample. These band gaps were determined from the diffused reflection measurements (Fig. 11a) to be 3. 65 eV and 3.42 eV for CeO 2 and R-CeO 2 , respectively. On the basis of these results, we can further deduce the corresponding conduction band (CB) levels to be 1.85 eV for CeO 2 and 2.32 eV for R-CeO 2 . The final band alignment is sketched in Fig. 11e and clearly reveals that the CB position of CeO 2 is higher than that of R-CeO 2 , the extra electrons produced by the reduction atmosphere should aggregate in the CB of the R-CeO 2 , further decreasing the CB position. The conduction band offset formed potential barriers to prevent the electrons generated on the anode side from passing through the interface between CeO 2 and R-CeO 2 , thus avoiding a short-circuiting problem. In addition, the built-in field formed by the CeO 2 /R-CeO 2 band energy alignment should promote oxygen ion transport.
In the present work, we discovered that CeO 2 without doping can create much better electrical properties and fuel cell performance than those of conventional cation doped ceria, e.g., samarium-doped ceria (SDC) based on bulk ionic conduction. The possible underlying mechanism involves the formation of a surface oxygen-deficient layer and core-shell architecture for reduced CeO 2 that is accompanied by band energy alignment to avoid shorting, which is a novel mechanism for ceria electrolyte materials and a novel fuel cell principle. On the other hand, the H 2 supplied as fuel reduces the Ce 4+ into Ce 3+ , which has the same doping effect as Sm 3+ and improves the ionic conductivity, namely, "self-doping" occurs. However, cation doping and self-doping are different. For example, cation doping, such as with Sm 3+ or Gd 3+ takes place in the CeO 2 particle bulk to create oxygen vacancies, further developing bulk conduction, while self-doping occurs at the particle surface accompanied by oxygen vacancies, leading to different surface conduction mechanisms. Surface conduction has unique advantages, including low activation energy and fast ionic mobility. Both of these advantages contribute to better ionic conductivity and fuel cell performance than conventional cation doped cerium-based electrolytes. For example, Shen et al. reported a Gd-doped ceria (GDC) electrolyte for SOFCs with mixed electronic conduction, resulting in an OCV < 0.9 V and power output < 100 mW/cm 32 . In other words, the surface conduction induced by fuel cell conditions is distinct from the ordinary O 2− conduction mechanism in bulk doped ceria and appears to be a new methodology for the design of new functionalities for advanced technologies in the energy sector, especially for next generation SOFCs.
Conclusions
The occurrence of charged defects and the control of stoichiometry in fluorite CeO 2 materials can be accomplished by a reduction treatment, which can strongly affect the CeO 2 surface defects. Reducing and oxidizing conditions during cell operation produce CeO 2 semiconducting (n-type at the anode and p-type at the cathode)-ionic properties and greatly enhance both the electronic and ionic conductivities. Ionic conductivity may play a dominant role in fuel cell processes and device performance accompanied by sufficient electron conduction. High ionic conductivities have been realized by creating surface defects, e.g., oxygen vacancies and surface pathways. The CeO 2 should be reduced to nonstoichiometric CeO 2-δ at the anode region and combined with CeO 2 at the cathode side to form a doublelayer device. The energy band alignment between CeO 2-δ / CeO 2 can produce efficient charge separation and avoid the device short circuiting problem, while charge separation is an enormous challenge for conventional SOFCs based on a doped ceria electrolyte, where OCV and power losses generally occur to some extent due to the existence of electronic conduction. The semiconducting and ionic properties take advantage of the semiconductor energy band to prevent the electrons from internally migrated simultaneously enhance the ionic transport. The synergistic effect to enhance the ionic conductivity is also observed above 0.1 S/cm at 550°C. The non-doped CeO 2 approach may instigate very interesting new fundamental understanding of the science and promote SOFC development.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 8,440.6 | 2019-09-13T00:00:00.000 | [
"Materials Science",
"Engineering",
"Chemistry"
] |
Developmental Origins of Cardiometabolic Diseases: Role of the Maternal Diet
Developmental origins of cardiometabolic diseases have been related to maternal nutritional conditions. In this context, the rising incidence of arterial hypertension, diabetes type II, and dyslipidemia has been attributed to genetic programming. Besides, environmental conditions during perinatal development such as maternal undernutrition or overnutrition can program changes in the integration among physiological systems leading to cardiometabolic diseases. This phenomenon can be understood in the context of the phenotypic plasticity and refers to the adjustment of a phenotype in response to environmental input without genetic change, following a novel, or unusual input during development. Experimental studies indicate that fetal exposure to an adverse maternal environment may alter the morphology and physiology that contribute to the development of cardiometabolic diseases. It has been shown that both maternal protein restriction and overnutrition alter the central and peripheral control of arterial pressure and metabolism. This review will address the new concepts on the maternal diet induced-cardiometabolic diseases that include the potential role of the perinatal malnutrition.
INTRODUCTION
Cardiovascular and metabolic diseases, such as hypertension, type II diabetes, and dyslipidemia are highly prevalent in the world and have important effects on the public health, increasing risk factors for the development of other diseases, including coronary heart disease, stroke, and heart failure (Landsberg et al., 2013). The etiology of these cardiometabolic diseases includes a complex phenotype that arises from numerous genetic, environmental, nutritional, behavioral, and ethnic origins (Landsberg et al., 2013;Ng et al., 2014). In this regard, it has been observed that the eating habits and behaviors and nutritional condition in early phases of life may play a key role on the etiology of these diseases by inducing physiological dysfunctions (Lucas, 1998;Victora et al., 2008;Wells, 2012). This phenomenon can be understood in the context of phenotypic plasticity and it refers to the ability of an organism to react to both an internal and external environmental inputs with a change in the form, state, physiology, or rate of activity without genetic changes (West-Eberhard, 2005b). Indeed the nutritional factors rise as important element in this theme and it has been highlighted since Barker (Barker, 1990(Barker, , 1995(Barker, , 1998(Barker, , 1999a(Barker, ,b, 2000Barker and Martyn, 1992;Fall and Barker, 1997;Osmond and Barker, 2000). In this context, new evidence from epidemiological and clinical studies have showed the association of the maternal under-and overnutrition with development of cardiometabolic dysfuntions (Ashton, 2000;Hemachandra et al., 2006;Antony and Laxmaiah, 2008;Conde and Monteiro, 2014;Costa-Silva et al., 2015;Parra et al., 2015). Thus, this review will address the new concepts about the involvement of the maternal protein malnutrition and overnutrition on the development of the cardiometabolic diseases.
PERINATAL ORIGIN OF CARDIOMETABOLIC DISEASES: THE ROLE OF PHENOTYPIC PLASTICITY
Biological and medical consequences of perinatal nutritional factors have been extensively studied in the field of the "developmental origins of health and diseases" proposed by colleagues since 1986 (Barker andOsmond, 1986;Barker et al., 1989Barker et al., , 1993Barker, 2007). This field of research proposes that cardiometabolic diseases can be "programmed" by the "adaptative" effects of both under-and overnutrition during early phases of growth and development on the cell physiology (Barker and Osmond, 1986;Hales and Barker, 1992;Alfaradhi and Ozanne, 2011;Chavatte-Palmer et al., 2016). As stated before, it aims to study how an organism reacts to a different environmental input, such as malnutrition, and induces changes in the phenotype, but without altering the genotype (Barker et al., 2005;West-Eberhard, 2005a;Labayen et al., 2006;Andersen et al., 2009;Biosca et al., 2011). In this context, epigenetic alterations, such as DNA methylation, histone acetylation, and microRNA expression are considered the molecular basis of the phenotypic plasticity (Wells, 2011). These modifications termed as "epigenetic" were firstly described by Conrad Waddigton in 1940 and it studies the relationship between cause and effect in the genes to produce a phenotype (Jablonka and Lamb, 2002). Nowadays, this concept is employed to describe the process of the gene expression and its linking to modifications in the cromatin structure without altering DNA sequence (Chong and Whitelaw, 2004;Egger et al., 2004). Among all epigenetic modifications, the DNA methylation is one that has been best studied and is related to addition of methyl groups on DNA cytosine residues, normally on the cytosine followed by guanine residue (CpG dinucleotides), which can produce inhibition of the gene expression by impairing transcriptional factor binding (Waterland and Michels, 2007;Mansego et al., 2013;Chango and Pogribny, 2015;Mitchell et al., 2016). In this context, it has been investigated how nutritional aspect may induce these epigenetic modifications.
Macro-and micro-nutrient compositions have been identified as important nutritional factors inducing epigenetic processes, such as DNA methylation (Mazzio and Soliman, 2014;Szarc vel Szic et al., 2015). It is considered at least three ways by which Abbreviations: AKT/PKB, Protein kinase B; CB, Carotid body; CNS, Central nervous system; CRP, C-reactive protein; ERK, Extracellular signal-regulated kinase; GSH, Glutathione reduced; HFD, High fat diet; HIF-1α, Hypoxic inducible factor 1 alpha; IGF2, Insulin-like growth factor 2; IL-6, Interleukin-6; IR, Insulin receptor; IRS, Insulin receptor substrate; mTOR, Mammalian target of rapamycin; PI3K, Phosphatidylinositol 3-kinase; RAS, Renin-angiotensin system; ROS, Reactive oxygen species; TNF-α, Tumor necrosis factor alpha. nutrients can induce DNA methylation, alter gene expression, and modify cellular phenotype: (i) by providing methyl group supply for inducing S-adenosyl-L-methionine formation (genomic DNA methylation), modifying the methyltransferase activity, or impairing DNA demethylation process; (ii) by modifying chromatin remodeling, or lysine and arginine residues in the N-terminal histone tails; and (iii) by altering microRNA expression (Chong and Whitelaw, 2004;Egger et al., 2004;Hardy and Tollefsbol, 2011;Stone et al., 2011). In this context, altered contents of amino acids, such as methionine and cysteine, as well as reduced choline and folate diet amount can modify the process of the DNA methylation leading to both DNA hyper-and hypomethylation (Fiorito et al., 2014). For example, deficiency of choline can precipitate DNA hypermethylation associated with organ dysfunction, mainly in liver metabolism (Karlic and Varga, 2011;Wei, 2013).
High fat diet (HFD) during perinatal period has been identified as risk factor to predispose and induce epigenetic processes in the parents and their offspring (Mazzio and Soliman, 2014;Szarc vel Szic et al., 2015). Both hypo-and hypermethylation processes participate in this dysregulation attributed to HFD consumption (Ng et al., 2010;Milagro et al., 2013). In adipose tissue, for example, it was observed that gene promoter of the fatty acid synthase enzyme suffered methylation (Lomba et al., 2010) and that important obesity-related genes such as leptin have disruption on their methylation status (Milagro et al., 2009).
MATERNAL PROTEIN UNDERNUTRITION: EARLY-AND LONG-TERM OUTCOMES
Maternal malnutrition is associated with the risk of developing cardiovascular disease and co-morbidities in offspring's later life including hypertension, metabolic syndrome, and type-II diabetes Nuyt, 2008;Nuyt and Alexander, 2009). In humans, studies have provided support for the positive association between low birth weight and increased incidence of hypertension (Ravelli et al., 1976;Hales et al., 1991;Sawaya and Roberts, 2003;Sawaya et al., 2004).
Maternal low-protein diet model during both gestation and lactation is one of the most extensively studied animal models of phenotypic plasticity (Ozanne and Hales, 2004;Costa-Silva et al., 2009;Falcão-Tebas et al., 2012;Fidalgo et al., 2013;de Brito Alves et al., 2014;Barros et al., 2015). Feeding a low-protein diet (8% protein) during gestation and lactation is associated with growth restriction, asymmetric reduction in organ growth, elevated systolic blood pressure, dyslipidemia, and increased fasting plasma insulin concentrations in the most of studies in rodents (Ozanne and Hales, 2004;Costa-Silva et al., 2009;Falcão-Tebas et al., 2012;Fidalgo et al., 2013;Leandro et al., 2012;de Brito Alves et al., 2014, 2016Ferreira et al., 2015;Paulino-Silva and Costa-Silva, 2016). However, it is known that the magnitude of the cardiovascular and metabolic outcomes are dependent on the both time exposure to protein restricted-diet (Zohdi et al., 2012(Zohdi et al., , 2015 and growth trajectory throughout the postnatal period (Wells, 2007(Wells, , 2011. A rapid and increased catch-up growth and childhood weight gain appear to augment metabolic disruption in end organs, for example liver (Tarry-Adkins et al., 2016;Wang et al., 2016).
Although, the relationship between maternal protein restriction, sympathetic overactivity and hypertension have been suggested (Johansson et al., 2007;Franco et al., 2008;Barros et al., 2015), few studies have described the physiological dysfunctions responsible for producing these effects. Nowadays, it is well accepted that perinatal protein malnutrition raise risks of hypertension by mechanisms that include abnormal vascular function (Franco Mdo et al., 2002;Brawley et al., 2003;Franco et al., 2008), altered nephron morphology and function, and stimulation of the renin-angiotensin system (RAS) (Nuyt and Alexander, 2009;Siddique et al., 2014). Recently, studies have highlighted contribution of the sympathetic overactivity associated to enhanced respiratory rhythm and O 2 /CO 2 sensitivity on the development of the maternal low-protein diet-induced hypertension by mechanisms independent of the baroreflex function (Chen et al., 2010;Barros et al., 2015;Costa-Silva et al., 2015;de Brito Alves et al., 2015;Paulino-Silva and Costa-Silva, 2016). Offspring from dams subjected to perinatal protein restriction had relevant short-term effects on the carotid body (CB) sensitivity and respiratory control. With enhanced baseline sympathetic activity and amplified ventilatory and sympathetic responses to peripheral chemoreflex activation, prior to the establishment of hypertension (de Brito Alves et al., 2014. The underlying mechanism involved in these effects seems to be linked with up-regulation of hypoxic inducible factor (HIF-1α) in CB peripheral chemoreceptors (Ito et al., 2011(Ito et al., , 2012de Brito Alves et al., 2015). However, the epigenetic mechanisms in these effects are still unclear. It is hypothesized that epigenetic mechanism produced by DNA methylation could be involved (Altobelli et al., 2013;Prabhakar, 2013;Nanduri and Prabhakar, 2015). The central nervous system (CNS) compared to other organ systems has increased vulnerability to reactive oxygen species (ROS). ROS are known to modulate the sympathetic activity and their increased production in key brainstem sites is involved in the etiology of several cardiovascular diseases, for example, diseases caused by sympathetic overexcitation, such as neurogenic hypertension (Chan et al., 2006;Essick and Sam, 2010). Ferreira and colleagues showed that perinatal protein undernutrition increased lipid peroxidation and decreased the activity of several antioxidant enzymes (superoxide dismutase, catalase, glutathione peroxidase, and glutathione reductase activities) as well as elements of the GSH system, in adult brainstem. Dysfunction in the brainstem oxidative metabolism, using the same experimental model, were observed in rats immediately after weaning associated to the increase in ROS production, with a decrease in antioxidant defense and redox status (Ferreira et al., 2015. Related to the metabolic effects on the heart, it was observed that these animals showed decreased mitochondrial oxidative phosphorylation capacity and increased ROS in the myocardium. In addition, maternal low-protein diet induced a significant decrease in enzymatic antioxidant capacity (superoxide dismutase, catalase, glutathione-S-transferase, and glutathione reductase activities) and glutathione level when compared with normoprotein group (Nascimento et al., 2014).
Regarding hepatic metabolism, studies showed that protein restricted rats had suppressed gluconeogenesis by a mechanism primarily mediated by decrease on the mRNA level of hepatic phosphoenolpyruvate carboxykinase, a key gluconeogenic enzyme, and enhancement of the insulin signals through the insulin receptor (IR)/IR substrate (IRS)/phosphatidylinositol 3-kinase (PI3K)/mammalian target of rapamycin complex 1 (mTOR) pathway in the liver (Toyoshima et al., 2010). In relation to lipid metabolism, there was decreased liver triglyceride content in adult rats exposed to protein restriction during gestation and lactation. It was suggested that this effect could be due to increased fatty-acid transport into the mitochondrial matrix or alterations in triglyceride biosynthesis (Qasem et al., 2015). A maternal protein restriction was shown to reduce the lean and increase the fat contents of 6-month old offspring with a tendency for reduced number of muscle myofibers associated with reduced expression of mRNA of Insulin-like growth factor 2 gene (IGF2 mRNA) in pigs (Chavatte-Palmer et al., 2016).
MATERNAL OVERNUTRITION AND RISK FACTOR FOR THE CARDIOMETABOLIC DYSFUNTIONS
Nutritional transition is a phenomenon well documented in developing countries in the twentieth and twenty-first centuries, and has induced high incidence of the chronic diseases and high prevalence of the obesity (Batista Filho and Rissin, 2003;Batista Filho and Batista, 2010;Ribeiro et al., 2015). It is evident that protein malnutrition was an health problem in the first half of the twentieth century. Now, it was replaced by a diet enriched in saturated fat or other HFDs, predisposing to overweight, and obesity (Batista et al., 2013). Nowadays, it suggested that two billion people in the world are overweight and obese individuals, with major prevalence is related to diet induced-obesity, which have been associated to cardiovascular and endocrine dysfunctions (Hotamisligil, 2006;Aubin et al., 2008;Zhang et al., 2012;Ng et al., 2014;Wensveen et al., 2015).
Recently, the obesity has been considered a physiological state of chronic inflammation, characterized by elevated levels of inflammatory markers including C-reactive protein (CRP), interleukin-6 (IL-6), and tumor necrosis factor alpha (TNFα) (Wensveen et al., 2015;Erikci Ertunc and Hotamisligil, 2016;Lyons et al., 2016). Maternal HFD chronic consumption enhances the circulating free fatty acids and induce the activation of inflammatory pathways, enhancing chronic inflammation in offspring (Gruber et al., 2015). Studies of Roberts et al. (2015) found that cardiometabolic dysfunction was associated with changes such as elevated serum triglycerides, elevated oxidative stress levels, insulin resistance, vascular disorders, and development of hypertension .
In animals on a HFD the hormone leptin has been considered one of the most important physiological mediators of the cardiometabolic dysfunction (Correia and Rahmouni, 2006;. Since hyperleptinemia, common in overweight and obesity conditions, produce a misbalance in autonomic system, with sympathetic overactivation (Machleidt et al., 2013;Kurajoh et al., 2015;Manna and Jain, 2015), and reduced sensitivity of vagal afferent neurons (de Lartigue, 2016). This disorder of vagal afferent signaling can activate orexigenic pathways in the CNS and drive hyperphagia, obesity, and cardiometabolic diseases at long-term (de Lartigue, 2016). Some authors have described that, at least in part, cardiovascular dysfuntion elicited by HFD or obesity may be due to changes in the neural control of respiratory and autonomic systems (Bassi et al., 2012(Bassi et al., , 2015Hall et al., 2015;Chaar et al., 2016). Part of these effects were suggested to be influenced by atrial natriuretric peptide and renin-angiotensin pathways (Bassi et al., 2012;Gusmão, 2012).
Interestingly, it has been shown that offspring from mothers fed HFD have high risk to develop pathologic cardiac hypertrophy. This condition would be linked to re-expression of cardiac fetal genes, systolic, and diastolic dysfunction and sympathetic overactivity on the heart. These effects lead to reduced cardioprotective signaling that would predispose them to cardiac dysfunctions in adulthood Wang et al., 2010;Fernandez-Twinn et al., 2012;Blackmore et al., 2014). Regarding arterial blood pressure control, it has been described that maternal HFD induces early and persistent alterations in offspring renal and adipose RAS components (Armitage et al., 2005). These changes seem to be dependent upon the period of exposure to the maternal HFD, and contribute to increased adiposity and hypertension in offspring (Samuelsson et al., 2008;Elahi et al., 2009;Guberman et al., 2013;Mazzio and Soliman, 2014;Tan et al., 2015). Studies in baboons subjected to HFD showed that microRNA expression and putative gene targets involved in developmental disorders and cardiovascular diseases were up-regulated and others were down-regulated. The authors suggested that the epigenetic modifications caused by HFD may be involved in the developmental origins of cardiometabolic diseases (Maloyan et al., 2013).
Other metabolic outcomes induced by HFD have been pointed out in the last years and it has demonstrated that HFD displayed a drastic modification on metabolic control of the glucose metabolism and lead to increased insulin level in serum (Fan et al., 2013) and enhanced insulin action through AKT/PKB (protein kinase B) and ERK (extracellular signal-regulated kinase), and activation of mammalian target of rapamycin (mTOR) pathways in cardiac tissue (Fernandez-Twinn et al., 2012;Fan et al., 2013). Offspring from HFD mothers showed alterations in blood glucose and insulin levels, with high predisposition to insulin resistance and cardiac dysfunction Wang et al., 2010). Part of these effects are associated with enhanced production of ROS and reduction in the levels of the anti-oxidant enzymes, such as superoxide dismutase, suggesting a misbalance in the control of the oxidative stress (Fernandez-Twinn et al., 2012).
Altogether, this review addressed the new concept on the maternal diet induced-cardiometabolic diseases that include the potential role of the perinatal malnutrition. It showed that the etiology of these diseases is multifactorial involving genetic and environmental influences and their physiological integration. It is well recognized that both perinatal undernutrition and overnutrition are related with the risk of developing metabolic syndrome and hypertension in adult life (Figure 1). The underlying mechanism can be explained in the context of phenotypic plasticity during development that includes adaptive change on the CNS, heart, kidney, liver, muscle, and adipose tissue metabolisms with consequent physiology dysfunction and FIGURE 1 | Schematic drawing showing the physiological effects induced by maternal and fetus exposure to under-or overnutrition through DNA methylation and their consequences on the organ physiology and increased risk of the cardiometabolic diseases in the offspring.
Frontiers in Physiology | www.frontiersin.org with subsequent cardiometabolic diseases. Moreover, maternal undernutrition or overnutrition may predispose epigenetic modifications in dams and their offspring, with predominance of DNA methylation, leading to altered gene expression during development and growth. Further, it can provide a different physiological condition which may contribute to the developmental origins of the cardiometabolic diseases. These physiological dysfunctions seem to be linked to the impaired central and peripheral control of both metabolic and cardiovascular functions by mechanisms that include enhanced sympathetic-respiratory activities and disruption in metabolism of end organs at early life. It is suggested that those effects could be associated to inflammatory conditions and impaired oxidative balance, which may contribute to adult cardiometabolic diseases.
AUTHOR CONTRIBUTIONS
JC, AS, and MF drafted and revised critically the work for important intellectual content and final review of the manuscript. | 4,079.6 | 2016-11-16T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
New Trigonometric Basis Possessing Denominator Shape Parameters
Four new trigonometric Bernstein-like bases with two denominator shape parameters (DTB-like basis) are constructed, based on which a kind of trigonometric Bézier-like curve with two denominator shape parameters (DTB-like curves) that are analogous to the cubic Bézier curves is proposed. The corner cutting algorithm for computing the DTB-like curves is given. Any arc of an ellipse or a parabola can be exactly represented by using the DTB-like curves. A new class of trigonometric B-spline-like basis function with two local denominator shape parameters (DT B-spline-like basis) is constructed according to the proposed DTB-like basis. The totally positive property of the DT B-spline-like basis is supported. For different shape parameter values, the associated trigonometric B-spline-like curves with two denominator shape parameters (DT B-spline-like curves) can be C2 continuous for a non-uniform knot vector. For a special value, the generated curves can be C(2n−1) (n = 1, 2, 3, . . .) continuous for a uniform knot vector. A kind of trigonometric B-spline-like surfaces with four denominator shape parameters (DT B-spline-like surface) is shown by using the tensor product method, and the associated DT B-spline-like surfaces can be C2 continuous for a nonuniform knot vector.When given a special value, the related surfaces can be C(2n−1) (n = 1, 2, 3, . . .) continuous for a uniform knot vector. A new class of trigonometric Bernstein–Bézier-like basis function with three denominator shape parameters (DT BB-like basis) over a triangular domain is also constructed. A de Casteljau-type algorithm is developed for computing the associated trigonometric Bernstein–Bézier-like patchwith three denominator shape parameters (DTBB-like patch).The condition forG1 continuous jointing two DT BB-like patches over the triangular domain is deduced.
Introduction
The construction of basis functions has always been a difficulty of computer-aided geometric design (CAGD).A class of practical basis functions often plays a decisive role in the geometric industry.Conventional cubic B-spline curves and surfaces are widely applied for CAGD due to their remarkable local adjustment properties.However, given control points keep a generated conventional cubic B-spline curve over a single location.Although cubic rational B-spline curves and surfaces can adjust positions and shapes by changing the weighting factor [1][2][3], their adjustment effect is difficult to predict due to its own defects.In recent years, trigonometric polynomials and splines with one or more shape parameters have been widely used with CAGD, especially in the design of curves and surfaces.Details can be found in [4][5][6][7] and the corresponding references therein.For example, researchers have used shape parameters to propose quadratic and cubic trigonometric polynomial splines [8,9].In [10], the extended cubic trigonometric spline curve of [8] was given.In [11], a class of C-Bézier curves was constructed in the space span {1, , sin , cos }, where the length of the interval serves as shape parameter.The sine and ellipse curves can be represented by the C-Bézier curves.
Although many improved methods are available, they are rarely applied in solving practical problems.In the final analysis, these techniques increase the flexibility of the curve by adding shape parameters compared with the traditional Bézier and B-spline methods.However, the technique itself cannot replace the traditional method, and several aspects still need improvement.For example, the majority of these methods discuss only basic properties, such as nonnegativity, partition of unity, symmetry, and linear independence.Shape preservation, total positivity, and variation diminishing, which are important properties for curve design, are often overlooked.However, the basis function, which has total positivity, ensures that the related curve contains variation diminishing and shape preservation.Therefore, possessing total positivity is highly important for basis functions.In addition, constructing cubic curves and surfaces remains the main method among the improved techniques.In general, these improved methods have 2 continuity, thereby meeting engineering requirements.However, in many practical applications, if the requirement for continuity is high, then these methods are slightly insufficient and often need to increase the number of times the curve is constructed.The B-spline curve and surface are regarded as examples.Notably, the continuity and locality of the curved surface are directly related to the number of times.The more times the curve is constructed, the higher the continuous order, but the locality is poor, and the computational complexity is high.Therefore, sacrificing the local property of its dominant position is necessary to achieve the special requirements of high-order continuity.Therefore, in the construction of curves and surfaces, the importance of meeting high-order continuity without increasing the computational complexity and without affecting its local properties is highlighted.
The traditional surface over rectangular domains, which possesses research and application value, has been widely used in CAGD.Obtaining a surface over a rectangular domain is easy because the traditional surface over such domain is a direct extension of the traditional Bézier curve by the tensor product method.However, by using the tensor product method, we fail to extend the patch over a triangular domain because it is not a tensor product surface.In many practical applications, surface modeling based on patch construction over triangular domains is important.Thus, the study of patches over triangular domains is of considerable interest.Therefore, the construction of a practical method that generates patches is important.For this reason, researchers have conducted numerous works.In [34], Cao showed a class of basis functions over a triangular domain.The related patch can be rendered flexible by an adjustment of the values.In [35], Han proposed a patch over a triangular domain, which can construct boundaries that can exactly represent elliptic arcs.A kind of quasi-Bernstein-Bézier polynomials over a triangular domain was proposed in [36].Recently, Zhu constructed the -Bernstein-Bézier-like basis, which possesses 10 functions; the related exponential parameter has tension effect.
This study proposes a class of DTB-like bases with tension effects that is based on previous studies.The proposed basis has two denominator shape parameters constructed in the space {1, sin 2 , (1 cos ]} and can form an optimal normal normalized totally positive basis (B-basis) and a new class of DT BB-like basis functions over a triangular domain with three denominator shape parameters.The presented DT Bspline-like curves and surfaces are 2 continuous with respect to a nonuniform knot vector.The corresponding curves and surfaces are 2−1 ( = 1, 2, . ..) continuous for the shape parameters, which select a special value with respect to a uniform knot vector.The denominator parameter introduced in the basis function has a tension effect, and the parameters can be used to predictably adjust the corresponding curves and surfaces generated.
The remainder of this work is organized as follows.Section 2 provides the definition and properties of the DTBlike basis functions and shows the corresponding curves.Section 3 presents a class of DT B-spline-like basis with two denominator shape parameters.The properties of the proposed basis are analyzed, and the associated DT B-like curves are shown.Section 4 proposes a class of DT BB-like basis over a trigonometric domain with three denominator shape parameters.We provided the definition and properties of related DT BB-like patches on the basis of the presented basis functions.Then, we developed a de Casteljau algorithm to calculate the proposed patch.Finally, 1 connecting conditions of the two proposed patches are given.Section 5 presents the conclusion.
Trigonometric Bernstein-Like Basis Functions
2.1.Preliminaries.For a good understanding of this study, related background knowledge about the extended Chebyshev (EC) space and the extended completed Chebyshev (ECC) space is provided in this subsection.Additional details are available in [37][38][39]. < for any closed bounded interval [, ], which can be denoted by .The function space ( 0 , . . ., ) is called + 1-dimension ECC-space, which is generated by the positive weight function ∈ − () in canonical form.The weight function shows that The necessary and sufficient condition of +1-dimension function space ( 0 , . . ., ) ⊂ () is called an ECC-space on that is for arbitrary , 0 ≤ ≤ , and an arbitrary nontrivial linear combination of the elements of the subspace ( 0 , . . ., ) with the most zeros (counting multiplicities).
If the collocation matrix ( ( )) 0≤,≤ related to the basis ( 0 , . . ., ) for an arbitrary sequence of points ≤ 0 < 1 < ⋅⋅⋅ < ≤ is totally positive, then the basis is deemed totally positive on [, ].If a function space has a totally positive basis, then the other totally positive basis can be formed by multiplying the optimal normalized totally positive basis (B-basis) by a totally positive matrix.Moreover, this basis is unique in space and has optimal shape preservation properties [40][41][42].
Proof that , is an EC-space on [0, /2] is provided.Thus, we must verify that the arbitrary nonzero element of , has, at most, two zeros (counting multiplicities) on [0, /2].
DTB-Like Curve with Denominator Shape Parameters Definition 4. Given control points 𝑃
are called a cubic DTB-like curve with two denominator shape parameters and .
Thus, the corresponding DTB-like curve given in ( 26) has the properties of affine invariance, convex hull, and variation diminishing, which are crucial properties in curve design, given that (22) possesses the properties of partition of unity, nonnegativity, and total positivity.Moreover, we have the following end-point property: For arbitrary , ∈ [2, +∞), the curve given in ( 26) has the end-point interpolation property, and 0 1 and 2 3 are the tangent lines of the curve at points 0 and 3 , respectively.From these properties, we can easily find that the curve given in (26) has similar geometric properties to the classical cubic Bézier curve.
The corner cutting algorithm is a steady and highefficiency algorithm for generating the presented DTB-like curves.We rewrite (26) into the following matrix to develop the algorithm: ) .(28) By rewriting the curve expression into matrix form, we can rapidly obtain this algorithm.Figure 2 shows an example of this algorithm.
In addition, for ∈ [0, /2], we can rewrite (26) into the following form: For arbitrary fixed ∈ (0, /2), 0 (; ) monotonically decreases with respect to the shape parameter .This phenomenon also means that, as shape parameter increases, the generated curve moves in the same direction as the edge 0 − 1 .By contrast, as shape parameter decreases, the opposite is true for the generated curve.On the edge 3 − 2 , parameter has similar influences.When = , as the shape parameters increase or decrease, the generated curve moves to the edge 2 − 1 in the same or opposite direction, respectively.Thus, the two denominator shape parameters have a tension effect.Figure 3 shows the generated curves for different shape parameter values.
The discussion indicates that any arc of an ellipse or parabola can be exactly represented by using the proposed DTB-like curves.Figure 4 shows the elliptic and parabolic segments generated by using the cubic DTB-like curves (marked with solid black lines).
Proof.Without loss of generality, we consider the knot +1 .For arbitrary , ∈ [2, +∞), we have From the aforementioned calculations and Lemma 7, we can easily find that the theorem is established at knot +1 .The case of the continuity at other knots can be discussed similarly.
Proof.We use mathematical induction to prove that the (2 − 1)-order derivative of basis function (22) has the following form: When = 1, the derivative of the basis functions ( 22) is These forms are satisfied when = 1.
We assume that the aforementioned forms are also satisfied when = .Therefore, the (2 − 1)-order derivative of the basis functions (22) is (57) By direct computing, we have These forms are satisfied when = + 1.In summary, the 2 − 1-order derivative of the basis functions ( 22) has the form of (55).Finally, we prove that the basis function () is 2−1 ( = 1, 2, . ..) continuous at each knot.Without loss of generality, we first consider the continuity at the knot +1 .From here and Remark 6, direct computation gives Thus, we can immediately conclude that (2−1) ( + +1 ) = (2−1) ( − +1 ) for a uniform knot vector and all = = 2.In summary, the theorem is established at knot +1 .The continuity of the basis functions with respect to other knots can be discussed similarly.
DT B-Spline-Like Curves
Definition 14.Given control points ( = 0, 1, . . ., ) in 2 or 3 and a knot vector , for arbitrary real numbers is called a DT B-spline-like curve with two denominator shape parameters , .
Proof.Without loss of generality, we consider the knot +1 .From Theorem 15 and Remark 6, for all = = 2, we have 3 3 We can immediately conclude that (2−1) ) for a uniform knot vector and all = = 2.In summary, the theorem is established at knot +1 .The continuity of () with respect to other knots can be similarly discussed.
Figure 6 shows () with different denominator shape parameters.The left figure shows (), which is generated by setting all = = 2 (black lines); the blue line is generated by changing one to 4; and the green line is generated by changing one to 4. The right figure shows that () is generated by setting all = 2 or 5 and = 2 or 5.
Theorem 18.Given two nonuniform knot vectors and , the DT B-spline-like surface (, V) possesses 2 continuity at each knot.
Proof.Without loss of generality, we consider the continuity at the region From Definition 17, we have We prove that the continuity of the direction and the continuity of the V direction can be similarly discussed.
Considering the continuity at the knot ( +1 , V +1 ), we have From Theorem 16, for all = = 2, we have In summary, the theorem is established at knot ( +1 , V +1 ).
These findings imply the theorem.Figure 8 shows four images of DT BB-like basis functions with the parameter values = = = 3.
Next, we provide the properties of the DT BB-like patch given in (73).
(a) Affine invariance and convex hull property: the related patch (73) has affine invariance and convex hull property because the basis functions (69) possess the properties of partition of unity and nonnegativity.
(89) Thus, we can summarize the following theorems.The aforementioned theorem shows that the two DT BBlike patches that connect the conditions are similar to those of the two triangular Bernstein-Bézier-like patches.Detailed content is available in [44].The only difference is that we can obtain different 1 continuous surfaces by changing the value of the denominator shape parameter.Figure 11 shows the 1 continuous surface under different shape parameters, where = 1, = −1.
Conclusion
In this study, the proposed DT B-like basis function forms a set of optimal normalized totally positive bases under the framework of the ECC-space, which leads to the DT BBspline basis function and the DT-BB-like basis function.Curve and surface construction and related discussions are based on these kinds of basis functions.Compared with the traditional Bézier method and the B-spline technique, the proposed method not only retains all the remarkable properties of the traditional method, such as variation diminishing, but can also accurately represent special industrial curves, such as parabolic and elliptical arcs.In special cases, the B-spline curves and surfaces constructed in this study can automatically reach (2−1) ( = 1, 2, 3, . ..) continuity, thereby satisfying the geometric design requirements of highorder continuity, which is not possible in the traditional literature.In addition, this study introduces a new method for the construction of triangular domain patches.This technique can flexibly adjust the patch with parameters and can accurately represent the boundary as a parabolic arc, an elliptical arc, or even an arc surface.Meanwhile, a de Casteljau algorithm is given to efficiently generate triangular domain patches and the 1 connecting conditions of the patch.Although the method in this study solves the problem of the traditional method and has many advantages, the construction of the basis function is only the first step.In the design of a curve or surface that is closely in-line with the requirements of the geometric industry, then many problems still need to be addressed.These issues include the accurate quantitative analysis of the influence of the denominator parameter on the DT B-like curve and the DT BB-spline curve; shape analysis of the DT B-like curve and the DT BBspline curve (convexity, cusp, inflection point, monotonicity, heavy node, etc.); and the higher-order continuous problem analysis of the DT BB-like patch on the triangular domain, except for the 1 continuity.These issues will be the focus of future research.
Figure 4 :
Figure 4: Representation of elliptic and parabolic arcs.
Figure 7 shows (, V) with different shape parameters.The figure on the left shows (, V) generated when all shape parameters are set 1 = 1 = 2 = 2 = 2.The figure on the right shows (, V) generated when all shape parameters are set 1 = 1 = 2 = 2 = 5.
Figure 9 :
Figure 9: DT BB-like patches whose boundaries are arcs of elliptic, circle, and parabola. | 3,913 | 2018-10-24T00:00:00.000 | [
"Mathematics"
] |
Smart Handoff Technique for Internet of Vehicles Communication using Dynamic Edge-Backup Node
: A vehicular adhoc network (VANET) recently emerged in the the Internet of Vehicles (IoV); it involves the computational processing of moving vehicles. Nowadays, IoV has turned into an interesting field of research as vehicles can be equipped with processors, sensors, and communication devices. IoV gives rise to handoff, which involves changing the connection points during the online communication session. This presents a major challenge for which many standardized solutions are recommended. Although there are various proposed techniques and methods to support seamless handover procedure in IoV, there are still some open research issues, such as unavoidable packet loss rate and latency. On the other hand, the emerged concept of edge mobile computing has gained crucial attention by researchers that could help in reducing computational complexities and decreasing communication delay. Hence, this paper specifically studies the handoff challenges in cluster based handoff using new concept of dynamic edge-backup node. The outcomes are evaluated and contrasted with the network mobility method, our proposed technique, and other cluster-based technologies. The results show that coherence in communication during the handoff method can be upgraded, enhanced, and improved utilizing the proposed technique. Traditional concept handoff/handover.
Introduction
A vehicular adhoc network (VANET) is mainly introduced to manage communication between vehicle-to-vehicle (V2V) and vehicle-to-road side infrastructure (V2I) for exchanging road information for providing safety for onboard passengers, by promptly exchanging vehicles' crash information to the surroundings as a way to avoid what is known as a "chain-of-accidents." On the other hand, VANET is meant to provide entertainment, commercial-advertisements and infotainment applications to the road's users. As a result of these developed applications that require reliable network infrastructure, Internet of Vehicles (IoV) has picked up a great deal of fame in scholarly research and industry for a long time. In [1], it is shown that giving consistent mobility is a serious prerequisite for the upcoming generation of network applications. Mainly, VANET uses broadcasting for sending and receiving the messages that show the existence of vehicle nodes in the vicinity.
1.
Commercial orientation gives entertainment and services to the drivers.
Convenience orientation deals with traffic management to enhance or upgrade traffic efficiency. 4.
Internet connectivity, as shown in Figure 1, is required to provide uninterrupted provision of the above three services. In order to form an IoV-enabled technology, vehicles should be interconnected efficiently with less linkage breakdown probability. In achieving this, a robust handoff mechanism needs to be properly involved. Accordingly, handling such a process, the handoff algorithm is normally implemented to manage the procedure. The aim of the handoff management is to maintain the active connections when a mobile node (MN) changes its point of connection. This handoff management involves computational process and overview image of the wider scope of the mobility cluster that with conventional vehicular communication will not be visible. Hence, recently the concept of mobile-edge computing (MEC) has shifted most of the processing overhead from the end mobile device to the edge-server that is physically closer to the MN and provides very low latency and prompts responses for the computational tasks. Hence, utilizing this concept for IoV could achieve a tremendous benefit out of MEC.
To accomplish the handoff in the IoV networks, the handoff system is directed to be effective in assigning and reassigning the IP addresses. This process known as a mobility handoff stage, and guarantees the consistent Internet availability while handoff is being performed. The Mobile Internet Protocol Version 4 (MIPv4) proposed in [3], contains a few issues, such as triangular routing, the short scope of communication, and the feeble security component. The MIPv6 was acquainted with the necessary improvements which defeated the issues from MIPv4. It provides a preferable security system and effectiveness over MIPv4. The necessity of a consistent network in VANET was not satisfied by MIPv6 [4] due to its elevated mobility of nodes. Hierarchical Mobile Internet Protocol (HMIPv6) was proposed to give an answer to MIPv6. In HMIPv6 protocol, in points of on-link, few entities are involved in providing IP-Mobility handoff process. The care of address (CoA) is locally obtained via Mobile anchor point (MAP) and the regional care of address (RCoA) is globally obtained. The MAP is utilized to deal with mobility area-handoff of the mobile client. The MAP is classified into two sections, micro-mobility and macro mobility. Mobile host (MH) uses the micro-mobility management method to produce an LCoA and launches a compulsory message to MAP, whereas mobile host (MH) uses a macro mobility management method to produce an LCoA and launches a compulsory message to its home agent (HA). It includes a handoff process which basically means changing the connection point during communication. In [5] the authors proposed the advanced mobility handoff scheme, whereby the mobile node utilizes a unique home IPv6 address developed to maintain communication with other corresponding nodes without a care of address during the roaming process. Moreover, a temporary MN-ID was generated by an access point (AP) each time an MN was associated with a particular AP and temporarily saved in a developed table inside the AP. However, as was highlighted by the authors in this work, there is still a need for a predictor to act proactively in a smart way to anticipate the possible handoff about to take palace. Figure 2 shows the traditional concept that whenever a mobile station is connected to a base station(BS), there is a need to change to the BS. This is called handoff or handover. Generally speaking, handoff processes could be performed as reactive or proactive; they use a similar concept as in any wireless technology. The handoff process involves one client node and at least two infrastructure nodes in which the client node should be attached to one of the infrastructure nodes. In order to achieve a low latency handoff, the process should be triggered proactively; hence, numerous methods were proposed over the past few years to address this challenge within vehicular communication. In [6] the author proposed a model which was made such that VANET still experiences issues of handoff due to delay and failure of packet delivery. As discussed in [7,8], the procedure of handoff generally occurs when a mobile user moves into the range of neighboring cell and can be controlled within the BS; the controller of BS manages numerous BS. Handoff Management in IoV is performed by re-routing to construct a new path to the destination. When a MN moves, a cluster of neighbor's changes and a new path of data transfer needs to be established rapidly without any delay for better handoff performance, as shown in Figure 3. Having this process managed from a distant cloud server makes the process introduce some delay-more than required. In MEC, the concept is to occupy each BS controller with computational resources to shift this handoff processing to the edge of vehicles to achieve the minimum possible delay. This paper specifically tends to the handoff challenges, cluster-based handoff using the proposed dynamic edge-backup node (DEBCK). Furthermore, this paper also examines the handoff delay, throughput techniques, and parameters of handoff for some related studies. Many strategies are used for improvements of this issue. We also outline the handoff research challenges that need to be addressed for better handoff performance. Handoffs can be categorized on the basis of different factors; i.e., types of network, elements involved in network or live connections, and kind of traffic the network provides. In order to familiarize the reader with the main types of handoff, a list of these types is given below along with the discussion.
In a hard handoff channel, the source cell will be out of coverage and only the channel within the specified cell is involved [9]. Therefore the connection between source and destination will breakdown before or as the connection to target gets created; for this purpose, handovers are also called hard handoffs. When a node is between BS, then the node can move with any BS. So, BS springs up the link with the node back and forth. This mechanism is also called ping-ponging. On the other hand, when a new connection is established before the previous connection is broken, this defined as a soft handoff [9]. In a soft handoff, the channel in a source area is reserved and used in parallel with channel in target area. In this scenario the connection to the target is established before the connection to source is breakdown. Therefore, this handoff is known as make-before-break. Soft handovers may include connections to more than two cells. As another scenario, when nodes switch between networks of the same infrastructure type while changing their connection in order to maintain active connection, this switching process is called horizontal handoff. E.g., from one GSM ne twork to another or from one WiFi AP to another. In contrast, as a way to maintain an active connection while nodes switch between networks of different types while changing the connection, the process is known as vertical handoff.
Vertical handoff is the process of changing the mobile active connection between different wireless te chnologies. Vertical handoffs can be further distinguished into downward vertical handoff (DVH) and upward vertical handoff (UVH). In DVH, the mobile user hands off to the network that has higher bandwidth and limited coverage, while in UVH the mobile user transfers its connection to a network with a lower bandwidth and wider coverage.
The structure of rest of the paper is as follows. Section 2 includes a literature review of some related research and a comparative analysis of the existing handoff techniques. Section 3 includes proposed methodology. In Section 4, we discuss the performance evaluation while Section 5 concludes the paper. All abbreviations and mathematical notations are listed in Appendix A, Tables A1 and A2 accordingly.
Literature Review
This section provides a background survey of the existing handoff methods and techniques in vehicular networks. For instance, the authors in [10] proposed an adaptive handover prediction (AHP) method for seamless mobility based network that merges the AP prediction model with fuzzy logic and gives the cognitive ability to handover decision making. On the other hand, the study in [11] provides a brief review of the handoff process for VANET over LTE-A wireless networks. This work also discusses some key parameters for handoff scheme. Through studying this paper, new research options can be explored by understanding the gaps in previous studies. Paper [1] presented a handoff protocol, discussed necessities of protocol, improved the vehicular handoff in detail, and analyzed the partner selection problems on VANET by initiating a vehicle link expiration time (VLET) metric. The authors in [12] describe some information about the background of the problem in VANET, and the scheme being used by the authors provides speedy handoff methods by using RFID tags which shortened the handoff latency and minimized the Quality of Service (QoS) of the network. The IEEE802.11p is a [13] draft standard whose aim is to support wireless access in vehicular environment (WAVE). The IEEE802.11p is a draft standard intended to support WAV, which operates over a frequency band of 5.9 GHz. WAVE consists of IEEE 802.11p and IEEE 1609.X. The IEEE 802.11p deals with the physical layer (PHY)/medium access layer (MAC) and IEEE 1609.X considers upper layers. In 1609.X standard family, 1609.3 defines network and transport layers and 1609.4 specifies the multi-channel operation. In the multi-channel operation, a WAVE system uses one common control channel (CCH) and several service channels (SCHs). It was reported in the literature to support the flawless handoff procedure for real-time service of the demanding application and decrease the whole time taken by the handoff. The research [14] mainly focused on quick and recent location-based handoff scheme for VANETs. Two schemes are discussed in this paper; one is called detection scheme, and the second one is called the AP selection scheme.
The work in [15] reviews the literature related to VANETs, and the authors proposed an efficient proxy mobile IPv6 (E-PMIPv6)-based on handoff management that assured session continuity for urban mobile users. A series of previous studies [16] has indicated that the recent techniques of vertical handoff decision algorithm are used by comparing the multiple parameter values to choose the top available networks. In this paper [17] for fast handoff using AP in VANET, a proactive approach was proposed. For VANETs, previous schemes need to minimize the delay of handoff and for fast handoff, existing schemes are purely based on context transfer. In the literature review [18], the main focus is on the quality of the video streaming which has been downloaded by the moving vehicle. Vehicles move from one road side unit (RSU) to another RSU during the process of video streaming. A more comprehensive description [19] can be found in WAVE, in which handoff is most the repeated in wireless networks. In this paper, a prescanning method and quality scan scheme are proposed for improving the performance of handoff in VANETs.
The authors of [20] provide a considerable body of literature in VANET state which consists of wireless local area networks (WLAN) and universal mobile telecommunication systems (UMTS) as they relate to the vertical handoff performance of vehicles. The NS2 simulator is used for implementing the vertical handoff scheme. In [21], there is a brief review on the standard protocols improved by new proposals. For dealing with the handoff problem some standard approaches are used on the basis of new proposals, such as multiway proactive caching, VFMIPv6, and EBR-PMIPv6 to reducing the handover expenses. The authors of [22] give a progressively exhaustive analysis, including the beacon frequency, the speed of the vehicle, and the size of the beacon.
In [23] the authors improved the handoff process of network-based DMM by using the HO-initiate procedure and IEEE 802.21 media independent handoff services. Their proposed technique begins the problem of compulsory registration delay by using the HO-initiate process and lessens the delay of determining the next access network and candidate mobile anchor access routers (MAARs). The authors of [11] provide a brief overview of communication and information exchange between vehicles and provide the latest scheme for handoff algorithm which relies on the vehicular communications specifically focused on the vehicle to infrastructure (V2I).
In highlighting the impact of the new emerged concept of mobile edge computing in the field of VANET, see the authors of [24,25]. In this paper, vehicular edge computing (VEC) was investigated as an important application of mobile-edge computing in vehicular networks.
Handoff Techniques
Some handoff techniques in the literature are the vehicular fast handoff scheme, the IP passing scheme, the cluster-based handoff scheme, the two antenna scheme and the network mobility (Nemo) scheme. The [26] vehicular fast handoff scheme is known as the layer handoff scheme, which classifies three kinds of vehicles. These are RV (relay vehicle), OSV (oncoming side vehicle) and BV (broken vehicle). The [27] discussed the vehicular fast handoff scheme (VFHS) as RV is a larger vehicle than the others. The [28] discussed that for the spreading network topology message (NTM) to BV and OSV utilize predefined frequency channel. Reference [29] explains the preferred AP and early handoff. Each AP makes a list of PAP, which gives early information and helps out the vehicle to carry out the handoff from continuing AP. In [30] they proposed an intelligent network selection (INS) method focused on enhancing the scoring function to rank efficiently from existing wireless network nodes.
IP Passing Scheme
IP addresses the switch between the vehicles when V-1 moves from the space of Base Station-1 (BS-1) towards BS-2 and V-2 moves from the area of BS-2 towards BS-1. The IP addresses are switched when these two vehicles traverse each other in inverse ways. A vehicle resources another vehicle behind it so as to play out a handoff. DHCP is capable of controlling the allocating of IP delivers to ensure extraordinary distinguishing proof after handoff is performed.
Following 3 steps are necessary for checking the passed IP address.
1.
The IP address is produced by an authentic DHCP.
2.
No manufacturing is done previously or amid the going of an IP address from vehicle to vehicle.
3.
The IP address is checked at the receiving vehicle, on the off chance that its manufactured IP will be disposed of, and asked for from DHCP straight forwardly, rather than tolerating a passed IP from a vehicle.
Cluster-Based Handoff Scheme
In cluster-based handoff, vehicles are moving into a gathering or a cluster in highway and each group contains a cluster head. Cluster head straightforwardly communicates with BS and another vehicle communicates with the cluster head. Reference [31] mentions that cluster heads are responsible for maintaining the whole network, so a cluster head must choose the basic metrics like storage capacity, communication range, energy, etc. In [32] authors proposed the CBHBCK in which a predefined backup mobile edge-node and cluster head were used. Backup node prepares the handoff and updates the list of BSs while cluster head performs the physical handoff.
The overhead of a cluster node decreases, but the drawback is if the backup mobile edge-node or cluster head stops working then the communication can be distorted.
Two Antenna Scheme
This methodology clarifies that two remote interface cards are given to the vehicles while each card has an antenna. These two antennas deal with common base usefulness. One antenna has given an undertaking to check the quality of the flag and perform handoff when required. The other antenna conveys flawlessly, which results in less parcel misfortune and hands-off time necessity.
Network Mobility (NEMO) Scheme
In this method collection of vehicles is set up on the premise of the multi-bounce transfer idea. For communication with BS, NEMO utilizes a mobile router (MR). Its idea is same as that of a bus that contains two MRs in which the front switch is known as Front MR and a back switch is known as Rear MR. Front MR is appointed to handoff and Rear MR gives administrations to systems. Table 1 shows the comparative analysis of existing handoff techniques in VANENT. The authors in [1] have proposed approach named vehicle link expiration time (VLET) metric. In this approach packet loss rate cannot decrease but handoff delay reduces. RFID Tags proposed by Midya, S., et al. (2016) [12]. Packet loss rate decreases in the proposed scheme and handoff latency reduces. In another attempt, the work presented in [13] gives the multichannel functions method. The authors have maintained the handoff process in vehicular network based on IEEE 802.11p network standard. Wi-Fi was also utilized to interface every one of the hosts with MRs and WI-MAX is utilized to associate MRs to the web. In NEMO clusters of vehicles are considered as transports. The primary vehicle in the gathering is considered as front MR, while the last vehicle is assigned to be as back MR, which is the proposed mobile edge-node. Pre-handoff performance for last MR is the requirement of front MR.
Comparative Analysis
This approach reduces the handoff delay and support real-time services. Almulla, M., et al. (2014) [14] proposed schemes called turned detection scheme and AP selection scheme. They worked on handoff delay by using location-based handoff approach specifically build for vehicular communication system. According to movement route and positions of vehicle information and nearby APs information, the proposed protocol guesses many APs that the vehicle may be visit in future and allocates these APs different priority levels.
E-PMIPv6 allows mobile nodes to get continuous internet connectivity from static roadside units or mobile routers and enhance cache usage at the local movement fixed by binding the cache data of the mobile users. In the handoff procedure E-PMIPv6 focuses on multiple handoff circumstances in the urban vehicular surroundings and gives mobility support to individual mobile users or group of users in an identical network without disturbing continuous sessions. Dawei, M., G. Xianlei, and C. Rong. (2013) [17] provides a scheme named the vertical handoff scheme. Packet loss rate decreases and also handoff delay reduces in this scheme. In this approach vertical handoff evaluation of vehicular users in VANET environment comprise of different mobile systems and wireless networks. Simulation is performed in network simulator 2.
This work details the investigation of the handoff challenges in cluster-based handoff using proposed DEBCK. The outcomes are contrasted with NEMO and cluster-based Technologies. The result portrays that coherence in communication during handoff method can be upgraded, enhanced, and improved utilizing the proposed technique.
Proposed Methodology
There are some existing methods which are furnishing us with network connectivity among nodes and cluster nodes during the handoff. The proposed technique in this study is purely dependent on the clustering mechanism of the vehicle. In IoV group is arranged by various clusters of vehicles by means of relative movement and afterward information is distributed amongst all the vehicles in a group.
In the proposed methodology for maintaining the safe track of the handoff process, two nodes are used. The following lines provide discussion of the two nodes. The first node being used here is called head node and the second node is called an edge backup node. It is assumed that in the working of a network system, nodes are further divided by making more groups later on. Both nodes being used here are dynamically selected on basis of the following three parameters.
A list of candidate backup mobile edge-nodes will be created and scores will be calculated on the bases of these parameters. When a node is entered in the cluster, its score will be calculated and entered into the candidate backup list. Both cluster head and backup edge node have a copy of this list. If the cluster head faces disconnection, the backup mobile edge-node will become the cluster head and a high-scoring backup mobile edge node from candidate backup list will be selected. If the backup node faces disconnection. the head node will select a high-scoring backup mobile edge-node from the list. Issues of overhead on cluster head and packet loss rate are decreased and improved through this methodology. The following points show the proficient performance of the proposed technique.
1.
Provision of non-stop and steady connectivity through handoff performance; 2.
Maximized and efficient network performance; 3.
Reduced problem of the cluster heads' overhead.
Cluster stability can be explained with multiple methods; our procedure computes the stability from the distance, variation in speed, and probability arguments. A cluster head is selected with keen value stability. We assume that vehicles are equipped with GPS systems through which each one can gain information related to current position, and IEEE 802.11p radio transceivers which are used for communication with other vehicles. In the proposed algorithm, each vehicle can run the clustering method. When a vehicle needs to discover a cluster head, it propagates a message to its neighbors; if no response is received, it begins the group formation procedure. Vehicles moving in the same direction are grouped in the one cluster. Each vehicle transmits the message to its group vehicle; the message contains: the location, ID, and speed. After that, it places the arguments in an array of the neighborhood and calculates: • The distance between the neighbor Q and vehicle N itself: where (x, y) is the location of the vehicle.
• The variation in speed among the neighbor ∆ V and vehicle N itself.
• The probability of the vehicle being the cluster head: where E is the energy intake of the node for sending/receiving a packet, (here, energy refers to the power that a node consumes for the transmission process) and V is the speed of vehicle. Probability value will be between 0 and 1. We have normalized the range of E, vehicle density, and velocity within the range of (0-1). All captured values will be divided by the maximum value of the obtained probability. In each iteration, the maximum obtained probability-by setting the maximum values of all mentioned metrics, Pmax-can be defined. Example, P max = E max + 2 * Density max + V max » P max = 1 + 2 × 1 + 1 = 4; hence, each time, calculating the probability of having a vehicle as the cluster head will be divided by the value 4 (P max ); hence the range value of probability will end up with the scale of 0-1. Bear in mind that when the value gets close to 1, it indicates a poor candidate; while when it is closer to 0 it is considered the best candidate, as it would be a minimization problem; the minimum is the best. Obviously this value is highly dependent on the energy consumption of vehicles, density of the node, and speed of the vehicles. Cluster heads are selected according to the probability of optimal cluster heads being decided by the this equation, taking into account the metrics of each network's cluster. After the selection of cluster heads, the clusters are constructed, and the cluster heads start communicating. The cluster head which would consume the least energy during its communication would have a greater chance of becoming the cluster head. Here, energy means the power that a node consumes for transmission. E = K * E con (4) where E con describes the energy intake of the wireless send/receive circuit.
where S, output, and X are the impacts of matrix. S is the stability of cluster. Cluster stability is an important goal that clustering algorithms try to achieve, and it is to be considered a measure of performance of a clustering algorithm. Cluster stability can be explained using multiple methods; our procedure computes the stability from the distance, variation in speed, and probability arguments. Cluster head is selected with keen value stability [33].
Results of the experimentation used to change all the three components (the difference in speed between the vehicle, the distance ∆V and probability) in the following limits: Algorithm Algorithm 1 of the proposed methodology works through the following steps: A list of Candidate backup mobile edge-nodes will be created on the basis of storage capacity, communication range, and energy. If a backup mobile edge-node has gone away from its cluster (took another intersection), then it sends a message to the cluster head prior to its movement to provide an alert to start new election process for new backup node. The cluster head will select another high scoring backup node from the candidate list. At the time when handoff is not performing, after the time span of one minute, the backup mobile edge-node sends a hello message. Location is improved and the list of BS is handled and maintained. Associate BS1 and BS2, which are as of late framed and set up close to the cluster head. For physical handoff, send a physical affirmation of handoff to the cluster's head A vehicle cj from set H d 1 is selected as one of the cluster midpoints 8: The distance, dist(cj, xi), between cj and every xi vehicle in set V d 1 is 9: calulated 10: While |Clj| <= 30 && max (dist (cj, xi)) <= d do 11: The vehicles xi having the minimum distance form cj are associated to 12: Cj and is sorted in set Cli 13: Done 14: Done 15: Until (All the vehicles from set V d 1 have been assigned to any one of the Clj) 16: The new cluster midpoints of formed clusters are calculated using to find a) 17: Better solution and stored as cj (new) 18: Repeat steps 4-6 to generate new members of Clj(new), which replaces 19: Clj taking cj (new) as new midpoint 20: Step 4 to 9 are repeated until to new cluster midpoints cj (new) are found.
Due to the complex nature of a road network with vehicles frequently joining and leaving the network (like an urban network), this challenge of dynamic topology that exists with vehicular communication makes the connection link unstable and there is high possibility of the link breaking down [34]. For this reason, our proposed DEBCK was introduced to overcome this issue as the dynamic nature inherently involved with it in proactively forming the backup cluster node that will take over the lead whenever a breakdown with the topology has occurred, due to vehicles' redirections. Hence, the periodic info exchanged along with direction each time will be calculated to form a decision in deciding on the backup node so that can avoid such a situation.
As BS1 is more remote BS along these lines, separate from it and make another connection with BS3 because of its closest area. Set up a physical connection with new BS and begin refreshing the list of nodes more than once. When handoff is not playing out, the location is refreshed through the backup mobile edge-node by sending a hello message within range of a 30-meter range BS. At that point, the adjacent BS gets a route request (RREQ) message through the backup mobile edge-node and send route reply (RREP) messages to the node. The separation of accessible BS is determined through the backup mobile edge-node and spares them in its list. The packet loss ratio is reduced by BS which is closest and has a far stronger signal capacity. When the connection between the old BS and cluster head detaches, the productive communication between the backup mobile edge-node and the devices builds up to increase steadily and ceaselessly on the network. It reduces the packet loss rate and low handoff aggravation is accomplished.
Finally, cluster head sets up a connection with new BS. The area is again refreshed through the backup mobile edge-node. Support of run down is done through seeking BS by backup mobile edge-node. The delicate signal comes, and connection with a different BS is set up as indicated by the closest area.
Performance Evaluation
In this section, the simulation result of our work is described. The simulation parameters are shown in Table 2. Network Simulator2 NS-2 [35] simulator was used to implement the proposed method. The simulation running time was 200 s. The plotting of average results evaluates the performance of the system. It is worth mentioning that our proposed DEBCK method was implemented using street map of the city of Erlangen, which was integrated via "Simulation of Urban Mobility" SUMO [36] into our simulated scenario of NS-2 simulator. This map was extracted from the open-source "OpenStreetMap," which was widely used by the researchers in deploying their VANET methods [37,38]. Figure 4 demonstrates the selected city sector that was simulated of Erlangen city. Performance metrics of interest are:
1.
Packet Loss: It describes the number of lost packets in the handoff procedure.
2.
Handoff jitter: The handoff jitter is calculated during the handoff time.
4.
Throughput: It describes the average rate of successful message delivery to a destination with NEMO and cluster-based methodologies. The proposed work is compared to them. The results clearly describe that cluster-based handoff with backup mobile edge-node works better. Figure 5 shows that for 30 and 60 vehicles, the packet loss rates is the same as NEMO and DEBCK. With an increased number of vehicles, the packet loss rate goes on increasing in NEMO and cluster-based methods due to overhead of the cluster head. Seamless hand-off of the cluster head is achieved through the backup mobile edge-node. Communication between the cluster head and other nodes minimizes the packet loss rate, even with hundreds of vehicles in the cluster. In Figure 6, simulations of cluster-based and DEBCK are shown. Diagrammatically it is shown that for 10 vehicles, throughout is probably the same. With a cluster expansion and an increment in the number of vehicles, throughput decreases. Throughput is gradually minimized in cluster-based technology, whereas in DEBCK throughput did not decreased as much as cluster-based. In Figure 7 the proposed DEBCK algorithm was benchmarked with its representatives from the related work in-terms of delay measuring metric with respect to vehicle velocity. It was observed that our proposed algorithm could maintain on average a lower delay while the velocity was increasing. The reason behind the improvement is that our proposed algorithm has utilized the introduced backup mobile edge-node that could maintain an intermediate session to backup the directed traffic data to the vehicle that performed the handoff procedure; thus transport protocol did not face increasing packet loss. When the connection is maintained with a smaller number of retransmissions, this will reflect directly on the delay produced on data communication and on the handoff latency. Another winning factor from the proposed concept of the backup mobile edge-node was having more resources dedicated to such a node in support of the handoff process. Figure 8 shows the performance measure of our proposed DEBCK algorithm along with the other methods with regard to handoff jitter variations as the speed of the vehicle increases. It was observed that our proposed method could maintain on average a lower delay while the speed of vehicle increased. The reason behind this improvement is that our proposed algorithm has utilized the backup mobile edge-node that could maintain an intermediate session to backup the directed traffic data of the vehicle performing the handoff procedure; thus transport protocol has not faced an increasing packet loss rate in comparison with the other benchmark methods.
On the other hand, Figure 9 demonstrates the handoff delay compared to the velocities of vehicles. Diagrammatically it is shown that an up to 3 msec delay is the same when compared to simple cluster methodology. While the velocity of vehicles increases, delay is reduced when compared to a simple cluster-based method. This improvement was due to the factor introduced to the probabilistic method used: the cluster head would have been highly selected when it held a lower velocity value. In other words, a vehicle that travels with relatively low velocity has a higher chance of being selected as a cluster head. Hence, this factor will lead to a lower handoff delay, as there will be a lower frequency of obtaining a network switching over time. In order to investigate the cost value when a cluster head attempts to leave the existing clustered network, an experiment was conducted with 10 scenarios. Figure 10 shows the obtained average cost factor of our proposed DEBCk algorithm compared across the benchmark methods. We have conducted 10 different simulation scenarios; the first starts with a single intersection and increases additively up to 10. The obtained cost was the average data loss during the cluster head forming process. We observed that our proposed method could maintain a relatively low average within no more than 0.6 (60 percent). The reason behind this is that our proposed methodology can maintain a safety net for the handoff process; two nodes are used. The first node being used here is called the head node and the second node is called an edge backup node. It was assumed that in the working of a network system, nodes are further divided by making more groups later on. Both nodes being used here are dynamically selected on the basis of the following three parameters.
A list of candidate backup mobile edge-nodes will be created, and scores will be calculated on the bases of these parameters. When a node is entered in the cluster, its score will be calculated and entered into the candidate backup list. Both cluster head and backup edge nodes have a copy of this list. If the cluster head faces disconnection, the backup mobile edge-node will become the cluster head and a high-scoring backup mobile edge node from the candidate backup list will be selected. Hence, we have observed that when our selected backup node faces disconnection the head node will select a high-scoring backup mobile edge-node from the list. Issues of overhead on the cluster head and packet loss rate are decreased and improved through this methodology compared NEMO and simple cluster methods.
Conclusions
In this paper, we have reviewed some of the handoff management techniques, their importance, and major issues within handoff process in VANET. Additionally, different technologies and algorithms of handoff management with respect to throughput, latency, and delay in VANET were investigated to point out the gaps and challenges that led us to conduct this study. The proposed technique was founded on cluster handoff exploring the recent concept of mobile edge computing in IoV communication. The backup mobile edge-node was added to upgrade the current cluster-based methodology, which enables the group of connected vehicles to play out its obligations. The weight on a cluster is limited so that it is responsible for communication and physical handoff of the dynamic feature of the backup mobile edge-node. Accordingly, the responsibility of the proposed node was to refresh the BS list and the location as the cluster of vehicles moves. This method was formed and named DEBCK. At the point when a cluster is connected with BS, the proposed backup node is configured to send a hello message. When a backup mobile edge-node or cluster head, at runtime, faces disconnection, the new backup provision's ability to deal with the situation makes the proposed method better than other methods. The proposed technique was contrasted with other related methods in regard to network disconnectivity, packet loss rate, and throughput. The proposed DEBCK could effectively minimize the packet loss rate and decrease the overhead on the clustering forming process. On top of that, DEBCK has increased the throughput and provided a more reliable connection using the concept we introduced: the edge mobile node. In the future work, research will be conducted to optimize the handoff decision scheme by constructing an objective function for minimizing packet loss, which the authors of this paper are currently working on. Furthermore, radiowave propagation, Doppler phenomena, and terrain affects are very important factors; taking them into consideration in our improved version of the DEBCK method would provide more reliability during the process of backup node formulation; these factors are to be further investigated in our future work.
Conflicts of Interest:
The authors declare there is no conflict of interest in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 8,876 | 2020-03-16T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Four Common Vascular Endothelial Growth Factor Polymorphisms (−2578C>A, −460C>T, +936C>T, and +405G>C) in Susceptibility to Lung Cancer: A Meta-Analysis
Background and Objective Vascular endothelial growth factor (VEGF) is one of the key initiators and regulators of angiogenesis and it plays a vital role in the onset and development of malignancy. The association between VEGF gene polymorphisms and lung cancer risk has been extensively studied in recent years, but currently available results remain controversial or ambiguous. The aim of this meta-analysis is to investigate the associations between four common VEGF polymorphisms (i.e., −2578C>A, −460C>T, +936C>T and +405C>G) and lung cancer risk. Methods A comprehensive search was conducted to identify all eligible studies to estimate the association between VEGF polymorphisms and lung cancer risk. Crude odds ratios (ORs) with 95% confidence intervals (CIs) were used to evaluate the strength of this association. Results A total of 14 published case-control studies with 4,664 cases and 4,571 control subjects were identified. Our meta-analysis provides strong evidence that VEGF −2578C>A polymorphism is capable of increasing lung cancer susceptibility, especially among smokers and lung squamous cell carcinoma (SCC) patients. Additionally, for +936C>T polymorphism, increased lung cancer susceptibility was only observed among lung adenocarcinoma patients. In contrast, VEGF −460C>T polymorphism may be a protective factor among nonsmokers and SCC patients. Nevertheless, we did not find any association between +405C>G polymorphism and lung cancer risk, even when the groups were stratified by ethnicity, smoking status or histological type. Conclusion This meta-analysis recommends more investigations into the relationship between −2578C>A and −460C>T lung cancer risks. More detailed and well-designed studies should be conducted to identify the causal variants and the underlying mechanisms of the possible associations.
Introduction
Lung cancer, characterized by uncontrolled cell growth in tissues of the lung [1], accounts for 13% (1.6 million) of the total cancer cases and 18% (1.4 million) of total deaths in 2008 [2]. Lung cancer has become a major public health challenge all over the world, especially in China [3]. Thus, understanding the molecular biology and etiology of lung cancer will be pivotal in designing targeted therapies and personalized medicines. The link to smoking as a definite causative agent for lung cancer has been well established from epidemiologic evidence since the 1950s [4,5]. However, epidemiological data showed that only 10-15% of heavy tobacco smokers ultimately develop lung cancer [6,7], suggesting that certain common genetic variants or polymorphisms may influence the risk of lung cancer, particularly among those who have developed lung cancer. Vascular endothelial growth factor (VEGF), also known as vascular permeability factor, is one of the key initiators and regulators of angiogenesis and it plays a critical role in the progress and prognosis of malignancy [8][9][10]. Evidence from in vitro and in vivo experiments have shown that high levels of VEGF expression were found to be associated with tumor growth and metastasis, whereas the inhibition of VEGF signaling results in suppression of both tumor-induced angiogenesis and tumor growth [11][12][13]. Bevacizumab, one of the agents for recognizing and blocking vascular endothelial growth factor A (VEGF-A), has been a promising agent in a combination regimen in improving the overall survival and progression-free survival of breast cancer, non-small-cell lung cancer, renal cell carcinoma, and other solid malignancies [14,15].
The VEGF gene, which contains a 14-kb coding region with eight exons and seven introns, is located on chromosome 6p21.3 [16]. At least 30 single nucleotide polymorphisms (SNPs) in VEGF gene have been identified and described, and some have even been shown to affect the expression of VEGF gene [17,18]. Several previously published meta-analyses showed that VEGF +936C.T (rs3025039), one of the most common polymorphisms, was not associated with gastric cancer [19][20][21], colorectal cancer [22], or breast cancer [23][24][25]. Additionally, these published meta-analyses also showed that other three common VEGF polymorphisms, 21154G.A (rs1570360), 2634G.C (rs2010963) and 2460C.T (rs833061), were not associated with colorectal cancer [26] or breast cancer [24], whereas the VEGF 2634G/C polymorphism was found to be associated with gastric cancer [20]. In recent years, four common polymorphisms in VEGF gene, 22578C.A, 2460C.T, +936C.T, and +405C.G, have been described in several literatures to appear to be involved in the development of lung cancer [27][28][29][30][31]. However, the results remain controversial or inconclusive. To the best of our knowledge, there were no published meta-analyses investigating the association between VEGF gene polymorphisms and lung cancer susceptibility. Therefore, we performed a meta-analysis of all eligible casecontrol or cohort studies to investigate whether these functional VEGF polymorphisms are associated with any increased risk of lung cancer and whether the associations are modulated by smoking status, histological type or other risk factors. We hope our meta-analysis can potentially be important in early lung cancer identification and become part of the therapeutic strategies in combating lung cancer.
Literature Search
Relevant papers for this meta-analysis were systematically identified through literature searches on PubMed, Embase, Web of science and Chinese National Knowledge Infrastructure (CNKI), and Chinese Biomedical Literature Database (CBM) of publications published up to March 9, 2013 relating to VEGF gene polymorphisms and lung cancer risk. As the main search criteria, we used combinations of the following terms: "VEGF", "vascular endothelial growth factor A", "vascular permeability factor", "vascular endothelial growth factor", "lung neoplasms", "pulmonary neoplasms", "bronchial neoplasms", "lung cancer", "bronchial neoplasm", "genetic polymorphism", "single nucleotide polymorphism", "SNP", "mutant", "gene variation". We also reviewed the reference lists of articles retrieved to identify relevant publications.
Inclusion and Exclusion Criteria
Our meta-analysis included genetic association studies fulfilling the following inclusion criteria: (a) a case-control, cohort or crosssectional study must evaluated at least one of four polymorphisms of VEGF gene and lung cancer risk; (b) the diagnosis of lung cancer patients was confirmed pathologically and controls were confirmed as cancer-free patients; (c) inclusion of sufficient data on the size of the sample, odds ratio (OR), and 95% confidence interval (CI) and (d) articles were published in the English or Chinese language.
Studies were excluded when they represented duplicates of previous publications, or were meta-analyses, case report, letters, reviews or editorial articles. Studies investigating the progression, severity, phenotype modification, response to treatment, or survival were also excluded. Additionally, when data was included in multiple studies using the same case series, either the study with the largest sample size or most recent publication was selected. Finally, family-based studies were excluded because of different design settings. Any disagreements on study inclusion were resolved through discussions between the authors. To ensure the rigour of the current meta-analysis, it was designed and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement. The relevant checklist is shown in Supplement S1.
Data Extraction
All data from the included studies were extracted independently by two investigators, using a piloted data standardized form (when it came to conflicting evaluations, an agreement was settled after a discussion): the first author's surname, year of publication, country of origin, published language, gender of study individuals and ethnic subgroups, study design, number of subjects, smoking status, histological types of lung cancer, SNP genotyping methods, genotyping method and detected sample, allele and genotype frequencies, and evidence of Hardy-Weinberg equilibrium (HWE) in controls. In addition, we also compared key study characteristics such as location, study time and authorship to determine the existence of multiple publications from the same study.
Quality Assessment of Included Studies
Two authors independently assessed the quality of the published articles according to the modified STROBE quality score systems [32]. Forty assessment items matching with the quality appraisals were used in this meta-analysis, with scores ranging from 0 to 40. Scores of 0-20, 20-30 and 30-40 were defined as low, moderate and high quality, respectively. The two authors resolved their differences through discussions; if no agreement could be reached, a third author decided on a decision. The modified STROBE quality score system is available in Supplement S2.
Statistical Analysis
Crude ORs together with their corresponding 95% CIs were used to calculate and assess the strength of association between VEGF gene polymorphisms and lung cancer risk under five genetic models: allele, dominant, recessive, homozygous, and heterozygous models. The deviation of frequency from those expected under Hardy-Weinberg equilibrium (HWE) was assessed by Chisquared goodness of fit tests in controls. We explored inter-study variation through prespecified subgrouping of studies according to ethnicity (ie, Caucasian or Asian), gender (ie, female or male), smoking status (ie, smoker or non-smoker), and histological type of lung cancer (ie, adenocarcinoma, squamous cell carcinoma (SCC), and small cell carcinoma (SCLC), where applicable. The statistical significance of the pooled OR was assessed with a Z test. Betweenstudy variation and heterogeneity were estimated using Cochran's Q-statistic, with P,0.05 as a cutoff for statistically significant heterogeneity [33].
We also quantified the effect of heterogeneity with the I 2 test (ranges from 0 to 100%), which represents the proportion of interstudy variability that can be attributed to heterogeneity rather than to chance [34]. The fixed effects model (Mantel-Haenszel method) was used, except when a significant Q-test (P,0.05) or I 2 .50% indicated the existence of heterogeneity among studies; otherwise, the random effects model (DerSimonian-Laird method) was applied for meta-analysis. In order to ensure the reliability of our results, sensitivity analysis was performed by omitting individual studies. Begger's funnel plots were used to detect publication bias. In addition, Egger's linear regression test, which measures funnel plot asymmetry via a natural logarithm scale of OR, was also used to evaluate publication bias [35]. All P-values were two-sided. Analyses were conducted with STATA Version 12.0 software (Stata Corp, College Station, TX).
The Characteristics of Included Studies
Our initial literature search yielded 546 reports, which included 13 population-based [28][29][30][31][36][37][38][39][40][41][42][43][44] and one hospital-based [27] case-control studies meeting the inclusion criteria based on the search criteria for lung cancer susceptibility linking to at least one of four common SNPs of VEGF gene, 22578C.A, 2460C.T, +936C.T, and +405C.G. The flow diagram of the selection of studies and specific reasons for exclusion from the meta-analysis are shown in Figure 1. We studied four VEGF SNPs in 4,664 unrelated lung cancer cases and 4,571 unrelated controls from 14 case-control studies. In the eligible studies, there were 12 studies of subjects of Asian descent and only two studies of subjects of Caucasian descent. All included studies extracted DNA from peripheral blood and the VEGF polymorphisms were determined by classic PCR-RFLP in 12 studies, by TaqMan in 1 study, and by PIRA-PCR in another study. SNP genotypes were tested for departures from HWE for controls and all SNPs were in HWE. The qualities of the included studies were moderately high, with a STROBE score of greater than 20. The selected study characteristics were summarized in Table 1. The evaluation of the associations between VEGF 22578C.A, 2460C.T, +936C.T, and +405C.G polymorphisms and lung cancer risk are presented in Tables 2, 3, 4 and 5.
VEGF 22578C.A Polymorphism and Risk of Lung Cancer
A total of 7 studies with 22 data sets involving 1,596 cases and 1,857 controls were included in the pooled analysis. All subjects were of Asian ethnicity. Meta-analysis results showed that a statistically significant correlation was found between 22578C.A polymorphism and susceptibility to lung cancer in Asians under allele and homozygous models (for OR = 1.31, 95%CI = 1.10-1.57, P = 0.003; OR = 1.79, 95%CI = 1.30-2.46, P,0.001). We Figure 2C).
VEGF +936C.T Polymorphism and Risk of Lung Cancer
Eight studies investigated the association between +936C.T polymorphism and lung cancer susceptibility with a total of 3,288 cases and 3,092 controls. We did not find any association between
Sensitivity Analysis and Publication Bias
Sensitivity analysis was performed to assess the influence of each study on the pooled ORs by omitting individual studies. The analysis results suggested that no individual study significantly altered the pooled ORs in VEGF 22578C.A, 2460C.T, +936C.T, and +405C.G polymorphisms under the allele model (data not shown), which indicates that our studies were statistically accurate.
Begger's funnel plot and Egger's linear regression test were performed on the metadata to assess publication bias of the individual studies. The shapes of the funnel plots did not reveal any evidence of obvious asymmetry in VEGF 22578C.A (A),
Discussion
Evidence from preclinical and clinical studies shows that VEGF, as a predominant angiogenic factor in human cancers, plays a vital role in the carcinogenesis pathway, which has been proved to be a key step in tumor occurrence, progression and prognosis [12,45]. Several functional polymorphisms of VEGF gene have been confirmed to be correlated with high levels of VEGF protein in cancer cells and high tumor angiogenic activity, and they also contribute to the susceptibility and severity of cancer, including lung cancer [36]. Although cigarette smoking is the major cause of lung cancer, only a small fraction of smokers develop this disease during their lifetime, which suggests that both genetic factors and lifestyle risk factors are modulating individual susceptibility to lung cancer risk. A study by Koukourakis et al. reported that non-small cell lung cancer patients with specific VEGF gene polymorphisms develop tumors with low VEGF expression and poor vascularization [46]. In recent years, the associations between VEGF and risk of lung cancer have been extensively investigated, obtaining conflicting results. Therefore, we employed a meta-analysis to explore a more precise evaluation for the associations. To our knowledge, this is the first meta-analysis on this topic.
The present meta-analysis, including 4,664 cases and 4,571 controls from 14 published case-control studies, explored the association between VEGF 22578C.A, 2460C.T, +936C.T, and +405C.G polymorphisms and lung cancer risk. According to our pooled analysis, 22578C.A polymorphism may have a correlation with increased lung cancer risk. This finding may be biologically plausible since Koukourakis et al. observed that 22578CC was associated with lower VEGF expression and lower vascular density in lung cancer tissues compared to the 22578C C/A [46]. When lung cancer cases were stratified by histological subtype, the data indicated that the presence of 22578A was strongly associated with SCC, while similar finding was not observed in SCLC and adenocarcinoma. Although a research reported by Jin et al. demonstrated that 22578AA genotype was significantly associated with low histologic grade tumors [47], the reason for such a divergence of VEGF expression and angiogenic status in tumors of similar histologic type and differentiation remains obscure. Thus, more studies should be conducted to further examine the underlying mechanism. Furthermore, the stratified analysis according to smoking status revealed that 22578A is significantly correlated with increased risk of lung cancer among smokers, suggesting that this polymorphism may not be an independent risk factor, but perhaps an effect modifier that acts synergistically with smoking in lung cancer risk.
As for VEGF 2460C.T polymorphism, the overall data did not show a marked association of this polymorphism with lung cancer risk in any genetic model, even in the subgroup analyses according to ethnicity. However, when stratified analysis by smoking status and histological type were performed, a lower prevalence of 2460T allele was observed among nonsmokers, lung adenocarcinoma cases, and SCC cases. Some clinical evidence suggests that cigarette smoking may stimulate both angiogenesis and VEGF expression, which exacerbates the rapid cancer progression effect of angiogenesis [48,49]. Thus, it is possible that cigarette smoke and VEGF activate multiple effects in lung cancer. For VEGF +936C.T, +405C.G polymorphisms, we found no overall association between these two polymorphisms or its interaction with smoking on lung cancer risk in any genetic model. When stratified analyses were conducted according to ethnicity and histological types of cancer, increased lung cancer susceptibility was only observed among the adenocarcinoma subgroup for +936C.T polymorphism, while there was no statistical difference in genotype distributions between cases and controls for any different subgroups for +405C.G polymorphism. Actually, there exist conflicting reports in some literatures regarding the exact function of the +405G/C polymorphism. Some clinical studies suggested that +405C allele has been associated with lower VEGF production, while some groups reported higher VEGF levels or even no association with +405C/C genotype [17,50,51]. Thus, whether these polymorphisms are truly functional requires further investigation through confirmatory studies and in vitro functional assays.
The current meta-analysis has several limitations that should be noted. First, the sample size in the present study was relatively small, so small, but potential, genetic effects may not be detectable. A small sample size may not have enough statistical power to explore the real association, especially in subgroup analysis. Additionally, as with other complex traits, lung cancer risk may also be modulated by several other genetic markers beyond VEGF, and our meta-analysis emphasized that elucidating the pathogenesis of lung cancer would demand an investigation into the association for many gene variants that may constitute distinct pathophysiological pathways. Third, we identified two studies from Caucasian populations and obtained no data from African populations, thus the two racial groups need to be further studied in the future. Therefore, the results should ideally be confirmed in further studies to strengthen the conclusions. Aside from the limitations listed above, our meta-analysis still has some strength.
To the best of our knowledge, this is the first meta-analysis on the relationship between VEGF gene polymorphisms and lung cancer. We also explored inter-study variations by prespecified subgrouping of studies according to ethnicity, smoking status, gender, and histological type among cases. Furthermore, although this metaanalysis does not accommodate all previously published data, they are limited compared to the evidence that we generated.
In conclusion, this meta-analysis provides strong evidence that VEGF 22578C.A polymorphism is capable of increasing lung cancer susceptibility, especially among smokers and lung SCC patients. Additionally, for +936C.T polymorphism, increased lung cancer susceptibility was only observed among lung adenocarcinoma patients. In contrast, VEGF 2460C.T polymorphism may be a protective factor among nonsmokers, lung adenocarcinoma and SCC patients. However, we did not find any association between +405C.G polymorphism and lung cancer risk, even when the groups were stratified by ethnicity, smoking status or histological type. More detailed and well-designed studies with larger population and different ethnicities are needed to further evaluate these associations.
Supporting Information
Supplement S1 PRISMA Checklist. | 4,153.8 | 2013-10-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Atypical Presentation of Methicillin-Susceptible Staphylococcus aureus Infection in a Dengue-Positive Patient: A Case Report with Virulence Genes Analysis
Concurrent bacteraemia in patients with dengue fever is rarely reported. We report a case of a patient who initially presented with symptoms typical of dengue fever but later succumbed to septic shock caused by hypervirulent methicillin-susceptible Staphylococcus aureus (MSSA). A 50-year-old female patient with hypertension and diabetes mellitus presented with typical symptoms of dengue fever. Upon investigation, the patient reported having prolonged fever for four days prior to hospitalization. Within 24 hours post-admission, the patient developed pneumonia and refractory shock, and ultimately succumbed to multiple-organs failure. Microbiological examination of the blood culture retrieved a pan susceptible MSSA strain. Genomic sequence analyses of the MSSA strain identified genes encoding staphylococcal superantigens (enterotoxin staphylococcal enterotoxin C 3 (SEC3) and enterotoxin-like staphylococcal enterotoxins-like toxin L (SElL)) that have been associated with toxic shock syndrome in human hosts. Genes encoding important toxins (Panton-Valentine leukocidins, alpha-haemolysin, protein A) involved in the development of staphylococcal pneumonia were also present in the MSSA genome. Staphylococcus aureus co-infections in dengue are uncommon but could be exceptionally fatal if caused by a toxin-producing strain. Clinicians should be aware of the risks and signs of sepsis in dengue fever, thus allowing early diagnosis and starting of antibiotic treatment in time to lower the mortality and morbidity rates.
Introduction
Staphylococcus aureus (S. aureus) infection manifests in a wide range of clinical symptoms, including but not limited to bacteraemia, infective endocarditis, infections of the skin, soft tissue, bones, and joints; pneumonia, device-related infections, meningitis, toxic shock syndrome, and urinary tract infections [1]. One of the most frequently encountered infections caused by S. aureus is bacteraemia, which is often associated with significant mortality once manifested in patients [2]. Multiple factors influence S. aureus bacteraemia mortality, including host factors, pathogen-host interactions, and
Case History
A 50-year-old lady, Madam N, was admitted to a hospital in the Segamat district of Johor, Malaysia with acute fever for four days with no diurnal variation or precipitating factors. It was associated with headache, nausea, myalgia, arthralgia, lethargy and reduced oral intake. She had diabetes mellitus and hypertension, but she was not compliant with her treatment and follow up. Her glycaemic control was poor with glycated haemoglobin (HbA1c) of 10.4%. Upon presentation to the emergency department on 15 February 2017, her condition was stable with a blood pressure of 135/75 mmHg and a heart rate of 90 beats per minute. She was febrile at 38.7 • C. Clinically, she had good pulse volume, warm peripheries, and examination of the cardiovascular, respiratory and abdominal system was unremarkable. Further history revealed that she worked as a canteen operator. There was no history of any recent travels, but she stayed in a dengue-endemic area in Malaysia, where dengue fever is a major public health concern.
Her initial blood tests showed haemoglobin of 10.4 g/dL, white blood cell count 6.4 × 10 9 /L, platelet 102 × 10 9 /L and haematocrit of 31.1%. Her renal function and liver function test did not show any significant abnormality at that time (Table 1). A presumptive diagnosis of dengue fever day 4 of illness was made. A dengue serology rapid test was done which showed dengue NS1 positive, while dengue IgM and IgG were negative by Rapid test. As the initial assessment was stable, she was admitted to the general medical ward. She was started on intravenous normal saline hydration of 2 cc/kg/hr. Upon review in the morning, about 8 hours after admission, she was noted to be breathless. Her peripheries were cold, and saturation was only 92% under high flow oxygen of 15 L/min. Her blood pressure was 180/90 mmHg, and she was tachycardic at 148 beats/min. Auscultation of the lungs revealed bilateral crepitations. Arterial blood gas showed severe lactic acidosis with type 1 respiratory failure. She was immediately transferred to the intensive care unit (ICU) and was intubated. She was given a dose of ceftriaxone 2 g to treat for pneumonia. The total fluid balance overnight was positive 1505 cc.
Over the next 12 hours, her condition deteriorated rapidly despite adequate fluid resuscitation. Fluid given was adjusted accordingly to central venous pressure monitoring. She was also given a whole blood transfusion and supported with maximum inotropic support. Antibiotics were changed to meropenem as her condition further deteriorated and doxycycline was added for atypical infections coverage. There was no evidence of any occult bleeding to suggest dengue haemorrhagic fever. She remained persistently lactic acidosis. Unfortunately, the district hospital did not have continuous renal replacement therapy service and she was not stable to be transferred to a tertiary centre. A bedside transthoracic echocardiogram showed a good heart function with an ejection fraction of 65%, no regional wall motion abnormalities and no obvious vegetation. An electrocardiogram showed sinus tachycardia with no ischaemic changes to suggest a cardiac event.
She developed multi-organ dysfunction and refractory shock, and she succumbed to her condition in less than 24 hours from admission. Subsequently, her blood culture grew MSSA and her C-reactive protein was elevated. Her hepatitis B, hepatitis C and human immunodeficiency virus (HIV) screening was negative. There was no obvious wound or abscess seen on clinical examination. In view of her rapid deterioration and subsequent death due to severe sepsis with concomitant dengue fever, we sent her blood culture sample for further genomic analysis to further study the strain's virulence and pathogenicity factors.
Identification of Virulence Determinants in MSSA Genome
The MSSA strain was sent to a research laboratory at the University of Malaya for genomic assessment and was coded as HS-MSSA. The genomic DNA of the HS-MSSA strain was extracted using a commercial DNA extraction kit (Qiagen, Hilden, Germany) and subjected to whole-genome sequencing via the Illumina Miseq platform (GA2x, pipeline version 1.80) by a commercial sequencing vendor. This Whole Genome Shotgun project has been deposited at the National Center for Biology Information (NCBI) GenBank under the accession number VCMW00000000. The version described in this paper is version VCMW01000000. Virulence determinants in the draft genome of HS-MSSA were identified by using the web-based tool VFanalyzer which retrieves virulence gene sequences from the curated database of Virulence Factors for Pathogenic Bacteria (VFDB; http://www.mgc.ac.cn/VFs/) [17]. Virulence factors related to bacterial adherence, enzyme production, host immune evasion, secretion system and toxins production were identified in the genome of the HS-MSSA strain ( Table 2). Multiple toxin genes encoding for exfoliative toxin (type A), haemolysins (alpha, beta, gamma, and delta), staphylococcal enterotoxin C (subtype SEC3) and SE-like toxin L (SElL), exotoxins, leukotoxins, and Panton-Valentine leukocidins (PVL) were identified in the genome of HS-MSSA strain. The global accessory gene regulator operon (agrABCD) and the staphylococcal accessory regulator (sar) system were also identified in the HS-MSSA genome.
Antimicrobial Susceptibility Testing
The Kirby-Bauer disk diffusion method was used to determine the antimicrobial susceptibility profile of the HS-MSSA strain according to the Clinical and Laboratory Standards Institute (CLSI) guideline (2018) [20]. The HS-MSSA strain was found to be pan-susceptible to all antimicrobial agents tested. The growth of the HS-MSSA strain was effectively inhibited by amoxicillin-clavulanate, cefoxitin, ceftriaxone, ciprofloxacin, clindamycin, cloxacillin, erythromycin, fusidic acid, gentamicin, penicillin-G, piperacillin-tazobactam, rifampicin, and sulfamethoxazole-trimethoprim.
Discussion
This is the first case report of recovery and identification of a hypervirulent MSSA co-infection in a dengue patient in Malaysia. The bacterial infection was classified as community-acquired due to its rapid onset within 24 hours post-admission. The patient initially presented as dengue fever and not in shock at the time of admission. However, the patient rapidly developed symptoms of pneumonia and refractory shock and succumbed to multiorgan failures despite intensive therapeutic efforts. Isolation of an MSSA strain from the patient's blood culture confirmed bacteraemia in the patient. Genomic sequence analyses revealed multiple virulence determinants within the genome of HS-MSSA.
Upon admission, the patient was presumptively diagnosed as dengue fever based on clinical presentations, and subsequent serology, tests showed positive dengue NS1 protein. NS1-based dengue tests are often specific (specificity ranged between 86.1% and 100%), but the sensitivity varies greatly depending on the dengue virus serotypes, duration of illness, and types of infection (primary or secondary dengue) [21]. Although uncommon, a false-positive result could occur in dengue NS1 testing due to cross-reactions with other flaviviruses, such as zika virus [22,23]. False-positive dengue NS1 tests have also been documented in patients with haematological malignancies [24]. Simultaneous testing of NS1 with IgM and IgG antibodies is essential for enhanced accuracy in dengue diagnosis [25,26]. However, dengue-specific IgM and IgG antibodies tested negative in the patient's serum during the serology testing. Nonetheless, the patient was at day 4 of illness during the time of admission, and very often, the detection of dengue-specific antibodies is only possible after 4 or 5 days of disease onset, with acceptable diagnostic accuracy on the sixth day onwards [27,28]. Unfortunately, the patient had succumbed to multiple organ failure within 24 hours post-admission. Hence, further testing could not be done to confirm dengue virus infection.
S. aureus often harbours multiple toxin genes in its genome, and the most prominent toxins are the staphylococcal superantigens (staphylococcal enterotoxins, SEs; SE-like proteins, SEls; toxic shock syndrome toxin 1, TSST-1) [4]. The presence of genes encoding for SEC3 and SElL in the HS-MSSA genome infers its ability to produce these potent immunostimulatory superantigenic toxins. These superantigens are capable of binding directly to the major histocompatibility complex class II molecules on the antigen-presenting cells (APC) outside of the antigen groove, thus bypassing APC processing prior to T cell presentation [4]. This subsequently leads to the activation of an abundance of T cells, causing the massive release of chemokines and pro-inflammatory cytokines that eventually result in lethal toxic shock syndrome. The SEC, together with SEB, is the predominant SE serotype of S. aureus that causes non-menstrual toxic shock syndrome. Besides superantigenic activity, SEs and SEls are also pyrogenic, emetic (only for SEs), and capable of inducing lethal hypersensitivity to endotoxins [29].
Staphylococcal pneumonia is often associated with the production of PVL, protein A, and alpha-haemolysin by S. aureus [30]. The presence of these genes in the genome of HS-MSSA could have explained the manifestation of pneumonia in the patient. A previous study by Labandeira-Rey et al. [31] showed that PVL alone is sufficient to cause pneumonia. Moreover, the expression of PVL could further induce changes in the transcriptional levels of the genes encoding both secreted and cell-associated virulence factors in S. aureus [31]. Alpha-haemolysin is capable of causing damage to lung epithelial tissue, and when combined with high levels of PVL, often results in severe necrotizing pneumonia in humans [32]. Protein A, at the same time, induces an inflammatory response of the airway by activating the tumour-necrosis factor-α receptor, TNFR1, hence playing an essential role in the pathogenesis of staphylococcal pneumonia [33].
PVL-producing S. aureus has been found to cause life-threatening infection and does not respond to antimicrobial treatment despite in vitro susceptibility to the antimicrobial agents, resulting in a clinical condition termed PVL syndrome [34]. PVL syndrome is commonly caused by community-acquired S. aureus infection, with a higher frequency of methicillin-susceptible subtypes [34]. A similar observation was made in this clinical case, whereby the patient's condition did not improve after the administration of ceftriaxone, meropenem, and doxycycline, despite in vitro susceptibility of the HS-MSSA strain to these antimicrobial agents later being verified by laboratory tests. In a previously reported PVL-producing S. aureus infection, the patient was eventually treated with a combination of clindamycin and daptomycin [34]. This notion supported the empirical use of clindamycin in patients to prevent possible toxin-mediated sepsis.
Virulence genes expression in S. aureus is mainly regulated by the agr system (agrABCD and delta-haemolysin) [35,36]. Together with the staphylococcal accessory regulator (sar) system, the agr system is found to upregulate the expression of SEB and SEC [4]. Besides that, the role of the agr system is also proved to be essential in the pathogenesis of acute lethal staphylococcal pneumonia by regulating the expression of alpha-haemolysin [37]. The pathogenicity of the HS-MSSA strain might be attributed to the concerted efforts of the intact agr and sar systems in regulating the expression of the array of virulence genes in its genome. Moreover, the hypervirulence of the HS-MSSA strain could explain the predominance of MSSA over MRSA in concurrent bacteraemia in dengue patients. Nonetheless, future studies involving more S. aureus isolates from dengue dual infections should be conducted to verify this hypothesis.
The majority of the prominent toxin-producing genes are found in regions with considerable genetic variations, mostly associated with phage regions within the HS-MSSA genome (Unpublished data). Contig-13, which harbours multiple enterotoxins and exotoxins, was predicted as an incomplete phage region associated with the superantigen-encoding staphylococcal pathogenicity island (SaPI). Contig-4 harbours multiple virulence factors including leukotoxins (PVL and LukED) within a phage region that also encodes for SaPIn3-associated proteins. The leukotoxin LukGH identified in contig-11 was found located next to a phage region. All these evidences support the previous notion that highly mobile genetic elements, such as phage-associated pathogenicity islands, play an important role in the evolution of this hypervirulent S. aureus lineage by providing survival advantages to enhance its pathogenicity and host-adaptability [38].
Most often, the outcome of a disease does not depend solely on the pathogenicity of the etiologic agent, but host factors play an equally important role. The vulnerability in the host immune defence mechanisms and microbial virulence involves a two-way host-pathogen interaction, causing severe sepsis in the host [39]. In the fatal sepsis reported in this case, the patient's immune system might have been suppressed by the dengue virus, resulting in host vulnerability to MSSA co-infection. Evidence has shown that in neonates and the elderly, immune cells infected by dengue virus produced fewer cytokines leading to an immunosuppressed state of the hosts [40]. Indeed, dengue patients with concurrent bacteraemia are often older and tended to have prolonged fever [41]. In Malaysia, 90% of fatal cases of dengue haemorrhagic fever occurred in adult females with a median age of 32-years and an average of 4.7 days of illness prior to hospitalization [42]. Old age is the most prominent risk factor associated with dengue mortality in Malaysia [43]. Moreover, underlying comorbidities, particularly hypertension and diabetes mellitus, are commonly associated with more severe dengue cases and greater mortality rates [8,12,42]. The combined effect of old age (50-year-old), prolonged duration of fever (4 days), comorbid chronic illnesses (hypertension and diabetes mellitus), and concurrent bacteraemia by an enterotoxin-producing strain of MSSA, might have caused the rapid deterioration and eventually the death of the dengue patient in this case.
To date, there are still limited effective emergency therapeutic options for established refractory shock. Short-term mortality occurs in 50% of critically ill patients who developed refractory shock [44]. Therefore, early and aggressive interventions should be implemented before refractory shock develops. Nandhabalan and colleagues who are experienced in toxin-mediated sepsis have recommended the use of adjunctive antimicrobial therapy, where clindamycin is administered empirically in addition to broad-spectrum antibiotics until microbiological analyses have proven the absence of toxin-producing pathogens, or until organs' dysfunction are stabilised [45]. Clindamycin functions to inhibit bacterial protein synthesis and most importantly, prevents the production of superantigens and is clinically proven to be effective in the treatment of toxic shock syndrome [46]. Nonetheless, increasing clindamycin resistance has been observed among S. aureus populations during recent years [47]. Therefore, more regional susceptibility data should be collected, and clinicians should be aware of antimicrobial resistance patterns of the MSSA population when choosing empirical regimens.
This case report documents the first virulence analysis of an MSSA strain isolated in a fatal case of dengue dual infections in Malaysia. The presence of multiple virulence factors, especially the superantigens SEC3 and SElL, together with the regulatory genes, might have contributed to the hypervirulence of the HS-MSSA strain. Although concurrent bacteraemia in dengue patients remains rarely reported, especially one caused by S. aureus, the greater mortality risk of such dual infections should not be overlooked. Early diagnosis and interventions are essential to prevent unfavourable clinical outcomes in patients. Upon examination of patients presenting symptoms of dengue fever, the attending physicians should also consider the possibility of false-positive serology test results and bacteraemia. Delayed diagnosis can prove fatal especially when patients are infected by toxin-producing bacterial pathogens, where bacteraemia can rapidly develop into refractory septic shock and death may occur within a short time. Further, the lack of reporting and attention for concurrent bacteraemia in dengue patients in Malaysia necessitates increasing efforts in surveillance. The actual prevalence of co-infections and the characteristics of the etiologic agents may help medical practitioners in making appropriate medical decisions and treatment regimens to prevent dengue mortality. | 3,868.2 | 2020-03-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Copy number amplification of ENSA promotes the progression of triple-negative breast cancer via cholesterol biosynthesis
Copy number alterations (CNAs) are pivotal genetic events in triple-negative breast cancer (TNBC). Here, our integrated copy number and transcriptome analysis of 302 TNBC patients reveals that gene alpha-endosulfine (ENSA) exhibits recurrent amplification at the 1q21.3 region and is highly expressed in TNBC. ENSA promotes tumor growth and indicates poor patient survival in TNBC. Mechanistically, we identify ENSA as an essential regulator of cholesterol biosynthesis in TNBC that upregulates the expression of sterol regulatory element-binding transcription factor 2 (SREBP2), a pivotal transcription factor in cholesterol biosynthesis. We confirm that ENSA can increase the level of p-STAT3 (Tyr705) and activated STAT3 binds to the promoter of SREBP2 to promote its transcription. Furthermore, we reveal the efficacy of STAT3 inhibitor Stattic in TNBC with high ENSA expression. In conclusion, the amplification of ENSA at the 1q21.3 region promotes TNBC progression and indicates sensitivity to STAT3 inhibitors.
1. The FUSCC TNBC used for integrated analysis included 302 women with TNBC. This cohort was from Fudan University Shanghai Cancer Center (FUSCC) TNBC cohort (Sequence Read Archive (SRA) dataset: SRP157974; Gene Expression Omnibus (GEO) dataset: GSE118527) .Among these 465 patients, 302 patients who had both RNA-seq data and copy number data were included in our study for screening candidate genes. 2. Other sample sizes determined for experiments are included in the relevant figure legends.
no data were excluded The reproducibility for each analysis was confirmed by at least three independent experiments.
For animal experiments, Female 6-week-old NOD.CB17-Prkdc scid/JSlac mice underwent randomization before cell injection. For the treatment groups, when the tumor volumes reached 50-100 mm3, mice underwent the second randomization before beding assigned to vehicle or Stattic (10 mg/kg) treatment.
Data acquisition in the studies, was conducted in a blinded manner. For animal experiments, tumor volumes were measured without knowing the cell type injected (by assigning numerical IDs). For the other experiments, samples were also assigned numerical IDs to process in a blind fashion.
Briefly describe the study type including whether data are quantitative, qualitative, or mixed-methods (e.g. qualitative cross-sectional, quantitative experimental, mixed-methods case study).
State the research sample (e.g. Harvard university undergraduates, villagers in rural India) and provide relevant demographic information (e.g. age, sex) and indicate whether the sample is representative. Provide a rationale for the study sample chosen. For studies involving existing datasets, please describe the dataset and source.
Describe the sampling procedure (e.g. random, snowball, stratified, convenience). Describe the statistical methods that were used to predetermine sample size OR if no sample-size calculation was performed, describe how sample sizes were chosen and provide a rationale for why these sample sizes are sufficient. For qualitative data, please indicate whether data saturation was considered, and what criteria were used to decide that no further sampling was needed.
Provide details about the data collection procedure, including the instruments or devices used to record the data (e.g. pen and paper, computer, eye tracker, video or audio equipment) whether anyone was present besides the participant(s) and the researcher, and whether the researcher was blind to experimental condition and/or the study hypothesis during data collection.
Indicate the start and stop dates of data collection. If there is a gap between collection periods, state the dates for each sample cohort.
If no data were excluded from the analyses, state so OR if data were excluded, provide the exact number of exclusions and the rationale behind them, indicating whether exclusion criteria were pre-established.
State how many participants dropped out/declined participation and the reason(s) given OR provide response rate OR state that no participants dropped out/declined participation.
If participants were not allocated into experimental groups, state so OR describe how participants were allocated to groups, and if allocation was not random, describe how covariates were controlled.
Ecological, evolutionary & environmental sciences study design
All studies must disclose on these points even when the disclosure is negative. Note the sampling procedure. Describe the statistical methods that were used to predetermine sample size OR if no sample-size calculation was performed, describe how sample sizes were chosen and provide a rationale for why these sample sizes are sufficient.
Describe the data collection procedure, including who recorded the data and how.
Indicate the start and stop dates of data collection, noting the frequency and periodicity of sampling and providing a rationale for these choices. If there is a gap between collection periods, state the dates for each sample cohort. Specify the spatial scale from which the data are taken If no data were excluded from the analyses, state so OR if data were excluded, describe the exclusions and the rationale behind them, indicating whether exclusion criteria were pre-established.
Describe the measures taken to verify the reproducibility of experimental findings. For each experiment, note whether any attempts to repeat the experiment failed OR state that all attempts to repeat the experiment were successful.
Describe how samples/organisms/participants were allocated into groups. If allocation was not random, describe how covariates were controlled. If this is not relevant to your study, explain why.
Describe the extent of blinding used during data acquisition and analysis. If blinding was not possible, describe why OR explain why blinding was not relevant to your study.
Describe the study conditions for field work, providing relevant parameters (e.g. temperature, rainfall).
State the location of the sampling or experiment, providing relevant parameters (e.g. latitude and longitude, elevation, water depth).
Describe the efforts you have made to access habitats and to collect and import/export your samples in a responsible manner and in compliance with local, national and international laws, noting any permits that were obtained (give the name of the issuing authority, the date of issue, and any identifying information).
Describe any disturbance caused by the study and how it was minimized.
March 2021
Antibodies Tick this box to confirm that the raw and calibrated dates are available in the paper or in Supplementary Information.
Ethics oversight
Note that full information on the approval of the study protocol must also be provided in the manuscript.
The cell were authenticated by ATCC service-STR profiling.
The cell lines were regularly confirmed to be negative for mycoplasma contamination with a Mycoplasma Detecting Kit (Vazyme).
no Provide provenance information for specimens and describe permits that were obtained for the work (including the name of the issuing authority, the date of issue, and any identifying information). Permits should encompass collection and, where applicable, export.
Indicate where the specimens have been deposited to permit free access by other researchers.
If new dates are provided, describe how they were obtained (e.g. collection, storage, sample pretreatment and measurement), where they were obtained (i.e. lab name), the calibration program and the protocol for quality assurance OR state that no new dates are provided.
Identify the organization(s) that approved or provided guidance on the study protocol, OR state that no ethical approval or guidance was required and explain why not.
Female 6-week-old NOD.CB17-Prkdc scid/JSlac mice were used for the in vivo mouse xenograft models. Female 5-week-old nu/nu mice were used for the establishment of mini-PDX models. Mice were exposed to 12h light, 12h darkness cycle at temperature of 21 ±3 خ and an average of 55% humidity.
March 2021
Wild animals Field-collected samples
Ethics oversight
Note that full information on the approval of the study protocol must also be provided in the manuscript.
Human research participants
Policy information about studies involving human research participants Population characteristics
Recruitment
Ethics oversight Note that full information on the approval of the study protocol must also be provided in the manuscript.
Clinical data Policy information about clinical studies
All manuscripts should comply with the ICMJEguidelines for publication of clinical research and a completedCONSORT checklist must be included with all submissions.
Clinical trial registration
Study protocol
Data collection
Outcomes Dual use research of concern Policy information about dual use research of concern Hazards Could the accidental, deliberate or reckless misuse of agents or technologies generated in the work, or the application of information presented in the manuscript, pose a threat to: Provide the trial registration number from ClinicalTrials.gov or an equivalent agency.
Note where the full trial protocol can be accessed OR if not available, explain why.
Describe the settings and locales of data collection, noting the time periods of recruitment and data collection.
Describe how you pre-defined primary and secondary outcome measures and how you assessed these measures.
March 2021
Experiments of concern Does the work involve any of these experiments of concern: No Yes Confirm that both raw and final processed data have been deposited in a public database such as GEO.
Confirm that you have deposited or provided access to graph files (e.g. BED files) for the called peaks.
Data access links
May remain private before publication. The axis labels state the marker and fluorochrome used (e.g. CD4-FITC).
Files in database submission
The axis scales are clearly visible. Include numbers along axes only for bottom left plot of group (a 'group' is an analysis of identical markers).
All plots are contour plots with outliers or pseudocolor plots.
A numerical value for number of cells or percentage (with statistics) is provided.
Sample preparation
For "Initial submission" or "Revised version" documents, provide reviewer access links. For your "Final submission" document, provide a link to the deposited data.
Provide a list of all files available in the database submission.
Provide a link to an anonymized genome browser session for "Initial submission" and "Revised version" documents only, to enable peer review. Write "no longer applicable" for "Final submission" documents.
Describe the experimental replicates, specifying number, type and replicate agreement.
Describe the sequencing depth for each experiment, providing the total number of reads, uniquely mapped reads, length of reads and whether they were paired-or single-end.
Describe the antibodies used for the ChIP-seq experiments; as applicable, provide supplier name, catalog number, clone name, and lot number.
Specify the command line program and parameters used for read mapping and peak calling, including the ChIP, control and index files used.
Describe the methods used to ensure data quality in full detail, including how many peaks are at FDR 5% and above 5-fold enrichment.
Describe the software used to collect and analyze the ChIP-seq data. For custom code that has been deposited into a community repository, provide accession details.
For cell cycle analysis, a total of 1×106 cells were fixed with precooled 70% ethanol overnight and then processed using the Cell Cycle and Apoptosis Analysis Kit (Yeasen, #40301ES50) according to the manufacturer's instructions. For the cell apoptosis assay, 5×105 cells were collected and incubated with annexin V-fluorescin isothiocyanate (FITC) and propidium iodide (PI) staining solution from the Annexin V-FITC/PI Apoptosis Detection Kit (Yeasen, #40302ES50). | 2,479.4 | 2022-02-10T00:00:00.000 | [
"Medicine",
"Biology"
] |
Strong nonlinear instability and growth of Sobolev norms near quasiperiodic finite-gap tori for the 2D cubic NLS equation
We consider the defocusing cubic nonlinear Schr\"odinger equation (NLS) on the two-dimensional torus. The equation admits a special family of elliptic invariant quasiperiodic tori called finite-gap solutions. These are inherited from the integrable 1D model (cubic NLS on the circle) by considering solutions that depend only on one variable. We study the long-time stability of such invariant tori for the 2D NLS model and show that, under certain assumptions and over sufficiently long timescales, they exhibit a strong form of transverse instability in Sobolev spaces $H^s(\mathbb{T}^2)$ ($0<s<1$). More precisely, we construct solutions of the 2D cubic NLS that start arbitrarily close to such invariant tori in the $H^s$ topology and whose $H^s$ norm can grow by any given factor. This work is partly motivated by the problem of infinite energy cascade for 2D NLS, and seems to be the first instance where (unstable) long-time nonlinear dynamics near (linearly stable) quasiperiodic tori is studied and constructed.
Introduction
A widely held principle in dynamical systems theory is that invariant quasiperiodic tori play an important role in understanding the complicated long-time behavior of Hamiltonian ODE and PDE. In addition to being important in their own right, the hope is that such quasiperiodic tori can play an important role in understanding other, possibly more generic, dynamics of the system by acting as islands in whose vicinity orbits might spend long periods of time before moving to other such islands. The construction of such invariant sets for Hamiltonian PDE has witnessed an explosion of activity over the past thirty years after the success of extending KAM techniques to infinite dimensions. However, the dynamics near such tori is still poorly understood, and often restricted to the linear theory. The purpose of this work is to take a step in the direction of understanding and constructing non-trivial nonlinear dynamics in the vicinity of certain quasiperiodic solutions for the cubic defocusing NLS equation. In line with the above philosophy emphasizing the role of invariant quasiperiodic tori for other types of behavior, another aim is to push forward a program aimed at proving infinite Sobolev norm growth for the 2D cubic NLS equation, an outstanding open problem. where (x, y) ∈ T 2 = R 2 /(2πZ) 2 , t ∈ R and u : R × T 2 → C. All the results in this paper extend trivially to higher dimensions d ≥ 3 by considering solutions that only depend on two variables 1 . This is a Hamiltonian PDE with conserved quantities: i) the Hamiltonian |∇u(x, y)| 2 + 1 2 |u(x, y)| 4 dx dy, We shall exhibit solutions whose energy moves from very high frequencies towards low frequencies (backward or inverse cascade), as well as ones that exhibit cascade in the opposite direction (forward or direct cascade). Such cascade phenomena have attracted a lot of attention in the past few years as they are central aspects of various theories of turbulence for nonlinear systems. For dispersive PDE, this goes by the name of wave turbulence theory which predicts the existence of solutions (and statistical states) of (2D-NLS) that exhibit a cascade of energy between very different length-scales. In the mathematical community, Bourgain drew attention to such questions of energy cascade by first noting that it can be captured in a quantitative way by studying the behavior of the Sobolev norms of the solution In his list of Problems on Hamiltonian PDE [Bou00], Bourgain asked whether there exist solutions that exhibit a quantitative version of the forward energy cascade, namely solutions whose Sobolev norms H s , with s > 1, are unbounded in time We should point out here that such growth cannot happen for s = 0 or s = 1 due to the conservation laws of the equations. For other Sobolev indices, there exists polynomial upper bounds for the growth of Sobolev norms (cf. [Bou96,Sta97,CDKS01,Bou04,Zho08,CW10,Soh11a,Soh12,Soh11b,CKO12,PTV17]). Nevertheless, results proving actual growth of Sobolev norms are much more scarce. After seminal works by Bourgain himself [Bou96] and Kuksin [Kuk96,Kuk97a,Kuk97b], the landmark result in [CKS + 10] played a fundamental importance in the recent progress, including this work: It showed that for any s > 1, δ 1, K 1, there exist solutions u of (2D-NLS) such that (1.5) u(0) H s ≤ δ and u(T ) H s ≥ K for some T > 0. Even if not mentioned in that paper, the same techniques also lead to the same result for s ∈ (0, 1). This paper induced a lot of activity in the area [ The above-cited works revealed an intimate connection between Lypunov instability and Sobolev norm growth. Indeed, the solution u = 0 of (2D-NLS) is an elliptic critical point and is linearly stable in all H s . From this point of view, the result in [CKS + 10] given in (1.5) can be interpreted as the Lyapunov instability in H s , s = 1, of the elliptic critical point u = 0 (the first integrals (1.1) and (1.2) imply Lyapunov stability in the H 1 and L 2 topology). It turns out that this connection runs further, particularly in relation to the question of finding solutions exhibiting (1.4). As was observed in [Han14], one way to prove the existence of such solutions is to prove that, for sufficiently many φ ∈ H s , an instability similar to that in (1.5) holds, but with u(0) − φ H s ≤ δ. In other words, proving long-time instability as in (1.5) but with solutions starting δ−close to φ, and for sufficiently many φ ∈ H s implies the existence (and possible genericness) of unbounded orbits satisfying (1.4). Such a program (based on a Baire-Category argument) was applied successfully for the Szegö equation on T in [GG15].
Motivated by this, one is naturally led to studying the Lyapunov instability of more general invariant objects of (2D-NLS) (or other Hamiltonian PDEs), or equivalently to investigate whether one can achieve Sobolev norm explosion starting arbitrarily close to a given invariant object. The first work in this direction is by one of the authors [Han14]. He considers the plane waves u(t, x) = Ae i(mx−ωt) with ω = m 2 + A 2 , periodic orbits of (2D-NLS), and proves that there are orbits which start δ-close to them and undergo H s Sobolev norm explosion, 0 < s < 1. This implies that the plane waves are Lyapunov unstable in these topologies. Stability results for plane waves in H s , s > 1, on shorter time scales are provided in [FGL14].
The next step in this program would be to study such instability phenomena near higher dimensional invariant objects, namely quasiperiodic orbits. This is the purpose of this work, in which we will address this question for the family of finite-gap tori of (1D-NLS) as solutions to the (2D-NLS). To control the linearized dynamics around such tori, we will impose some Diophantine (strongly non-resonant) conditions on the quasiperiodic frequency parameters. This allows us to obtain a stable linearized operator (at least with respect to the perturbations that we consider), which is crucial to control the delicate construction of the unstable nonlinear dynamics.
1.3. Statement of results. Roughly speaking, we will construct solutions to (2D-NLS) that start very close to the finite-gap tori in appropriate topologies, and exhibit either backward cascade of energy from high to low frequencies, or forward cascade of energy from low to high frequencies. In the former case, the solutions that exhibit backward cascade start in an arbitrarily small vicinity of a finite-gap torus in Sobolev spaces H s (T 2 ) with 0 < s < 1, but grow to become larger than any pre-assigned factor K 1 in the same H s (higher Sobolev norms H s with s > 1 decrease, but they are large for all times). In the latter case, the solutions that exhibit forward cascade start in an arbitrarily small vicinity of a finite-gap torus in L 2 (T 2 ), but their H s Sobolev norm (for s > 1) exhibits a growth by a large multiplicative factor K 1 after a large time. We shall comment further on those results after we state the theorems precisely.
To do that, we need to introduce the Birkhoff coordinates for equation 1D-NLS. Grébert and Kappeler showed in [GK14a] that there exists a globally defined map, called the Birkhoff map, such that ∀s ≥ 0 Therefore in these coordinates, called Birkhoff coordinates, equation (1D-NLS) becomes a chain of nonlinear harmonic oscillators and it is clear that the phase space is foliated by finite and infinite dimensional tori with periodic, quasiperiodic or almost periodic dynamics, depending on how many of the actions I m (which are constant!) are nonzero and on the properties of rational dependence of the frequencies.
In this paper we are interested in the finite dimensional tori with quasiperiodic dynamics. Fix d ∈ N and consider a set of modes Fix also a value for the actions . . , d, z m = 0 for m ∈ S 0 , which is supported on the set S 0 . Any orbit on this torus is quasiperiodic (or periodic if the frequencies of the rigid rotation are completely resonant). We will impose conditions to have non-resonant quasiperiodic dynamics. This will imply that the orbits on T d are dense. By equation (1.7), it is clear that this torus, as an invariant object of equation 1D-NLS, is stable for this equation for all times in the sense of Lyapunov.
The torus (1.9) (actually, its pre-image Φ −1 (T d ) though the Birkhoff map) is also an invariant object for the original equation (2D-NLS). The main result of this paper will show the instability (in the sense of Lyapunov) of this invariant object. Roughly speaking, we show that under certain assumptions (on the choices of modes (1.8) and actions (1.9)) these tori are unstable in the H s (T 2 ) topology for s ∈ (0, 1). Even more, there exist orbits which start arbitrarily close to these tori and undergo an arbitrarily large H s -norm explosion.
We will abuse notation, and identify H s (T) with the closed subspace of H s (T 2 ) of functions depending only on the x variable. Consequently, Theorem 1.1. Fix a positive integer d. For any choice of d modes S 0 (see (1.8)) satisfying a genericity condition (namely Definition 4.1 with sufficiently large L), there exists ε * > 0 such that for any ε ∈ (0, ε * ) there exists a positive measure Cantor-like set I ⊂ (ε/2, ε) d of actions, for which the following holds true for any torus T d = T d (S 0 , I 0 m ) with I 0 m ∈ I: (1) For any s ∈ (0, 1), δ > 0 small enough, and K > 0 large enough, there exists an orbit u(t) of (2D-NLS) and a time (2) For any s > 1, and any K > 0 large enough, there exists an orbit u(t) of (2D-NLS) and a time Here σ, σ > 0 are independent of K.
1.4. Comments and remarks on Theorem 1.1: (1) The relative measure of the set I of admissible actions can be taken as close to 1 as desired. Indeed, by taking smaller ε * , one has that the relative measure satisfies for some constant C > 0 and 0 < κ < 1 independent of ε * > 0. The genericity condition on the set S 0 and the actions (I m ) m∈S 0 ∈ I ensure that the linearized dynamics around the resulting torus T d is stable for the perturbations we need to induce the nonlinear instability. In fact, a subset of those tori is even linearly stable for much more general perturbations as we remark below.
(2) Why does the finite gap solution need to be small? To prove Theorem 1.1 we need to analyze the linearization of equation (2D-NLS) at the finite gap solution (see Section 4). Roughly speaking, this leads to a Schrödinger equation with a quasi-periodic potential. Luckily, such operators can be reduced to constant coefficients via a KAM scheme. This is known as reducibility theory which allows one to construct a change of variables that casts the linearized operator into an essentially constant coefficient diagonal one. This KAM scheme was carried out in [MP18], and requires the quasi-periodic potential, given by the finite gap solution here, to be small for the KAM iteration to converge. That being said, we suspect a similar result to be true for non-small finite gap solutions.
(3) To put the complexity of this result in perspective, it is instructive to compare it with the stability result in [MP18]. In that paper, it is shown that a proper subset I ⊂ I of the tori considered in Theorem 1.1 are Lyapunov stable in H s , s > 1, but for shorter time scales than those considered in this theorem. More precisely, all orbits that are initially δ-close to T d in H s stay Cδ-close for some fixed C > 0 for time scales t ∼ δ −2 . The same stability result (with a completely identical proof) holds if we replace H s by F 1 norm (functions whose Fourier series is in 1 ). In fact, by trivially modifying the proof, one could also prove stability on the δ −2 timescale in F 1 ∩ H s for 0 < s < 1. What this means is that the solutions in the first part of Theorem 1.1 remains within Cδ of T d up to times ∼ δ −2 but can diverge vigorously afterwards at much longer time scales. It is also worth mentioning that the complementary subset I \ I has a positive measure subset where tori are linearly unstable since they possess a finite set of modes that exhibit hyperbolic behavior. In principle, hyperbolic directions are good for instability, but they are not useful for our purposes since they live at very low frequencies, and hence cannot be used (at least not by themselves alone) to produce a substantial growth of Sobolev norms. We avoid dealing with these linearly unstable directions by restricting our solution to an invariant subspace on which these modes are at rest. (4) It is expected that a similar statement to the first part of Theorem 1.1 is also true for s > 1.
This would be a stronger instability compared to that in the second part (for which the initial perturbation is small in L 2 but not in H s ). Nevertheless, this case cannot be tackled with the techniques considered in this paper. Indeed, one of the key points in the proof is to perform a (partial) Birkhoff normal form up to order 4 around the finite gap solution. The terms which lead to the instabilities in Theorem 1.1 are quasi-resonant instead of being completely resonant. Working in the H s topology with s ∈ (0, 1), such terms can be considered completely resonant with little error on the timescales where instability happens. However, this cannot be done for s > 1, for which one might be able to eliminate those terms by a higher order normal form (s > 1 gives a stronger topology and can thus handle worse small divisors). This would mean that one needs other resonant terms to achieve growth of Sobolev norms. The same difficulties were encountered in [Han14] to prove the instability of the plane waves of (2D-NLS). (5) For finite dimensional Hamiltonian dynamical systems, proving Lyapunov instability for quasiperiodic Diophantine elliptic (or maximal dimensional Lagrangian) tori is an extremely difficult task. Actually all the obtained results [CZ13,GK14b] deal with C r or C ∞ Hamiltonians, and not a single example of such instability is known for analytic Hamiltonian systems. In fact, there are no results of instabilities in the vicinity of non-resonant elliptic critical points or periodic orbits for analytic Hamiltonian systems (see [LCD83,Dou88,KMV04] for results on the C ∞ topology). The present paper proves the existence of unstable Diophantine elliptic tori in an analytic infinite dimensional Hamiltonian system. Obtaining such instabilities in infinite dimensions is, in some sense, easier: having infinite dimensions gives "more room" for instabilities. (6) It is well known that many Hamiltonian PDEs possess quasiperiodic invariant tori [Way90, Most of these tori are normally elliptic and thus linearly stable. It is widely expected that the behavior given by Theorem 1.1 also arises in the neighborhoods of (many of) those tori. Nevertheless, it is not clear how to apply the techniques of the present paper to these settings.
1.5. Scheme of the proof. Let us explain the main steps to prove Theorem 1.1.
(1) Analysis of the 1-dimensional cubic Schrödinger equation. We express the 1-dimensional cubic NLS in terms of the Birkhoff coordinates. We need a quite precise knowledge of the Birkhoff map (see Theorem 3.1). In particular, we need that it "behaves well" in 1 . This is done in the paper [Mas18b] and summarized in Section 3. In Birkhoff coordinates, the finite gap solutions are supported in a finite set of variables. We use such coordinates to express the Hamiltonian (1.1) in a more convenient way.
(2) Reducibility of the 2-dimensional cubic NLS around a finite gap solution. We reduce the linearization of the vector field around the finite gap solutions to a constant coefficients diagonal vector field. This is done in [MP18] and explained in Section 4. In Theorem 4.4 we give the conditions to achieve full reducibility. In effect, this transforms the linearized operator around the finite gap into a constant coefficient diagonal (in Fourier space) operator, with eigenvalues {Ω } ∈Z 2 \S 0 . We give the asymptotics of these eigenvalues in Theorem 4.6, which roughly speaking look like (1.10) for frequencies = (m, n) satisfying |m|, |n| ∼ J. This seemingly harmless O(J −2 ) correction to the unperturbed Laplacian eigenvalues is sharp and will be responsible for the restriction to s ∈ (0, 1) in the first part of Theorem 1.1 as we shall explain below. (3) Degree three Birkhoff normal form around the finite gap solution. This is done in [MP18], but we shall need more precise information from this normal form that will be crucial for Steps 5 and 6 below. This is done in 5 (see Theorem 5.2). (4) Partial normal form of degree four. We remove all degree four monomials which are not (too close to) resonant. This is done in Section 6, and leaves us with a Hamiltonian with (close to) resonant degree-four terms plus a higher-degree part which will be treated as a remainder in our construction. (5) We follow the paradigm set forth in [CKS + 10, GK15] to construct solutions to the truncated Hamiltonian consisting of the (close to) resonant degree-four terms isolated above, and then afterwards to the full Hamiltonian by an approximation argument. This construction will be done at frequencies = (m, n) such that |m|, |n| ∼ J with J very large, and for which the dynamics is effectively given by the following system of ODE We remark that the conditions of the set R( ) are essentially equivalent to saying that ( 1 , 2 , 3 , ) form a rectangle in Z 2 . Also note that by the asymptotics of Ω mentioned above in (1.10), one obtains that Γ = O(J −2 ) if all the frequencies involved are in R( ) and satisfy |m|, |n| ∼ J.
The idea now is to reduce this system into a finite dimensional system called the "Toy Model" which is tractable enough for us to construct a solution that cascades energy. An obstruction to this plan is presented by the presence of the oscillating factor e iΓt for which Γ is not zero (in contrast to [CKS + 10]) but rather O(J −2 ). The only way to proceed with this reduction is to approximate e iΓt ∼ 1 which is only possible provided J −2 T 1. The solution coming from the Toy Model is supported on a finite number of modes ∈ Z 2 \ S 0 satisfying |j| ∼ J, and the time it takes for the energy to diffuse across its modes is T ∼ O(ν −2 ) where ν is the characteristic size of the modes in 1 norm. Requiring the solution to be initially close in H s to the finite gap would necessitate that νJ s δ which gives that T δ J −2s , and hence the condition J −2 T 1 translates into the condition s < 1. This explains the restriction to s < 1 in the first part of Theorem 1.1. If we only require our solutions to be close to the finite gap in L 2 , then no such restriction on ν is needed, and hence there is no restriction on s beyond being s > 0 and s = 1, which is the second part of the theorem. This analysis is done in Section 7 and 8. In the former, we perform the reduction to the effective degree 4 Hamiltonian taking into account all the changes of variables performed in the previous sections; while in Section 8 we perform the above approximation argument allowing to shadow the Toy Model solution mentioned above with a solution of (2D-NLS) exhibiting the needed norm growth, thus completing the proof of Theorem 1.1. 2. Notation and functional setting 2.1. Notation. For a complex number z, it is often convenient to use the notation For any subset Γ ⊂ Z 2 , we denote by h s (Γ) the set of sequences (a ) ∈Γ with norm Our phase space will be obtained by an appropriate linearization around the finite gap solution with d frequencies/actions. For a finite set S 0 ⊂ Z × {0} of d elements, we consider the phase space X = (C d × T d ) × 1 (Z 2 \ S 0 ) × 1 (Z 2 \ S 0 ). The first part (C d × T d ) corresponds to the finite-gap sites in action angle coordinates, whereas 1 (Z 2 \ S 0 ) × 1 (Z 2 \ S 0 ) corresponds to the remaining orthogonal sites in frequency space. We shall often denote the 1 norm by · 1 . We shall denote variables on X by We shall use multi-index notation to write monomials like Y l and m α,β = a αāβ where l ∈ N d and α, β ∈ (N) Z 2 \S 0 . Often times, we will abuse notation, and simply use the notation a ∈ 1 to mean a = (a,ā) ∈ 1 (Z 2 \ S 0 ) × 1 (Z 2 \ S 0 ), and a 1 = a 1 (Z 2 \S 0 ) .
Definition 2.1. For a monomial of the form e i ·θ Y l m α,β , we define its degree to be 2|l| + |α| + |β| − 2, where the modulus of a multi-index is given by its 1 norm.
Regular Hamiltonians.
Given a Hamiltonian function F (Y, θ, a) on the phase space X , we associate to it the Hamiltonian vector field where we have used the standard complex notation to denote the Fréchet derivatives of F with respect to the variable a ∈ 1 .
We will often need to complexify the variable θ ∈ T d into the domain T d ρ := {θ ∈ C d : Re(θ) ∈ T d , |Im(θ)| ≤ ρ} and consider vector fields which are functions from , X (ā) ) which are analytic in Y, θ, a. Our vector fields will be defined on the domain On the vector field, we use as norm All Hamiltonians F considered in this article are analytic, real valued and can be expanded in Taylor Fourier series which are well defined and pointwise absolutely convergent Correspondingly we expand vector fields in Taylor Fourier series (again well defined and pointwise absolutely convergent) To a vector field we associate its majorant α,β, | e ρ | | Y l m α,β and require that this is an analytic map on D(r). Such a vector field is called majorant analytic. Since Hamiltonian functions are defined modulo constants, we give the following definition of the norm of F : Note that the norm | · | ρ,r controls the | · | ρ ,r whenever ρ < ρ, r < r.
Finally, we will also consider Hamiltonians F (λ; θ, a,ā) ≡ F (λ) depending on an external parameter λ ∈ O ⊂ R d . For those, we define the inhomogeneous Lipschitz norm: 2.3. Commutation rules. Given two Hamiltonians F and G, we define their Poisson bracket as {F, G} := dF (X G ); in coordinates Given α, β ∈ N Z 2 \S 0 we denote m α,β := a αāβ . To the monomial e i ·θ Y l m α,β with ∈ Z d , l ∈ N d we associate various numbers. We denote by We also associate to e i ·θ Y l m α,β the quantities π(α, β) = (π x , π y ) and π( ) defined by The above quantities are associated with the following mass M and momentum P = (P x , P y ) functionals given by Remark 2.2. An analytic hamiltonian function F (expanded as in (2.2)) commutes with the mass M and the momentum P if and only if the following selection rules on its coefficients hold: where η(α, β), η( ) are defined in (2.3) and π(α, β), π( ) are defined in (2.4).
Definition 2.3. We will denote by A ρ,r the set of all real-valued Hamiltonians of the form (2.2) with finite | · | ρ,r norm and which Poisson commute with M, P. Given a compact set O ⊂ R d , we denote by A O ρ,r the Banach space of Lipschitz maps O → A ρ,r with the norm | · | O ρ,r . From now on, all our Hamiltonians will belong to some set A ρ,r for some ρ, r > 0.
Adapted variables and Hamiltonian formulation
3.1. Fourier expansion and phase shift. Let us start by expanding u in Fourier coefficients Then, the Hamiltonian H 0 introduced in (1.1) can be written as where the means the sum over the quadruples is a constant of motion, we make a trivial phase shift and consider an equivalent Hamiltonian corresponding to the Hamilton equation Clearly the solutions of (3.2) differ from the solutions of (2D-NLS) only by a phase shift 3 . Then, 3.2. The Birkhoff map for the 1D cubic NLS. We devote this section to gathering some properties of the Birkhoff map for the integrable 1D NLS equation. These will be used to write the Hamiltonian (3.3) in a more convenient way. The main reference for this section is [Mas18b]. We shall denote by B s (r) the ball of radius r and center 0 in the topology of h s ≡ h s (Z).
Theorem 3.1. There exist r * > 0 and a symplectic, real analytic map Φ with dΦ(0) = I such that ∀s ≥ 0 one has the following The same estimate holds for Φ −1 − I or by replacing the space h s with the space 1 .
(ii) Moreover, if q ∈ h s for s ≥ 1, Φ introduces local Birkhoff coordinates for (NLS-1d) in h s as follows: the integrals of motion of (NLS-1d) are real analytic functions of the actions In order to show the equivalence we consider any solution u(x, t) of (3.2) and consider the invertible map Then a direct computation shows that v solves 2D-NLS.
have the form ∂Im , ∀m ∈ Z. Then one has the asymptotic expansion where m (I) is at least quadratic in I.
Proof. Item (i) is the main content of [Mas18b], where it is proved that the Birkhoff map is majorant analytic between some Fourier-Lebesgue spaces. Item (ii) is proved in [GK14a]. Item (iii) is Theorem 1.3 of [KST17].
To begin with, we start from the Hamiltonian in Fourier coordinates (3.3), and set We rewrite the Hamiltonian accordingly in increasing degree in a, obtaining Step 1: First we do the following change of coordinates, which amounts to introducing Birkhoff coordinates on the line Z × {0}. We set In those new coordinates, the Hamiltonian becomes where Step 2: Next, we go to action-angle coordinates only on the set In those coordinates, the Hamiltonian becomes (using (3.4)) Step 3: Now, we expand each line by itself. By Taylor expanding around the finite-gap torus corresponding to (Y, θ, a) = (0, θ, 0) we obtain, up to an additive constant, where we have used formula (3.4) in order to deduce that ∂ 2 h nls1d ∂Im∂In (0) = −δ m n where δ m n is the Kronecker delta.
Lemma 3.3 (Frequencies around the finite gap torus).
Denote Then, (1) The map (I 1 , . . . , I d ) → λ(I 1 , . . . , I d ) = ( λ i (I 1 , . . . , I d )) 1≤i≤d is a diffeomorphism from a small neighborhood of 0 of R d to a small neighborhood of 0 in R d . Indeed, λ =Identity +(quadratic in I). More precisely, there exists ε 1d > 0 such that if 0 < ε < ε 1d and . From now on, and to simplify notation, we will use the vector λ as a parameter as opposed to (I 1 , . . . , I d ), and we shall set the vector to denote the frequencies at the tangential sites in S 0 .
Proceeding as in [MP18], one can prove the following result:
Reducibility theory of the quadratic part
In this section, we review the reducibility of the quadratic part N + H (0) (see (3.14) and (3.15)) of the Hamiltonian, which is the main part of the work [MP18]. This will be a symplectic linear change of coordinates that transforms the quadratic part into an effectively diagonal, time independent expression. 4.1. Restriction to an invariant sublattice Z 2 N . For N ∈ N, we define the sublattice Z 2 N := Z×N Z and remark that it is invariant for the flow in the sense that the subspace is invariant for the original NLS dynamics and that of the Hamiltonian (3.13). From now on, we restrict our system to this invariant sublattice, with The reason for this restriction is that it simplifies (actually eliminates the need for) some genericity requirements that are needed for the work [MP18] as well as some of the normal forms that we will perform later. It will also be important to introduce the following two subsets of Z 2 N : Definition 4.1 (L−genericity). Given L ∈ N, we say that S 0 is L-generic if it satisfies the condition
4.2.
Admissible monomials and reducibility. The reducibility of the quadratic part of the Hamiltonian will introduce a change of variables that modifies the expression of the mass M and momentum P as follows. Let us set (4.4) These will be the expressions for the mass and momentum after the change of variables introduced in the following two theorems. Notice the absence of the terms 1≤i≤d from the expressions of M and P x above. These terms are absorbed in the new definition of the Y and a variables.
(2) For each λ ∈ C (0) and all r ∈ [0, r 0 ], ρ ∈ [ ρ 0 64 , ρ 0 ], there exists an invertible symplectic change of variables L (0) , that is well defined and majorant analytic from D(ρ/8, ζ 0 r) → D(ρ, r) (here ζ 0 > 0 is a constant depending only on ρ 0 , max(|m k | 2 )) and such that if a ∈ h 1 (Z 2 N \ S 0 ), then (3) The mass M and the momentum P (defined in (2.5)) in the new coordinates are given by (4) The map L (0) maps h 1 to itself and has the following form The same holds for the inverse map (L (0) ) −1 . (5) The linear maps L(λ; θ, ε) and Q(λ; θ, ε) are block diagonal in the y Fourier modes, in the sense that L = diag n∈N N (L n ) with each L n acting on the sequence {a (m,n) , a (m,−n) } m∈Z (and similarly for Q). Moreover, L 0 = Id and L n is of the form Id + S n where S n is a smoothing operator in the following sense: with the smoothing norm · ρ,−1 defined in (4.8) below where P {|m|≥K} is the orthogonal projection of a sequence (c m ) m∈Z onto the modes |m| ≥ K.
The above smoothing norm is defined as follows: Let S(λ; θ, ε) be an operator acting on sequences (c k ) k∈Z through its matrix elements S(λ; θ, ε) m,k . Let us denote by S(λ; , ε) m,k the θ-Fourier coefficients of S(λ; θ, ε) m,k . For ρ, ν > 0 we define S(λ; θ, ε) ρ,ν as: This definition is equivalent to the more general norm used in Definition 3.9 of [MP18]. Roughly speaking, the boundedness of this norm means that, in terms of its action on sequences, S maps k ν 1 → 1 . As observed in Remark 3.10 of [MP18], thanks to the conservation of momentum this also means that S maps 1 → k −ν 1 .
Remark 4.5. Note that in [MP18] Theorem 4.4 is proved in h s norm with s > 1, for instance in (4.8) the 1 norm is substituted with the h s one. However the proof only relies on momentum conservation and on the fact that h s is an algebra w.r.t. convolution, which holds true also for 1 . Hence the proof of our case is identical and we do not repeat it.
We are able to describe quite precisely the asymptotics of the frequencies Ω of Theorem 4.4.
Here the {µ i (λ)} 1≤i≤d are the roots of the polynomial Finally Theorems 4.4 and 4.6 follow from Theorems 5.1 and 5.3 of [MP18], together with the observation that the set C defined in Definition 2.3 of [MP18] We conclude this section with a series of remarks.
Remark 4.7. Notice that the {µ i (λ)} 1≤i≤d depend on the number d of tangential sites but not on the {m i } 1≤i≤d .
Remark 4.8. The asymptotic expansion (4.9) of the normal frequencies does not contain any constant term. The reason is that we canceled such a term when we subtracted the quantity M (u) 2 from the Hamiltonian at the very beginning (see the footnote in Section 3.1). Of course if we had not removed M (u) 2 , we would have had a constant correction to the frequencies, equal to q(ωt, ·) 2 L 2 . Since q(ωt, x) is a solution of (2D-NLS), it enjoys mass conservation, and thus q(ωt, ·) 2 L 2 = q(0, ·) 2 L 2 is independent of time.
Remark 4.9. In the new variables, the selection rules of Remark 2.2 become (with H expanded as in (2.2)):
Elimination of cubic terms
If we apply the change L (0) obtained in Theorem 4.4 to Hamiltonian (3.13), we obtain As a direct consequence of Lemma 3.4 and Theorem 4.4, estimates (3.19) hold also for K (j) , j = 1, 2 and K (≥3) . We now perform one step of Birkhoff normal form change of variables which cancels out K (1) completely. In order to define such a change of variables we need to impose third order Melnikov conditions, which hold true on a subset of the set C (0) of Theorem 4.4.
This lemma is proven in Appendix C of [MP18]. The main result of this section is the following theorem.
Lemma 5.3. For every ρ, r > 0 the following holds true: (i) Let h, f ∈ A O ρ,r . For any 0 < ρ < ρ and 0 < r < r, one has where υ := min 1 − r r , ρ − ρ . If υ −1 |f | O ρ,r < ζ sufficiently small then the (time-1 flow of the) Hamiltonian vector field X f defines a close to identity canonical change of variables T f such that |h for all 0 < ρ < ρ , 0 < r < r .
(ii) Let f, g ∈ A O ρ,r of minimal degree respectively d f and d g (see Definition 2.1) and define the function Then T i (f ; g) is of minimal degree d f i + d g and we have the bound Proof of Theorem 5.2. We look for L (1) as the time-one-flow of a Hamiltonian χ (1) . With N := ω · Y + ∈Z 2 We choose χ (1) to solve the homological equation { N , χ (1) } + K (1) = 0. Thus we set .
since the terms q fg m appearing in H (1) (and hence K (1) ) are O( √ ε). We come to the terms of line (5.8). First we use the homological equation { N , χ (1) } + K (1) = 0 to get that Therefore, we set Q (2) as in (5.3) and By Lemma 5.3, Q (≥3) has degree at least 3 and fulfills the quantitative estimate (5.4). To prove (iv), we use the fact that { M, χ (1) } = { P, χ (1) } = 0 follows since K (1) commutes with M and P, hence its monomials fulfill the selection rules of Remark 4.9. By the explicit formula for χ (1) above, it follows that the same selection rules hold for χ (1) , and consequently L (1) preserves M and P.
Analysis of the quartic part of the Hamiltonian
At this stage, we are left with the Hamiltonian Q given in (5.2). The aim of this section is to eliminate non-resonant terms from Q (2) . First note that Q (2) contains monomials which have one of the two following forms with |l| = 1.
In order to cancel out the terms quadratic in a by a Birkhoff Normal form procedure, we only need the second Melnikov conditions imposed in (4.6). In order to cancel out the quartic tems in a we need fourth Melnikov conditions, namely to control expressions of the form We start by defining the following set R 4 ⊂ A 4 (see Definition 4.2), R 4 := (j, , σ) : = 0 and 1 , 2 , 3 , 4 / ∈ S form a rectangle (6.2) = 0 and 1 , 2 / ∈ S , 3 , 4 ∈ S form a horizontal rectangle (even degenerate) = 0, 1 , 2 , 3 ∈ S , 4 ∈ S and |m 4 | < M 0 , where M 0 is a universal constant = 0, 1 , 2 , 3 , 4 ∈ S form a horizontal trapezoid where S is the set defined in (4.2). Here a trapezoid (or a rectangle) is said to be horizontal if two sides are parallel to the x-axis.
Proposition 6.1. Fix 0 < ε 2 < ε 1 sufficiently small and τ 2 > τ 1 sufficiently large. There exist positive γ 2 > 0, L 2 ≥ L 1 (with L 2 depending only on d), such that for all 0 < ε ≤ ε 2 and for an L 2 -generic choice of the set S 0 (in the sense of Definition 4.1), the set has positive measure and C (1) \ C (2) ε κ 2 2 for some κ 2 > 0 independent of ε 2 . The proof of the proposition, being quite technical, is postponed to Appendix A. An immediate consequence, following the same strategy as for the proof of Theorem 5.2, is the following result. We define Π R 4 as the projection of a function in D(ρ, r) onto the sum of monomials with indexes in R 4 . Abusing notation, we define analogously Π R 2 as the projection onto monomials e i ·θ Y l a σ 1 1 a σ 2 2 with |l| = 1 and ( 1 , 2 , , σ 1 , σ 2 ) ∈ R 2 . Figure 1. The black dots, are the points in S 0 . The two rectangles and the trapezoid correspond to cases 1,2,4 in R 4 . In order to represent case 3. we have highlighted three points in S. To each such triple we may associate at most one = 0 and one 4 ∈ Z, which form a resonance of type 3.
Construction of the toy model
Once we have performed (partial) Birkhoff normal form up to order 4, we can start applying the ideas developed in [CKS + 10] to Hamiltonian (6.3). Note that throughout this section ε > 0 is a fixed parameter. Namely, we do not use its smallness and we do not modify it.
We first perform the (time dependent) change of variables to rotating coordinates to the Hamiltonian (6.3), which leads to the corrected Hamiltonian We split this Hamiltonian as a suitable first order truncation G plus two remainders, where Q (2) Res and Q (≥3) are the Hamiltonians introduced in Theorem 6.2. For the rest of this section we focus our study on the truncated Hamiltonian G. Note that the remainder J 1 is not smaller than G. Nevertheless it will be smaller when evaluated on the particular solutions we consider. The term R is smaller than G for small data since it is the remainder of the normal form obtained in Theorem 6.2. Later in Section 8 we show that including the dismissed terms J 1 and R barely alters the dynamics of the solutions of G that we analyze. 7.1. The finite set Λ. We now start constructing special dynamics for the Hamiltonian G with the aim of treating the contributions of J 1 and R as remainder terms. Following [CKS + 10], we do not study the full dynamics of G but we restrict the dynamics to invariant subspaces. Indeed, we shall construct a set Λ ⊂ Z := (Z × N Z) \ (S 0 ∪ S ) for some large N , in such a way that it generates an invariant subspace (for the dynamics of G) given by Thus, we consider the following definition.
Definition 7.1 (Completeness). We say that a set Λ ⊂ Z is complete if U Λ is invariant under the dynamics of G.
Remark 7.2. It can be easily seen that if Λ is complete, U Λ is also invariant under the dynamics of G + J 1 .
We construct a complete set Λ ⊂ Z (see Definition 7.1) and we study the restriction on it of the dynamics of the Hamiltonian G in (7.3). Following [CKS + 10], we impose several conditions on Λ to obtain dynamics as simple as possible.
The set Λ is constructed in two steps. First we construct a preliminary set Λ 0 ⊂ Z 2 on which we impose numerous geometrical conditions. Later on we scale Λ 0 by a factor N to obtain Λ ⊂ The set Λ 0 is "essentially" the one described in [CKS + 10]. The crucial point in that paper is to choose carefully the modes so that each mode in Λ 0 only belongs to two rectangles with vertices in Λ 0 . This allows to simplify considerably the dynamics and makes it easier to analyze. Certainly, this requires imposing several conditions on Λ 0 . We add some extra conditions to adapt the set Λ 0 to the particular setting of the present paper.
• Property V Λ 0 (Faithfulness): Apart from nuclear families, Λ 0 contains no other rectangles. In fact, by the closure property I Λ 0 , this also means that it contains no right angled triangles other than those coming from vertices of nuclear families. • Property VI Λ 0 : There are no two elements in Λ 0 such that 1 ± 2 = 0. There are no three elements in Λ 0 such that 1 − 2 + 3 = 0. If four points in Λ 0 satisfy 1 − 2 + 3 − 4 = 0 then either the relation is trivial or such points form a family. • Property VII Λ 0 : There are no points in Λ 0 with one of the coordinates equal to zero i.e.
• Property VIII Λ 0 : There are no two points in Λ 0 which form a right angle with 0. Condition I Λ 0 is just a rephrasing of the completeness condition introduced in Definition 7.1. Properties II Λ 0 , III Λ 0 , IV Λ 0 , V Λ 0 correspond to being a family tree as stated in [CKS + 10].
The construction of such kind of sets was done first in [CKS + 10] (see also [GK15,GK17,Gua14,GHP16]) where the authors construct sets Λ satisfying Properties I Λ -V Λ and estimate (7.8). The proof of Theorem 7.3 follows the same lines as the ones in those papers. Indeed, Properties VI Λ -VIII Λ can be obtained through the same density argument. Finally, the estimate (7.7), even if it is not stated explicitly in [CKS + 10], it is an easy consequence of the proof in that paper (in [GK15, GK17, GHP16] a slightly weaker estimate is used).
Remark 7.4. Note that s ∈ (0, 1) implies that were are constructing a backward cascade orbit (energy is transferred from high to low modes). This means that the modes in each generation of Λ 0 are just switched oppositely Λ 0j ↔ Λ 0,g−j+1 compared to the ones constructed in [CKS + 10]. The second statement of Theorem 1.1 considers s > 1 and therefore a forward cascade orbit (energy transferred from low to high modes). For this result, we need a set Λ 0 of the same kind as that of [CKS + 10], which thus satisfies instead of estimate (7.5).
We now scale Λ 0 by a factor N satisfying (4.1) and we denote by Λ := N Λ 0 . Note that the listed properties I Λ 0 -VIII Λ 0 are invariant under scaling. Thus, if they are satisfied by Λ 0 , they are satisfied by Λ too.
Lemma 7.5. There exists a set Λ satisfying all statements of Theorem 7.3 (with a different f (g) satisfying (7.6)) and also the following additional properties.
First we note that m 1 m 2 + n 1 n 2 = 0 by property VIII Λ 0 , since m = 0 cannot be a solution. Now consider the discriminant ∆ = (m 1 + m 2 ) 2 − 4(m 1 m 2 + n 1 n 2 ). If ∆ < 0, then no right angle is possible. If ∆ = 0, then clearly |m| ≥ 1/2, since once again m = 0 is not a solution. Finally let ∆ > 0. Then Denoting by γ := 4(m 1 m 2 +n 1 n 2 ) (m 1 +m 2 ) 2 , the condition ∆ > 0 implies that −∞ < γ < 1. Splitting in two cases: |γ| ≤ 1 and γ < −1 one can easily obtain that either way m satisfies (7.9). Now it only remains to scale the set Λ by a factor (f (g)) 4 . Then, taking as new f (g), f (g) := (f (g)) 5 , the obtained set Λ satisfies all statements of Theorem 7.3 and also the statements of Lemma 7.5. 7.2. The truncated Hamiltonian on the finite set Λ and the [CKS + 10] toy model. We use the properties of the set Λ given by Theorem 7.3 and Lemma 7.5 to compute the restriction of the Hamiltonian G in (7.3) to the invariant subset U Λ (see (7.4)).
Lemma 7.6. Consider the set Λ ⊂ N Z × N Z obtained in Theorem 7.3. Then, the set is invariant under the flow associated to the Hamiltonian G. Moreover, G restricted to M Λ can be written as and the remainder J 2 satisfies (7.12) |J 2 | ρ,r r 2 (f (g)) − 4 5 .
Proof. First we note that, since Y = 0 on M Λ , Res is the Hamiltonian defined in Theorem 6.2. We start by analyzing the Hamiltonian Q (2) introduced in Theorem 5.2, which is defined as We analyze each term. Here it plays a crucial role that Λ ⊂ N Z × N Z with N = f (g) 4/5 . In order to estimate K (2) , defined in (5.1), we recall that Λ does not have any mode in the x-axis and therefore the original quartic Hamiltonian has not been modified by the Birkhoff map (1.6) (this is evident from the formula for H (2) in (3.17)). Thus, it is enough to analyze how the quartic Hamiltonian has been modified by the linear change L (0) analyzed in Theorems 4.4 and 4.6. Using the smoothing property of the change of coordinates L (0) given in Statement 5 of Theorem 4.4, one obtains Now we deal with the term {K (1) , χ (1) }. Since we only need to analyze Π R 4 {K (1) , χ (1) } M Λ , we only need to consider monomials in K (1) and in χ (1) which have at least two indexes in Λ. We represent this by setting #Λ≥2 , where #Λ ≥ 2 means that we restrict to those monomials which have at least two indexes in Λ. We then have We estimate the size of χ #Λ≥2 has coefficients (7.13) χ (1) We first estimate the tails (in ) of χ (1) and then we analyze the finite number of cases left. For the tails, it is enough to use Theorem 5.2 to deduce the following estimate for any ρ ≤ ρ 1 /2, where ρ 1 is the constant introduced in that theorem, We restrict our attention to monomials with | | ≤ 4 √ N . We take 2 , 3 ∈ Λ and we consider different cases depending on 1 and the properties of the monomial. In each case we show that the denominator of (7.13) is larger than N . Case 1. Suppose that 1 / ∈ S . The selection rules are (according to Remark 4.9) η( ) + σ 1 + σ 2 + σ 3 = 0 , m · + σ 1 m 1 + σ 2 m 2 + σ 3 m 3 = 0 , σ 1 n 1 + σ 2 n 2 + σ 3 n 3 = 0 and the leading term in the denominator of (7.13) is where m 2 = (m 2 1 , . . . , m 2 d ). We consider the following subcases: A1 σ 3 = σ 1 = +1, σ 2 = −1. In this case 1 − 2 + 3 − v = 0, where v := (− m · , 0). We rewrite (7.14) as Assume first 2 = 3 . Since the set Λ satisfies Lemma 7.5 1. and | m · | 4 √ N f (g) 1/5 , we can ensure that 2 and 3 do not form a right angle with v, thus Actually by the second statement of Lemma 7.5, 3 − 2 ∈ N Z 2 and hence, using also | | ≤ 4 √ N , Now it remains the case 2 = 3 . Such monomials cannot exist in H (1) in (3.16) since the monomials with two equal modes have been removed in (3.3) (it does not support degenerate rectangles). Naturally a degenerate rectangle may appear after we apply the change L (0) introduced in Theorem 4.4. Nevertheless, the map L (0) is identity plus smoothing (see statement 5 of that theorem), which leads to the needed N −1 factor. B1 σ 3 = σ 2 = +1, σ 1 = −1. Now the selection rule reads − 1 + 2 + 3 − v = 0, with again v = (− m · , 0). We rewrite (7.14) as By the first statement of Lemma 7.5, v − 2 , v − 3 = 0. By Property VIII Λ and the second statement of Lemma 7.5, one has |( 2 , 3 )| ≥ N 2 and estimate (7.7) implies | 2 |, | 3 | ≤ N 3/2 .
and one concludes as in A1.
In conclusion we have proved that Item (i) of Lemma 5.3, jointly with estimate (7.16), implies that, for ρ ∈ (0, ρ/2] and r ∈ (0, r/2] This completes the proof of Lemma 7.6. The Hamiltonian G 0 in (7.11) is the Hamiltonian that the I-team derived to construct their toy model. A posteriori we will check that the remainder J 2 plays a small role in our analysis.
The properties of Λ imply that the equation associated to G 0 reads (7.17) iβ = −β |β | 2 + 2β child 1 β child 2 β spouse + 2β parent 1 β parent 2 β sibling for each ∈ Λ. In the first and last generations, the parents and children are set to zero respectively. Moreover, the particular form of this equation implies the following corollary.
Corollary 7.7 ([CKS + 10]). Consider the subspace where all the members of a generation take the same value. Then, U Λ is invariant under the flow associated to the Hamiltonian G 0 . Therefore, equation (7.17) restricted to U Λ becomes The dimension of U Λ is 2g, where g is the number of generations. In the papers [CKS + 10] and [GK15], the authors construct certain orbits of the toy model (7.18) which shift its mass from being localized at b 3 to being localized at b g−1 . These orbits will lead to orbits of the original equation (2D-NLS) undergoing growth of Sobolev norms.
Theorem 7.8 ( [GK15]). Fix a large γ 1. Then for any large enough g and µ = e −γg , there exists an orbit of system (7.18), κ > 0 (independent of γ and g) and T 0 > 0 such that Moreover, there exists a constant K > 0 independent of g such that T 0 satisfies This theorem is proven in [CKS + 10] without time estimates. The time estimates were obtained in [GK15].
The approximation argument
In Sections 4, 5 and 6 we have applied several transformations and in Sections 6 and 7 we have removed certain small remainders. This has allowed us to derive a simple equation, called toy model in [CKS + 10]; then, in Section 7, we have analyzed some special orbits of this system. The last step of the proof of Theorem 1.1 is to show that when incorporating back the removed remainders (J 1 and R in (7.3) and J 2 in (7.10)) and undoing the changes of coordinates performed in Theorems 4.4 and 5.2, in Proposition 6.2 and in (7.1), the toy model orbit obtained in Theorem 7.8 leads to a solution of the original equation (2D-NLS) undergoing growth of Sobolev norms. Now we analyze each remainder and each change of coordinates. From the orbit obtained in Theorem 7.8 and using (7.19) one can obtain an orbit of Hamiltonian (7.11). Moreover, both the equation of Hamiltonian (7.11) and (7.18) are invariant under the scaling By Theorem 7.8, the time spent by the solution b ν (t) is where T 0 is the time obtained in Theorem 7.8. Now we prove that one can construct a solution of Hamiltonian (7.2) "close" to the orbit β ν of Hamiltonian (7.11) defined as where b(t) is the orbit given by Theorem 7.8. Note that this implies incorporating the remainders in (7.3) and (7.10). We take a large ν so that (8.3) is small. In the original coordinates this will correspond to solutions close to the finite gap solution. Taking J = J 1 + J 2 (see (7.3) and (7.10)), the equations for β and Y associated to Hamiltonian (7.2) can be written as Now we obtain estimates of the closeness of the orbit of the toy model obtained in Theorem 7.8 and orbits of Hamiltonian (7.2).
The proof of this theorem is deferred to Section 8.1. Note that the change to rotating coordinates in (7.1) does not alter the 1 norm and therefore a similar result as this theorem can be stated for orbits of Hamiltonian (6.3) (modulus adding the rotating phase).
Proof of Theorem 1.1. We use Theorem 8.1 to obtain a solution of Hamiltonian (3.13) undergoing growth of Sobolev norms. We consider the solution (Y * (t), θ * (t), a * (t)) of this Hamiltonian with initial condition Y * = 0 for an arbitrary choice of θ 0 ∈ T d . We need to prove that Theorem 8.1 applies to this solution. To this end, we perform the changes of coordinates given in Theorems 4.4, 5.2 and 6.2, keeping track of the 1 norm.
For L (j) , j = 1, 2, Theorems 5.2 and 6.2 imply the following. Consider (Y, θ, a) ∈ D(ρ, r) and define π a (Y, θ, a) := a. Then, we have This estimate is not true for the change of coordinates L (0) given in Theorem 4.4. Nevertheless, this change is smoothing (see Statement 5 of Theorem 4.4). This implies that if all ∈ supp{a} satisfy | | ≥ J then Thanks to Theorem 7.3 (more precisely (7.7)), we can apply this estimate to (8.6) with J = Cf (g).
Using the fact that a * 1 ν −1 g2 g and the condition on ν in (8.5), one can check Therefore, we can conclude We define ( Y * , θ * , a * ) the image of the point (8.6) under the composition of these three changes. We apply Theorem 8.1 to the solution of (7.2) with this initial condition. Note that Theorem 8.1 is stated in rotating coordinates (see (7.1)). Nevertheless, since this change is the identity on the initial conditions, one does not need to make any further modification. Moreover, the change (7.1) leaves invariant both the 1 and Sobolev norms. We show that such solution ( Y * (t), θ * (t), a * (t)) expressed in the original coordinates satisfies the desired growth of Sobolev norms. Define To estimate the initial Sobolev norm of the solution (Y * (t), θ * (t), a * (t)), we first prove that The initial condition of the considered orbit given in (8.6) has support Λ (recall that Y = 0). Therefore, Then, taking into account Theorem 7.8, From Theorem 7.3 we know that for i = 3, Therefore, to bound these terms we use the definition of µ from Theorem 7.8. Taking γ > 1 2κ and taking g large enough, we have that a * (0) 2 h s ≤ 2ν −2 S 3 . To control the initial Sobolev norm, we need that 2ν −2 S 3 ≤ δ 2 . To this end, we need to use the estimates for ν given in Theorem 8.1, and the estimates for | | ∈ Λ and for f (g) given in Theorem 7.3. Then, if we choose ν = (f (g)) 1−σ , we have Note that Theorem 8.1 is valid for any fixed small σ > 0. Thus, provided s < 1, we can choose 0 < σ < 1 − s and take g large enough, so that we obtain an arbitrarily small initial Sobolev norm.
Remark 8.2. In case we ask only the 2 norm of a * (0) to be small we can drop the condition s < 1. Indeed a * (0) 2 ν −1 2 g g which can be made arbitrary small by simply taking g large enough (and ν as in (8.5)). Now we estimate the final Sobolev norm. First we bound a * (T ) h s in terms of S g−1 . Indeed, Thus, it is enough to obtain a lower bound for a * (T ) for ∈ Λ g−1 . To obtain this estimate we need to express a * in normal form coordinates and use Theorem 8.1. We split |a * (T )| as follows. Define ( Y * (t), θ * (t), a * (t)) the image of the orbit with initial condition (8.6) under the changes of variables in Theorems 4.4 and 5.2, Proposition 6.2 and in (7.1). Then, The first term, by Theorem 7.8, satisfies |β ν (T )| ≥ ν −1 /2. For the second one, using Theorem 8.1, we have a * (T ) − β ν (T )e iΩ (λ,ε)T ≤ ν −1−σ . Finally, taking into account the estimates (8.7) and (8.8), the third one can be bounded as Now, by Theorem 8.1 and Theorem 7.3 (more precisely the fact that | | f (g) for ∈ Λ), Thus, by (8.9), we can conclude that which, by Theorem 7.3, implies Thus, taking g large enough we obtain growth by a factor of K/δ. The time estimates can be easily deduced by (8.2), (8.5), (7.6) and Theorem 7.8, which concludes the proof of the first statement of Theorem 1.1. For the proof of the second statement of Theorem 1.1 it is enough to point out that the condition s < 1 has only been used in imposing that the initial Sobolev norm is small. The estimate for the 2 norm can be obtained as explained in Remark 8.2. 8.1. Proof of Theorem 8.1. To prove Theorem 8.1, we define We use the equations in (8.4) to deduce an equation for ξ. It can be written as (8.11) We analyze the equations for ξ in (8.10) and Y in (8.4).
Proof. Proceeding as forξ, we write the equation forẎ as We claim that X 1 (t) and X 1 (t) are identically zero. Then, proceeding as in the proof of Lemma 8.3, one can bound each term and complete the proof of Lemma 8.4. To explain the absence of linear terms, consider first ∂ βθ J (0, θ, β ν ). It contains two types of monomials: those coming from R 2 (see (4.5)) which however do not depend on θ, and those coming from R 4 (see (6.2)). But also these last monomials do not depend on θ once they are restricted on the set Λ (indeed the only monomials of R 4 which are θ dependent are those of the third line of (6.2), which are supported outside Λ). Therefore ∂ βθ J (0, θ, β ν ) ≡ 0 (and so ∂ βθ J (0, θ, β ν ) and ∂ Yθ J (0, θ, β ν )).
Since we are assuming (8.5) and we can take A large enough (see Theorem 7.3), we obtain that for t ∈ [0, T ], provided g is sufficiently large which implies that T ≤ T * . That is, the bootstrap assumption was valid. This completes the proof.
Appendix A. Proof of Proposition 6.1 We split the proof in several steps. We first perform an algebraic analysis of the nonresonant monomials.
A.1. Analysis of monomials of the form e iθ· a σ 1 1 a σ 2 2 a σ 3 3 a σ 4 4 . We analyze the small divisors (6.1) related to these monomials. Taking advantage of the asymptotics of the eigenvalues given in Theorem 4.6, we consider a "good" first order approximation of the small divisor given by Note that this is an affine function in ε and therefore it can be written as . We say that a monomial is Birkhoff non-resonant if, for any ε > 0, this expression is not 0 as a function of λ.
Lemma A.1. Assume that the m k 's do not solve any of the linear equations defined in (A.5) (this determines L 2 in the statement of Theorem 6.1). Consider a monomial of the form e iθ· a σ 1 j 1 a σ 2 j 2 a σ 3 j 3 a σ 4 j 4 with (j, , σ) ∈ A 4 . If (j, , σ) ∈ R 4 , then it is Birkhoff non resonant.
If = 0, we have r σ r n r = r σ r n 2 r = 0. Then {n 1 , n 3 } = {n 2 , n 4 }. One verifies easily that in such case the sites r 's form a horizontal trapezoid (that could be even degenerate).
A.2. Analysis of monomials of the form e iθ· Y l a σ 1 j 1 a σ 2 j 2 . In this case, since the factor Y l does not affect the Poisson brackets, admissible monomials (in the sense of Definition 4.2) are non-resonant provided they do not belong to the set R 2 introduced in Definition 4.3.
Lemma A.2. Any monomial of the form e iθ· a σ 1 1 a σ 2 2 Y i with (j, , σ) / ∈ R 2 admissible in the sense of Definition 4.2 is Birkhoff non-resonant.
Proof. We skip the proof since it is analogous to Lemma 6.1 of [MP18].
We can now prove the following result.
for some constant γ depending on ε .
Proof of Proposition A.5. If the integer K is sufficiently large, namely |K| ≥ 4 | | max So from now on we restrict ourselves to the case |K| ≤ 4 | | max 1≤i≤d (m 2 i ). We will repeatedly use the following result, which is an easy variant of Lemma 5 of [Pös96].
The proof relies on the fact that all the functions appearing in (A.10) are Lipschitz in λ, for full details see e.g. Lemma C.2 of [MP18]. Now, let us fix (A.11) γ = ε M 0 100 .
Estimate (A.15) gives immediately which is what we claimed.
We can finally prove Proposition 6.1. | 15,743 | 2018-10-08T00:00:00.000 | [
"Mathematics"
] |
Nickel Extraction from Olivine : Effect of Carbonation Pre-Treatment †
In this work, we explore a novel mineral processing approach using carbon dioxide to promote mineral alterations that lead to improved extractability of nickel from olivine ((Mg,Fe)2SiO4). The precept is that by altering the morphology and the mineralogy of the ore via mineral carbonation, the comminution requirements and the acid consumption during hydrometallurgical processing can be reduced. Furthermore, carbonation pre-treatment can lead to mineral liberation and concentration of metals in physically separable phases. In a first processing step, olivine is fully carbonated at high CO2 partial pressures (35 bar) and optimal temperature (200 °C) with the addition of pH buffering agents. This leads to a powdery product containing high carbonate content. The main products of the carbonation OPEN ACCESS
Introduction
In the last few decades, traditional nickel resources have become scarcer because of ramping global production and growing demand [1].Nickel is more abundantly present in the Earth's crust than copper and lead, but the availability of high-grade ores is rather limited [2].The current strong demand for nickel is expected to carry into the future, and the scarcity of high-grade recoverable ores will inevitably call for the exploitation of low-grade ores as a source for nickel.Therefore, increasingly more research is underway investigating the feasibility of recovering nickel from low-grade ores [3][4][5].Magnesium-iron silicates, minerals that are widely distributed on the Earth's crust and that contain relatively dilute, yet considerable amounts, of nickel arise as one possible, yet challenging, opportunity.
The main objective of this study was to investigate the possibility of processing olivine ((Mg,Fe)2SiO4) for the production of nickel.Olivine is solid-solution of iron-and magnesium-silicates containing relatively small amounts of nickel and chromium, and is the precursor of weathered lateritic ores.Olivine (also known as dunite, an ore containing at least 90% olivine [6]) is abundantly present in the Earth's upper mantle [7], and intrudes in some locations into the Earth's crust, most notably in the Fjordane Complex of Norway, which contains the largest ore body (approx.two billion metric tons) under commercial exploitation [6].The use of olivine as a nickel source could, thus, possibly solve the scarcity problem of high-grade ores.Due to its small nickel content, conventional extraction and recovery methods (e.g., high pressure acid leaching, agitation leaching or heap leaching [3]) are not viable, as reagent and processing costs become too high [8].In this work a novel approach was investigated, whereby the mineral is first carbonated in a pre-treatment step before the nickel is extracted by leaching.Carbonation may allow for an easier recovery due to a better accessibility of the nickel during leaching as a result of morphological and mineralogical changes.Recently, considerable research has focused on the carbonation of olivine and other alkaline silicates as an option for sustainable carbon dioxide sequestration [9,10].In the present work, however, CO2 is utilized primarily as a processing agent; such an approach can be termed "carbon utilization" (more specifically turning CO2 from a waste into an acid).
Nickel is able to replace magnesium in olivine's magnesium silicate matrix forming a magnesium-nickel silicate (Mg,Ni)2SiO4 called liebenbergite or nickel-olivine.This replacement is possible due to certain similarities of nickel and magnesium in the silicate structure.Their ionic radii are similar (Mg = 0.66 Å; Ni = 0.69 Å), their valences are the same (Mg 2+ , Ni 2+ ), and they both belong to the same orthorhombic system [11].The amount of nickel in olivine is variable and depends on the ore's origin, varying between <0.1 and 0.5 wt.% Ni [12].These concentrations are rather low compared to the grade of nickel deposits presently used in industrial processes, which ranges from 0.7 to 2.7 wt. % Ni [11].
Olivine is highly susceptible to weathering processes and alterations by hydrothermal fluids.These alteration reactions involve hydration, silicification, oxidation, and carbonation; common alteration products are serpentine, chlorite, amphibole, carbonates, iron oxides, and talc [11].The fact that olivine is highly susceptible to weathering also makes it suitable for intensified carbonation.Due to this suitability and its high abundance, olivine has been the subject of intensive research for carbon dioxide sequestration using mineral carbonation, whereby the formation of stable magnesium carbonates act as carbon sinks [13][14][15][16][17].
Carbonating olivine converts the silicates (mainly forsterite (Mg2SiO4) and fayalite (Fe2SiO4) [6]) into carbonates and silica.This reaction is exothermic and is, thus, thermodynamically favored.The reaction mechanism contains three main steps: the dissolution of CO2 in the aqueous solution to form carbonic acid; the dissolution of magnesium in the aqueous solution, and the precipitation of magnesium carbonate.The overall reaction schemes for carbonation of forsterite and fayalite are given in Equations ( 1) and (2): (1) The formed magnesite (MgCO3) and siderite (FeCO3), as well as the residual silica (SiO2), are thermodynamically stable products that are environmentally friendly.These reaction products can, thus, be readily disposed of in the environment or reutilized as commercial products.
The focus of this work was to investigate the leaching behavior of carbonated olivine.When carbonated olivine is leached, the acid will have to dissolve a carbonate structure instead of a silicate structure.These reactions can be seen in Equations ( 3) and (4): (3) Through these alterations of the olivine mineral, which may increase specific surface area, nickel might become more accessible to leaching.Secondly, the C-O (360 kJ/mol) bonds are weaker than their Si-O (466 kJ/mol) counterparts [18], which can lead to an easier leaching of the carbonated olivine compared to natural olivine.
This paper reports the results of a series of tests that aimed to: (i) find the optimal carbonation conditions that maximize the desired mineral and morphological alterations; (ii) characterize the carbonated products with a focus on the fate of nickel; (iii) compare the leaching performance of an array of organic and inorganic acids, and assess the efficiency and extent of nickel extraction from carbonated olivine compared to natural olivine; and (iv) provide the proof-of-concept of using carbonation as a pre-treatment step for nickel recovery from low-grade silicate ores and elucidate directions for future research.
Olivine Characterization
Olivine was supplied by Eurogrit B.V. (a subsidiary of Sibelco, Antwerp, Belgium) and originated from Åheim, Norway.The material obtained, classified as GL30, had the following properties described by the supplier: sub-angular to angular shape, pale green color, hardness of 6.5 to 7 Mohs, specific density of 3.25 kg/dm 3 , and a grain size between 0.063 and 0.125 mm.The olivine was milled before any further use to increase the reactivity of the material to carbonation and leaching by increasing the specific surface area.The milling was performed using a centrifugal mill (Retsch ZM100, Haan, Germany) operated at 1400 rpm with an 80 μm sieve mesh.After milling, a total of 86 vol.% of the material had a particle size below 80 μm, and the average mean diameter D{4,3}, determined by Laser Diffraction Analysis (LDA, Malvern Mastersizer 3000, Worcestershire, UK), was equal to 34.8 μm.The particle size distribution is shown in Figure S1.The morphology of the particles was imaged by Scanning Electron Microscopy (SEM, Philips XL30 FEG, Eindhoven, The Netherlands), and is shown in Figure S2.For SEM analysis, particles were gold-coated and mounted on conductive carbon tape.
The material was extensively analyzed to obtain the chemical and mineralogical composition.Table 1 presents the elemental composition results obtained by digestion followed by Inductively-Coupled Plasma Mass Spectrometry (ICP-MS, Thermo Electron X Series, Waltham, MA, USA) analysis; Co, Mg, Mn, and Si content were determined by Wavelength Dispersive X-ray Fluorescence (XRF, Panalytical PW2400, Almelo, The Netherlands).The mineralogy of the fresh olivine was analyzed by powder X-ray Diffraction (XRD, Philips PW1830, Almelo, The Netherlands) with quantification by Rietveld refinement; the diffractogram is shown in Figure 1.As can be expected, the material contains mostly forsterite (84.5 wt.%; in fact ferroan-forsterite, which is forsterite with iron substitution) as well as a smaller amount of fayalite (2.5 wt.%).Other minor components present include some hydrated silicates (clinochlore ((Mg,Fe 2+ )5Al(Si3Al)O10(OH)8, 2.1 wt.%), lizardite (Mg3Si2O5(OH)4, 2.7 wt. %), talc (Mg3Si4O10(OH)2, 0.5 wt.%), and tirodite (Na(Na,Mn 2+ )(Mg4,Fe 2+ )Si8O22(OH)2, 3.1 wt.%)), carbonates (magnesian calcite (Ca0.85Mg0.15CO3,1.0 wt.%), and magnesite (MgCO3, 0.2 wt.%)), magnesium (hydr)oxides (periclase (MgO, 0.1 wt.%), and brucite (Mg(OH)2, 0.7 wt.%)), chromite (FeCr2O4, 1.1 wt.%) and quartz (SiO2, 0.2 wt.%).The olivine was also analyzed with a Jeol Hyperprobe JXA-8530F Field Emission Gun Electron Probe Micro-Analyzer (FEG EPMA, Akishima, Japan), equipped with five wavelength dispersive spectrometers, to map the concentration of each element within the particles.The EPMA was capable of detecting elements down to a concentration of 100 ppm and map them down to a spatial resolution of 0.1 μm.A small representative surface area (80 × 100 μm) of a polished sample (pelletized and embedded in resin) was fully mapped to give the distribution of elements in the material.The EPMA was operated at 15 kV, a probe current of 100 nA, and dwell time of 30 ms per 0.3 × 0.3 μm pixel.Both peak and background were measured under these conditions.Nickel was found to be dispersed in the material, as can be seen in Figure 2.This would indicate that it replaces magnesium in the magnesium silicate structure to form a magnesium-nickel silicate ((Mg,Ni)2SiO4).There are also small particles that are highly concentrated (shown as white) in nickel, chromium and iron.Figure S3 shows the elemental distribution of other elements (Al, C, Ca, Co, Cr, Fe, Mg, Mn, Si). Figure S4 helps to visualize that nickel-rich regions exist; in some, nickel is associated with iron (cyan color in composite map), and in some nickel is not associated with iron nor chromium (green color in composite map).In the case of chromium, it is present mainly in select regions, and those regions are highly concentrated in iron as well (suggestive of chromite), but not in nickel.
Carbonation
Carbonation experiments were conducted in a Büchi Ecoclave continuously-stirred tank reactor (CSTR, Uster, Switzerland).The reactor has a volume of 1.1 liters and is capable of withstanding pressures up to 60 bar and temperatures up to 250 °C.Carbon dioxide gas (99.5% purity) was continuously injected from a compressed cylinder.It should be noted that for industrial implementation, gases with lower CO2 purity (e.g., combustion flue gases) may be used for mineral carbonation so long as the desired CO2 partial pressure is met by gas compression.All experiments in this study were conducted with 35 bar CO2 partial pressure; steam made up the balance pressure up to 55 bar total, depending on the temperature.The reactor was equipped with a Rushton turbine stirrer and a baffle to ensure adequate mixing of the reactor contents; 1000 rpm stirring rate was used.The liquid volume in the reactor was kept constant at 800 mL.
The experimental parameters varied are detailed in Table 2; these were temperature, solids loading, residence time, and additive concentrations.Increasing the temperature influences the equilibrium constants.An increase in the dissociation constants of carbonic acid leads to a decrease in pH (higher acidity) and an increase in both bicarbonate and carbonate ion concentrations; this enhances the dissolution of magnesium as well as the precipitation of magnesium carbonate (under suitable pH, i.e., not excessively acidic).These effects are counteracted by an increase of Henry's constant, which leads to a lower solubility of CO2 in the solution.Lastly, a decrease of the solubility product of magnesium carbonate stimulates its precipitation.These opposing effects indicate that an optimal temperature exists.Increasing the solids loading in the reaction process has been reported to increase the extent of carbonation due to an increase in particle-particle collisions that remove passivating layers and increase the surface area available for carbonation [19].The use of additives aims at enhancing the dissolution of magnesium, the dissociation of carbonic acid, and/or the precipitation of magnesium carbonate.Sodium chloride (NaCl) and sodium bicarbonate (NaHCO3) were tested as carbonation enhancing additives as suggested by Chen et al. [20].
After completion, the reacted slurry content was filtered to recover the liquid and solid portions; solids were dried at 105 °C for 24 h.Most experiments were conducted in duplicate, and data presented are average values.
Leaching
Leaching experiments were conducted by atmospheric agitation methodology.A certain amount of olivine (typically two grams), either fresh or carbonated, was added to plastic flasks together with 100 mL of a solution containing various concentrations of a certain acid.The flasks were shaken at 25 °C for the desired reaction time (typically 24 h).When finished, solids and liquids were separated using a centrifuge.The supernatant liquids and the dried solids were further analyzed.Leaching experiments were conducted in duplicate, and data presented are average values.A low leaching temperature was used as this study's main aim was to investigate mineralogical effects on chemical equilibrium rather than leaching kinetics.Low temperature leaching (i.e., ambient) is typical in heap leaching operations [3].
Analytical Methods
The concentrations of soluble elements in aqueous solutions were determined by ICP-MS.The mineralogical, morphological, and microstructural properties of carbonated solids were characterized by XRD, SEM, nitrogen adsorption (BET, Micromeritics TriStar 3000, Norcross, GA, USA), LDA, and EPMA.The CO2 uptake of the carbonated solids was determined by thermogravimetric analysis (TGA, TA Instruments Q500, New Castle, DE, USA), conducted in duplicate.The weight loss between 250 and 900 °C was attributed to the decomposition of carbonates (XRD results suggest minimal formation of hydration products that could interfere in this range, and there is good agreement between quantitative XRD and TGA determination of magnesite content (Figure S6)).The maximal theoretical CO2 uptake ( CO 2 ,max ) of natural olivine, 0.521 g, CO2/g, olivine, was estimated based on its magnesium and iron content.Extent of carbonation (ξ) is expressed as the percentage ratio of actual to maximal uptake values: ξ = CO 2 ,actual CO 2 ,max ⁄ .
Influence of Carbonation Parameters
The dependencies of the temperature, residence time, solids loading, and NaCl and NaHCO3 concentrations on the carbonation extent are given in Figure 3.
The influence of the reactor temperature on the carbonation is shown in Figure 3a.The extent of carbonation increases with increasing temperature between 150 °C and 200 °C, both for 4 h and 24 h residence times.This is due to the increase in both acid dissociation constants of carbonic acid, which contributes to magnesium silicate dissolution, as well as the decrease in the solubility product of magnesium carbonate, which promotes magnesium carbonate precipitation.These two effects are mutually beneficial, since as more magnesium precipitates as carbonate, more magnesium can leach from the silicate, propagating the reaction.Increasing the temperature also increases Henry's constant for the dissolution of CO2 in the water, which can have a negative impact on the carbonation [20], but this was not observed here.O'Connor et al. [23] found that these counteracting temperature effects lead to an optimal olivine carbonation temperature of 185 °C.In our experiments, no maximum was reached between 150 °C and 200 °C.The difference in results can be explained because O'Connor et al. [23] use other parameter values in their experiments; most importantly, they operated at CO2 pressures of 150 bar, whereas our experiments operated at 35 bar.At higher CO2 pressure, the solubility limit of CO2 will be reached at a lower temperature.
The extent of carbonation increases linearly with an increase in residence time, as can be seen in Figure 3b.There seems to be an initially fast carbonation rate due to parts of the olivine that are more easily carbonated (fines, particle surfaces, and more reactive minerals (e.g., periclase, brucite)), after which the carbonation continues linearly with time.Due to this linear increase with time, there is either no limitation by the formation of a passivating layer, or the passivating layer is broken down sufficiently by particle collisions.This was confirmed by SEM analysis of partially carbonated olivine.Figure S7 shows that the residual silica and precipitated magnesite form separate particles, rather than forming a passivating layer around unreacted olivine.More discussion on this is presented in the Section 3.2.The residence times used in this study are relatively long, which was necessary because of the relatively low CO2 partial pressure utilized (35 bar), as restricted by the reactor's pressure rating.Higher CO2 partial pressures should accelerate the processes, from the order of days to the order of hours, as indicated by other studies conducted at higher pressures (e.g., 139 atm [14]) and modeling work [13].S1 provides detailed data values and statistics on replicates.
As can be seen from Figure 3c, increasing the solids loading greatly enhances carbonation.A solids loading increase from 50 g (5.9 wt.%) to 200 g (20 wt.%) almost doubles the carbonation extent for both the 24 h and 72 h experiments at 200 °C with the addition of 1 M NaCl.These results confirm previous results from Bé arat et al. [19] who also noticed a substantial increase in carbonation, proportional to (wt.%) 1/3 , when increasing the solids loading from 5 to 20 wt.%.The higher amount of solids in the reactor will lead to more collisions of the olivine particles, promoting the removal of passivating layers and the breakage of unreacted particles.Julcour et al. [16] emphasized the importance of attrition/exfoliation, conducting olivine carbonation reactor in a stirred bead mill and achieving 80% conversion in 24 h at 180 °C, 800 rpm and 20 bar CO2.
The addition of NaCl does not seem to enhance the extent of carbonation.As can be seen in Figure 3d, using one or two molar solutions of NaCl has a very limited impact on the extent of carbonation.O'Connor et al. [23] proposed the addition of both 1 M NaCl and 0.64 M NaHCO3, although they also remarked that the addition of sodium bicarbonate has a much larger impact than sodium chloride on the carbonation extent.It can be concluded that, in view of minimizing processing cost or complexity, NaCl addition can be omitted.However, ionic strength can play a role in surface charges and particle aggregation, and should, thus, be investigated in view of product properties such as particle size distribution, specific surface area, and mineral separation.An economical source of saline solution, if desired, would be seawater.
The addition of NaHCO3, on the other hand, has a substantial impact on the carbonation reaction.As can be seen from Figure 3e, the extent of carbonation is highest when 0.64 M of NaHCO3 is added.The substantial impact of NaHCO3 on the carbonation is due to its dissolution into Na + and HCO3 − ions.Chen et al. [20] state that adding sodium bicarbonate reduces the concentration of magnesium ions required to exceed the solubility product for magnesium carbonate and, thus, promotes the precipitation of magnesite.They explain that this is because in solutions with a large amount of NaHCO3, the concentration of CO3 2− is inversely proportional to CO2 pressure and proportional to the square of the concentration of NaHCO3.A reversal of this effect occurs at higher concentrations of NaHCO3 possibly because the solution pH increases excessively, slowing the dissolution of the silicate minerals.
Characterization of Fully-Carbonated Olivine
Full conversion (0.515 g, CO2/g, olivine = 99.0%± 3.9%) of olivine was achieved by carbonating it for 72 h at 200 °C and 35 bar CO2 partial pressure with the addition of 1 M NaCl and using a solids loading of 200 g/800 mL.This fully carbonated olivine was the only carbonated material used in the acid leaching experiments presented in Section 3.3.The chemical and mineralogical compositions as well as the microstructural characteristics of the carbonated olivine are very important for interpretation of the leaching results.
The chemical composition of the fully carbonated olivine was determined by digestion followed by ICP-MS (for Al, Ca, Cr, Fe, and Ni) and by XRF (for Co, Mg, Mn, and Si).The obtained results, in decreasing order, were: 18.1 wt.% Mg; 13.8 wt.% Si; 2.5 wt.% Fe; 0.19 wt.% Ni; 0.17 wt.% Cr; 0.13 wt.% Ca; 0.11 wt.% Al; 0.06 wt.% Mn; 0.01 wt.% Co.The respective weight percentages are lower compared to fresh olivine (Table 1) due to the conversion of magnesium silicate to magnesium carbonate; the addition of CO2 increases the total mass of the olivine by roughly 50%, thus reducing the elemental concentrations.
The particle size distribution of the fully-carbonated olivine can be seen in Figure 4.The average particle size is considerably lower after carbonation as 90 vol.% of the carbonated olivine has a particle size below 42 μm.The BET specific surface area increased from 0.49 m 2 /g to 16.1 m 2 /g.This confirms that carbonation can act as a substitute to more intense comminution of fresh olivine.Two distinctive peaks can clearly be identified in the particle size distribution, with one peak around 5 μm and one peak around 30 μm.The SEM and EDX analyses presented in Figure 5 provide insight into their occurrence.The carbonated olivine consists of small particles that are clusters of small spheres, and larger crystalline particles.The small particles are analyzed to be silica (SiO2) rich (Figure 5c), whereas the crystalline particles are primarily magnesium carbonate (MgCO3), seemingly in solid-solution with iron carbonate (FeCO3) (Figure 5b).The appearance of clustered silica particles is likely due to the aggregation of smaller polymerized silica particles.Surface silica will polymerize setting free a water molecule.The polymerized silica will break off from the surface of the olivine particle forming small silica particles free in solution.These small particles can either grow by further condensation or by aggregating together forming the clusters that are shown in Figure 5c.This reaction mechanism was proposed for the preparation of silica from olivine by Lieftink and Geus [26] and was confirmed by Lazaro et al. [27].
The fully carbonated olivine was analyzed using EPMA to map the concentration of each element within the particles.The concentrations of chromium, iron, magnesium and nickel are shown in Figure 6.Chromium is concentrated in iron-rich particles, in agreement with chromite composition.Iron distribution largely coincides with that of magnesium, confirming the solid-solution carbonate formation ((Mg1−x,Fex)CO3).Nickel appears to be more dispersed throughout the material, although the outline of the carbonate particles can be seen in its map, which would suggest a preference for the carbonate phase.Highly concentrated nickel is also found in a few small particles and the few chromium-rich particles.Additional elemental distributions are mapped in Figures 7 and S8, taken at slightly lower magnification of another area of the embedded sample.In Figure 7 it is seen that silicon is present in regions poor in magnesium and iron, which supports the EDX results (Figure 5) in that silicon forms distinct particles separate from the carbonate phase.The few calcium-rich particles have magnesium, silicon, and aluminum co-present (Figure 7), and low levels of carbon (Figure S8), which could indicate a calcium-magnesium-aluminum silicate originally present in the olivine or formed during the reaction.A possible natural analogue would be alumoå kermanite ((Ca,Na)2(Al,Mg,Fe 2+ )(Si2O7)) [28].Spot EPMA analysis on certain regions of the polished sample is shown in Figure 8, with their chemical compositions given in Table 3.The four analyzed spot areas are very distinctive.Area 001 is a chromium-and iron-concentrated chromite particle.Area 002 contains mainly a combination of magnesium and silicon, meaning it is a rare grain of unreacted ferroan-forsterite.Area 003 contains magnesium, iron and carbon, indicative of ferro-magnesite ((Mg1−x,Fex)CO3).Area 004 is a polymerized silica cluster, as it contains high concentrations of silicon, low concentrations of magnesium and carbon that originates from the embedding resin that penetrates the gaps between the small agglomerated silica particles.These results reaffirm the EDX analysis presented in Figure 5, indicating that carbonate and silica phases (as well as unreacted mineral grains) form distinct particles in the product powder.This means that these particles may be separable by physical means, in view of producing high-value product streams.
Inorganic Acids
Limited research mentions the acid leaching of nickel from olivine.Our own experiments have two main objectives.The first objective is to look at the leaching behavior of various inorganic and organic acids for olivine.The second objective is to investigate the impact of carbonation as a pre-treatment step to leaching.
The metal extractions for the leaching from fresh and carbonated olivine with the three inorganic acids tested can be seen in Figure 9. Sanemasa et al. [29] found that there is no preferential leaching of the silicate structure in olivine of one metal over the other.Our results, both for fresh as well as carbonated olivine, confirm these findings: for each acid, at the various concentrations, there is no preferential leaching of magnesium, iron or nickel.In all cases, chromium leaching was minimal, which confirms that it is located in different particles than the ones containing nickel, and that those chromium-containing particles do not undergo dissolution during leaching, while the ones containing nickel do.
Terry et al. [30] noted that olivine dissolves congruently, meaning a complete breakdown of the silicate structure to give, percentage-wise, the same amount of silica and metal cation leached.The leaching results for fresh olivine in Figure 9 do not entirely confirm these findings.It seems that less silica is solubilized than would be expected based on a congruent dissolution.This indicates that dissolved silica will precipitate, possibly by forming a silica gel.Notably, the results for carbonated olivine (Figure 9b,d,f) show substantially less leaching of silica compared to fresh olivine.This occurs because the metal silicates in the fresh olivine are transformed to metal carbonates in carbonated olivine.Carbon dioxide will now be released instead of silica during the acid leaching.The polymerized silica in the carbonated olivine does not significantly partake in the dissolution reactions of the acid attack.
Increasing the acid concentration enhances the leaching of most elements.The leaching of Mg, Fe and Ni from fresh olivine increased from below 10% at 0.02 N to above 60% at 2.56 N for all inorganic acids tested.This impact was even more apparent for the leaching of carbonated olivine with HCl and HNO3, where Mg, Fe, and Ni extractions increased from below 10% at 0.02 N to nearly 100% at 2.56 N. Most notably, the leaching of nickel from carbonated olivine is substantially better than from fresh olivine when using hydrochloric and nitric acids.A concentration of 2.56 N of either acid leaches, respectively, 100% and 91% of nickel from carbonated olivine.For fresh olivine these values are only 66% and 64%, respectively.
In terms of the differences between the three acids, all three inorganic acids show similar leaching behavior of fresh olivine for the same acid normalities.Between 60% and 70% of Mg, Fe and Ni is leached with 2.56 N of either HCl, H2SO4 or HNO3.At intermediate normalities, sulfuric acid performed better; Sanemasa et al. [29] explain that sulfate is better at stabilizing the metal cations than the chloride anions.For carbonated olivine, however, there is a clear and opposite distinction between the leaching behaviors of nitric and hydrochloric acid compared to sulfuric acid.H2SO4 leaches substantially less Mg, Fe and Ni than HCl and HNO3.Whereas H2SO4 leaches only about 30% Mg, Fe and Ni at 2.56 N, for HCl and HNO3 this is equal to about 90% to 100%.Additional leaching tests, discussed later on, were performed to investigate this phenomenon.sulfuric (c,d) and nitric (e,f) acids; Tables S2 and S3 provide statistical data on replicates.
The leaching enhancement ratio of nickel from carbonated olivine compared to fresh olivine can be seen in Table 4; the enhancement ratio (φ) is calculated as the percent of nickel leached from carbonated olivine (χNi,carb) over the percent of nickel leached from fresh olivine (χNi,fresh): φ = χNi,carb/χNi,fresh.For HCl and HNO3, the leaching ratio increases with increasing acid concentration, peaking at 0.64 N, where carbonated olivine leaches, respectively, 1.77 and 1.72 times better than fresh olivine.The slight decrease at even higher concentrations can be attributed to the fact that the leaching of carbonated olivine is approaching completion, while fresh olivine still benefits significantly from higher acid concentrations (Figure 9).For H2SO4, the leaching of carbonated olivine compared to fresh olivine follows the opposite path, decreasing with increasing acid concentrations.This indicates that the inhibiting effect of H2SO4 is dependent on the sulfate ion concentration.The leaching ratio for H2SO4 reaches a minimum at 0.32 N with only 0.42 times the percentage of nickel extracted from carbonated olivine compared to fresh olivine.
Organic Acids
The metal extractions from fresh and carbonated olivine by the three organic acids tested are shown in Figure 10.Some similar conclusions can be drawn as for inorganic acids.There is also essentially no preferential dissolution of Mg, Fe and Ni both from fresh and carbonated olivine.Although there is also less silica in solution as would be expected based on a congruent dissolution of the fresh olivine silicate structure, this difference is less than for inorganic acids.Silica is, thus, less prone to precipitation in the presence of organic acids, but is equally insoluble when already precipitated as colloidal silica in the case of carbonated olivine.
The leaching enhancement ratio for lactic acid increases with increasing acid concentrations, reaching a maximum of 1.48 at 2 N (Table 4).Contrary to lactic and formic acids, leaching with citric acid remains constant (for fresh olivine), or decreases slightly (for carbonated olivine) with increasing acid concentration (Figure 10).The enhancement ratio remains above one up to 1 N, with a maximum of 1.25 at 0.5 N (Table 4).Citrate ions act as chelating agents and will form strong metal-ligand complexes that enhance leaching.It appears that due to these highly soluble complexes, citric acid already reaches its maximum leaching potential at low concentrations.Tzeferis et al. [31] also found that increasing the citric acid concentration from 0.5 M to 1.5 M did not increase the nickel or iron extraction from nickeliferous ores at low pulp densities.At higher pulp densities, the percentages of nickel and iron leached at 0.5 M were substantially lower than for low pulp densities, and they did increase when the citric acid concentration was increased to 1.5 M.This confirms the idea that citric acid has an intrinsic maximum for the leaching of metals or specific ores, which, once reached, will not increase further when increasing the citric acid concentration.
Comparing the leaching results from fresh olivine with carbonated olivine, it can be seen that there is also a large discrepancy among the organic acids (Table 4).Nickel leaching by citric and lactic acids is enhanced when olivine is carbonated (i.e., enhancement ratios > 1).Leaching by formic acid, however, just like with sulfuric acid, experiences a considerable decrease (-32% to -54%) in nickel extraction from carbonated olivine compared to fresh olivine.S4 and S5 provides statistical data on replicates.
Sulfuric Acid Leaching Investigation
Aforementioned results for fresh olivine show a limited leaching of nickel, magnesium and iron (the cationic components of ferroan-forsterite) with H2SO4 compared to HCl and HNO3.Terry et al. [30] noted that sulfate ions form stronger metal cation-acid anion complexes than chloride ions, which would lead to an increased reactivity with sulfuric acid.For carbonated olivine, however, this is completely reversed and thus does not follow the existing theory.One possibility is that the sulfate ions form insoluble compounds with the components of carbonated olivine (carbonate and silica phases), passivating the particles.To test this hypothesis, leaching tests were conducted using sulfuric acid and mineral mixtures.The mineral mixtures consisted of one or more of the following components: pure magnesite (MgCO3), pure fumed (amorphous) silica (SiO2), fresh olivine and fully carbonated olivine.
If the carbonation products (magnesite and silica) had unexpected behavior in contact with sulfuric acid, these tests would help uncover these effects.Since the pure phases did not contain nickel or iron, leaching data for magnesium was collected.
Figure 11 presents the data on magnesium leaching from the pure components (Figure 11a) and from the mineral mixtures (Figure 11c).Figure 11a presents the raw leaching data (expressed as g, Mg/100 mL) and the data expressed as a percentage of the theoretical maximum leaching extent (based on Mg content in the mineral: 0.577 g, Mg/100 mL for pure MgCO3, 0.543 g, Mg/100 mL for fresh olivine, and 0.362 g, Mg/100 mL for fully carbonated olivine).It is found that pure MgCO3 leaches completely in 1.28 N H2SO4, so there is no negative effect of sulfate anions, and no precipitation of insoluble compound.In the case of olivine, leaching is much more extensive in the case of fresh olivine (77%) compared to fully carbonated olivine (20%).This confirms the effects seen in previous results (Figure 9). Figure 11b presents kinetic data on the leaching of fresh and fully carbonated olivine by 1.28 N H2SO4.Leaching of the latter is slower and stalls after two hours, while leaching of the former continuously increases over time, although it is also relatively slow (leaching after 2 h is less than a third that after 24 h).
Figure 11c presents the leaching results of four mineral mixtures.For each mixture, data is presented in raw format (g, Mg/100 mL) and as a percentage of the theoretical maximum leaching based on the mixture's composition and the leaching results obtained for the singular components (Figure 11a).The first mixture contains only the pure minerals, and shows that SiO2 does not prevent leaching of MgCO3, as the leaching extent reaches 97% of the predicted value (i.e., within experimental uncertainty).The second mixture shows that MgCO3 does not alter the leaching of fresh olivine, as the leaching extent is equal to the predicted value; hence, no insoluble precipitate forms.Likewise, the third mixture shows that SiO2 does not alter the leaching of fresh olivine (the value greater than 100% is within experimental uncertainty); again, no insoluble precipitate forms.Finally, the last mixture shows that carbonated olivine still leaches poorly and that leaching of MgCO3 is not affected by the presence of carbonated olivine, since the leaching extent is approximately equal to the calculated prediction (97%).This last mixture result shows that no component of carbonated olivine prevents the leaching of pure MgCO3, although the ferro-magnesite present in carbonated olivine is affected.
To better understand what happens with the sulfuric acid-leached carbonated olivine, the leaching residue was characterized by XRD, and results are shown in Figure 1.The diffraction pattern of the leaching residue is very similar to that of the pre-leaching fully carbonated olivine.No additional peaks form after leaching, which indicates that no crystalline precipitates form.Additionally, all significant peaks present in the pre-leaching mineral (attributable to magnesite) are still present after leaching, meaning that magnesite leaching is not extensive, as the magnesium leaching data indicated (Figure 11a).The main difference seen between the two diffractograms is that the leached residue has a larger "bump", which is attributable to a quasi-amorphous phase.Since this bump is near the theoretical location for crystalline silica (i.e., quartz), it is possible to infer that it represents the colloidal silica content of the material (and hence is also present in the pre-leached mineral).The reason why the bump grows after leaching, is that the crystalline content is reduced, due to partial dissolution of magnesite.These XRD results suggest that no precipitate forms after leaching, either crystalline or quasi-amorphous in nature.Based on leaching and XRD results, it appears that the only explanation for the poor leaching results of fully carbonated olivine in sulfuric acid is that the magnesite present in carbonated olivine is a solid-solution of MgCO3 and FeCO3.Since pure MgCO3 leaches adequately in sulfuric acid, and amorphous silica does not affect leaching, it could be that the iron content of the ferro-magnesite helps to passivate the mineral particles after an initial limited leaching extent.Further research is needed to characterize this mechanism.
Conclusions
The increasing demand and diminishing availability of raw materials requires us to look beyond conventional sources.In the future, the importance of low-grade ores and waste streams as a source for raw materials will only increase.The objective of this work was to look at the extraction of nickel from a low-grade silicate ore, namely olivine.This was achieved by combining conventional acid leaching with a pre-treatment step in which the olivine underwent mineral carbonation.It is anticipated that the mineral carbonation pre-treatment approach may also be applicable to other ultrabasic and lateritic ores.
In a first processing step, olivine was fully carbonated at high CO2 partial pressures (35 bar) and optimal temperature (200 °C) with the addition of pH buffering agents.Although substantial research has looked into the carbonation of olivine, reported extents of carbonation are usually lower than those achieved in this work (i.e., full carbonation).The carbonation increased linearly with time, indicating that carbonation is not limited by the formation of a passivating silica layer under the processing conditions used.This was confirmed by SEM analysis of partially carbonated olivine, showing that after carbonation distinct crystalline magnesium carbonate particles and clusters of nano-silica are formed.High solids loading and mixing rate appear to enhance the carbonation reaction substantially due to the olivine particles being eroded by increased particle collisions.Using electron probe micro analysis it was possible to map the distribution of both major (C, Mg, Si) and minor (Al, Ca, Cr, Fe, Ni) elemental components in this material.The main products of the carbonation reaction included quasi-amorphous colloidal silica, chromium-rich metallic particles, and ferro-magnesite.
The second stage of this work looked at the extraction of nickel and other metals by leaching fresh as well as carbonated olivine with an array of inorganic and organic acids to test their leaching efficiency.Compared to leaching from untreated olivine, the percentage of nickel extracted from carbonated olivine by acid leaching was significantly increased.For example, using 2.6 N HCl and HNO3, 100% and 91% of nickel was respectively leached from carbonated olivine.This compares to only 66% and 64% nickel leached from untreated olivine using the same acids.Similar trends were observed with the organic acids used, where the leaching enhancement reached a factor of 1.25 using 0.5 N citric acid, and a factor of 1.48 using 2 N lactic acid.It was found that two acids, sulfuric and formic, are unsuitable for leaching of carbonated olivine.
Looking at future developments, it should be pointed out that in the present work the metal extraction was performed after carbonation.Selective metal recovery of the olivine during carbonation might substantially enhance the leaching and reduce extractant consumption.During carbonation, the metals dissolve due to the acidic aqueous solution as a result of carbonic acid formation.The residual solids could then be separated at high temperatures and pressures prior to exiting the reactor, producing a purified silica stream and a concentrated metal liquor or metal precipitate.Another option is to separate the final products (i.e., silica-rich clusters, carbonate crystals, and metallic particles) prior to leaching, and leach only the metal-rich fraction; this may reduce the amount of sequestered CO2 liberation.Lastly, re-use of any CO2 released during acidification can contribute to lowering processing costs associated with CO2 concentration from industrial emission sources.These processes are presently in development using the proprietary reactor technology of Innovation Concepts B.V. called the "CO2 Energy Reactor" [13].This reactor makes use of a Gravity Pressure Vessel (GPV) that supports hydrostatically built supercritical pressures, runs in autothermal regime by recycling the exothermic carbonation heat, and operates under turbulent three-phase plug flow configuration.This reactor is expected to allow faster carbonation conversion and more economical processing.
Figure 2 .
Figure 2. Fresh olivine backscattered scanning electron image (top) and EPMA mapping of nickel concentrations (bottom); concentration scale is relative to max/min levels.
Figure 3 .
Figure 3. Influence of carbonation process parameters (temperature (a); residence time (b); solids loading (c); NaCl concentration (d); and NaHCO3 concentration (e)) on extent of olivine carbonation; TableS1provides detailed data values and statistics on replicates.
Figure 4 .
Figure 4. Particle size distribution of fully carbonated olivine, determined by LDA.
Figure 5 .
Figure 5. SEM and EDX analyses of individual fully carbonated olivine particles: (a) fully carbonated olivine at low magnification; (b) ferro-magnesite crystal; (c) colloidal silica cluster.Note that Au and C signals are also attributable to gold coating and carbon tape.
Figure 6 .
Figure 6.Backscattered scanning electron image and elemental concentration mapping (carbon, chromium, magnesium, iron and nickel) of fully carbonated olivine, captured by EPMA; concentration scale is relative to max/min levels; color gradient in Mg map is due to curvature artifact that occurs at these relatively low magnifications.
Figure 7 .
Figure 7. Backscattered scanning electron image and elemental concentration mapping (silicon, iron, magnesium, calcium and aluminum) of fully carbonated olivine, captured by EPMA; color scale indicates max/min levels.
Figure 8 .
Figure 8. Backscattered scanning electron image of fully carbonated olivine and EPMA scanning of selected areas; mounted sample was coated with platinum and palladium for the analysis.
Table 2 .
Parameter values used in the carbonation experiments.
Table 3 .
Chemical composition (%), determined by EPMA, of spot areas in Figure8; carbon as elemental C and other elements as oxides.
Table 4 .
Nickel leaching enhancement ratios of carbonated olivine over fresh olivine for inorganic and organic acids. | 9,321.2 | 2015-09-11T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Influenza immunisation in pregnancy is efficacious and safe, but questions remain
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Influenza immunisation in pregnancy is efficacious and safe, but questions remain
Pregnant women and infants are at high risk of severe influenza. 1,2 Since 2012, WHO has recommended influenza immunisation during pregnancy in any trimester and targets pregnant women as a high priority in annual influenza vaccination programmes. Although trivalent seasonal inactivated influenza vaccine (IIV) has shown efficacy against influenza infections in both pregnant women and infants, 3,4 the optimal timing of vaccination and effect on infant outcomes and safety remain controversial. The Bill & Melinda Gates Foundation funded three large randomised controlled trials in South Africa, Mali, and Nepal, which were done between 2011 and 2014, to increase the evidence base for the effects of maternal influenza immunisation. [5][6][7] The three trials showed that IIV was effective in preventing laboratory-confirmed influenza in pregnant women and in infants younger than 6 months. In the trial done in Nepal, maternal immunisation reduced the frequency of low birthweight infants by 15%.
In The Lancet Respiratory Medicine, Saad B Omer and colleagues 8 report the pooled analysis of these three trials, which included 10 002 pregnant women (5017 assigned to IIV and 4985 assigned to control) and 9800 liveborn infants (4910 livebirths from women who received IIV, and 4890 livebirths from women who received control) representing the largest dataset on women and newborns concerning influenza immunisation during pregnancy.
Several lessons can be learned from the results of Omer and colleagues' study. First, trivalent IIV administered at any time during pregnancy is effective in protecting pregnant women against PCR-confirmed influenza with a vaccine efficacy of 42% (95% CI 12-61) during pregnancy and 60% (36-75) in the postpartum period. Efficacy lasted until 6 months after vaccination (49%, 95% CI 29-63), which has implications for countries with year-round influenza virus circulation.
Second, although maternal immunisation appears to be effective in protecting infants up to 6 months of age (vaccine efficacy 35% [95% CI 19 to 47] on cumulative episodes of PCR-confirmed influenza), this protection is significant only up to 4 months of age (56% [28 to 73] before 2 months and 39% [11 to 58] between 2 and 4 months) but not between 4 and 6 months of age (19%, 95% CI -9 to 40) underscoring the progressive decline in maternal antibody titre. This finding has two implications. First, it supports the immunisation strategy based on the passive transplacental transfer of anti-influenza antibodies, which allows for effective protection of children who cannot be vaccinated against influenza, because influenza vaccines are not approved in children 0-6 months old. Second, it shows that there is a period between the ages of 4 months (disappearance of maternal antibodies) and 6 months (possible start of vaccination) when children are no longer protected, which should be considered in immunisation strategies based on recombinant or high dose or adjuvanted influenza vaccines during pregnancy. Moreover, in this pooled analysis, the vaccine was not effective against influenza B in infants (vaccine efficacy 13%, 95% CI -21 to 37). This finding could be explained by the frequent mismatch between vaccine and circulating influenza B strains in trivalent IIV. Research on the use of the quadrivalent IIV in pregnant women, which would probably improve the overall vaccine efficacy and the efficacy against influenza B, especially in children, is therefore warranted.
Third, the optimal timing of immunisation during pregnancy remains unclear. Whether the gestational stage of pregnancy affects responses to vaccines has not yet been extensively studied and conflicting results on seroconversion after seasonal influenza immunisation exist. In this study, there was no difference in efficacy against PCR-confirmed influenza in infants when the mothers were vaccinated before or after 29 weeks of gestation. Concerning the mothers, there was no efficacy against PCR-confirmed influenza when they were vaccinated before 29 weeks gestational age (vaccine efficacy 30%, 95% CI -2 to 52). As explained by the authors, this absence of efficacy in mothers vaccinated before 29 weeks gestational age is probably due to statistical considerations (lack of power), rather than a real difference in efficacy, as this would be inconsistent with studies that have shown a waning serological response to influenza immunisation as pregnancy progresses. 9 Fourth, these results confirm that seasonal influenza vaccination during pregnancy is safe. In addition to studies that did not show an increased incidence of adverse events in mothers, 3 safety in fetuses and newborns was also shown when considering low birthweight, stillbirth, preterm birth, and small for gestational age. However, contrary to what was suggested in the trials in Bangladesh 4 and Nepal, 7 the pooled data show no positive association between maternal immunisation and low birthweight. These findings would be a strong argument for recommending generalised maternal influenza immunisation in resource-limited countries and suggest that further research considering the heterogeneity of the findings across countries is needed.
In conclusion, these pooled data confirm that influenza immunisation during pregnancy is safe and effective for protecting both women and infants. Further research is warranted to consider more immunogenic vaccines to fill the protection gap in infants between 4 and 6 months of age and improve understanding of the association between maternal immunisation and child weight and length at birth at 6 months of age.
PL has received personal fees and non-financial support from Pfizer and Sanofi Pasteur. VT has received personal fees from Alexion and grants and personal fees from Roche Diagnostics and is a member of the scientific board of Obseva. OL has received personal fees from Sanofi Pasteur, grants, personal fees, and non-financial support from Pfizer, Janssen, and Sanofi Pasteur-Merck Sharp & Dohme, and grants, and non-financial support from GlaxoSmithKline. | 1,423.8 | 2020-06-01T00:00:00.000 | [
"Economics"
] |
BTK gatekeeper residue variation combined with cysteine 481 substitution causes super-resistance to irreversible inhibitors acalabrutinib, ibrutinib and zanubrutinib
Irreversible inhibitors of Bruton tyrosine kinase (BTK), pioneered by ibrutinib, have become breakthrough drugs in the treatment of leukemias and lymphomas. Resistance variants (mutations) occur, but in contrast to those identified for many other tyrosine kinase inhibitors, they affect less frequently the “gatekeeper” residue in the catalytic domain. In this study we carried out variation scanning by creating 11 substitutions at the gatekeeper amino acid, threonine 474 (T474). These variants were subsequently combined with replacement of the cysteine 481 residue to which irreversible inhibitors, such as ibrutinib, acalabrutinib and zanubrutinib, bind. We found that certain double mutants, such as threonine 474 to isoleucine (T474I) or methionine (T474M) combined with catalytically active cysteine 481 to serine (C481S), are insensitive to ≥16-fold the pharmacological serum concentration, and therefore defined as super-resistant to irreversible inhibitors. Conversely, reversible inhibitors showed a variable pattern, from resistance to no resistance, collectively demonstrating the structural constraints for different classes of inhibitors, which may affect their clinical application.
Introduction
Bruton Tyrosine Kinase (BTK) belongs to the TEC-family of non-receptor tyrosine kinases and is an important component of the B-Cell Receptor (BCR) signaling pathway [1,2]. BTK is essential for the development and survival of B-cells [3]. Lossof-function variations in BTK cause X-Linked Agammaglobulinemia [4,5] due to a block in the B-cell development at the transition from the pro-B to the pre-B cell stage, causing an increased proportion of pro-B and pre-B-I cells and a reduction of all subsequent stages [6,7]. While aberrant BTK activation and expression have been reported in malignant B-cells [8], it is generally considered that many B-cell tumors are addicted to BTK, since intact BCR-signaling is needed for tumor cells to thrive [9][10][11]. In a similar way, overexpression and constitutive phosphorylation of BTK lead to the activation of phospholipase C-γ2 (PLCG2), and of extracellular signal-regulated kinase (ERK) and nuclear factor kappa-beta, which promote upregulation of pro-survival signals and migration of chronic lymphocytic leukemia (CLL) cells [12,13]. Although the need for enhanced BCR-signaling remains elusive, signaling through BTK promotes adhesion and chemotaxis [14,15].
BTK inhibition is an effective strategy that has revolutionized the treatment of B-cell malignancies [12,16,17].
Ibrutinib (Imbruvica ® ) is the most studied BTK inhibitor and the first in this new class approved by the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) [18]. Resistance to ibrutinib treatment has been attributed to the selection of cells carrying a pathogenic mutant altering BTK or its downstream effector PLCG2 [19]. The most common resistance variation results in a cysteine (C) to serine (S) substitution at position 481, which prevents the covalent binding of ibrutinib to the thiol group located at the ATP-binding site [20,21]. When this alteration is introduced into the germline of mice, B-cell development remains normal, demonstrating functional interchangeabilty [22]. The BTK variants C481F, C481G, C481R and C481Y are enriched in some CLL patients, but occur at much lower frequency than C481S [21,23,24].
Phosphorylation of tyrosines Y551 and Y223 reflects the activation status and catalytic activity of BTK, respectively [25]. Ibrutinib inhibition effectively reduces autophosphorylation of Y223 in wild-type BTK as well as the phosphorylation of PLCG2, but does not impair activity of BTK variants C481S/T [26]. C481S/T variants are resistant to ibrutinib treatment, whereas C481G has only very weak activity upon exposure, and C481F/R/W/Y are catalytically inactive [26].
A new generation of irreversible and reversible BTK inhibitors has been developed to reduce side effects and to overcome resistance toward ibrutinib treatment [27]. Acalabrutinib is a second-generation BTK inhibitor, which covalently binds to C481. It has higher selectivity and reduces the number of adverse effects compared to ibrutinib [28,29]. Acalabrutinib has been approved by the FDA for the treatment of mantle cell lymphoma (MCL) and CLL/ small lymphocytic leukemia [30]. Zanubrutinib is also a more selective, irreversible BTK inhibitor and FDA approved for the treatment of MCL [31]. Zanubrutinib shows potent preclinical activity and minimal off-target effects in patients with Waldenström macroglobulinemia [32,33].
The gatekeeper residue in BTK is located in the regulatory spine, a conserved structure and key component in the control of the TEC-family kinase domain activity [34]. The gatekeeper residue plays an important role in the access to a deep pocket in the catalytic domain, and activation of an isolated kinase domain is independent of its N-terminal portion, as has been demonstrated by threonine to methionine substitution of the gatekeeper residue [35].
Reversible, non-covalent inhibitors are also selective for BTK, and since they do not bind to C481, the inhibition is likely to be at least partially maintained in presence of the C481S variant, in analogy with previous reports [36]. Thus, non-covalent inhibitors have shown high capacity against BTK variants including C481R and T474I/M in in vitro assays [36]. The non-covalent BTK inhibitor fenebrutinib (GDC-0853) was demonstrated to be safe and has been used in phase I studies [37]. Equivalent BTK inhibition was shown for wild-type and the C481S variant as measured by Y223 phosphorylation, when transfected into HEK-293T cells [38]. Viability was reduced and chemokine CCL3 production was decreased in C481S patient-derived clones treated with fenebrutinib in comparison to ibrutinib [38]. Another non-covalent inhibitor, RN486, is under preclinical evaluation and prevents type I and type III hypersensitivity responses, including anti-inflammatory effects in mice with collagen-induced arthritis [39,40]. CGI-1746, which also binds reversibly to BTK, has been used in rheumatoid arthritis and multiple myeloma mouse models [41].
Acquired mutations at the gatekeeper residue play an important role by causing resistance to many tyrosine protein kinase inhibitors [9]. For example, in chronic myeloid leukemia a T315I replacement in the BCR-ABL fusion causes resistance to the kinase inhibitor imatinib [42]. The anaplastic lymphoma kinase-inhibitor, crizotinib, is affected by the gatekeeper mutation L1196M [43]. In lung cancer, T790M substitution in epidermal growth factor receptor (EGFR) results in resistance to gefitinib, erlotinib, and afatinib. The mechanism of resistance differs from that in BTK, because T790M in EGFR increases the ATP-binding affinity [44]. In contrast, in BTK, the gatekeeper residue T474 is located at the edge of the regulatory spine, which maintains a compact and linear architecture important for BTK activation and function [45].
Inhibitors
All BTK inhibitors were kept at −20°C and dissolved in dimethyl sulfoxide (DMSO) at 10 mM concentration. For each experiment fresh dilutions were prepared in phosphate buffered saline (Sigma-Aldrich). 36-48 h post transfection, cells were starved under serum-free conditions for 5 h and inhibitors were added during the last hour of starvation. BTK inhibitors were purchased from the following suppliers, ibrutinib and acalabrutinib (Selleckchem, Houston, TX, USA); zanubrutinib (Chemgood, Glen Allen, VA, USA); RN486, CGI-1746 and fenebrutinib (MedChemTronica, Stockholm, Sweden).
Whole-cell lysate was obtained by using modified RIPA buffer (50 mM Hepes, 120 mM NaCl, 1% NP40, 10% glycerol, and 0.5% sodium deoxycholate) containing a phosphatase inhibitor cocktail (Roche, Basel, Switzerland). Cell lysates were preheated for 5 min at 65°C with sample buffer (0.2 M sodium carbonate, 0.25 M DL-dithiothreitol, 0.5% glycerol, and 2% sodium dodecyl sulfate). Immunoblotting was performed as previously described [26]. The following antibodies were used for the immunoblotting, polyclonal rabbit anti-BTK and anti-actin (Sigma-Aldrich): mouse anti-BTK (pY551), clone 24a/BTK (Y551) was from BD Biosciences; anti-BTK (pY223) clone EP420Y and polyclonal anti-PLCG2 (pY753) were from Abcam; rabbit anti-PLCG2 polyclonal antibody was from Southern Biotech. Odyssey infrared imaging system was used for scanning after the membranes were incubated with secondary antibodies according to the manufacturer's protocol (all the secondary antibodies and imaging system were from LI-COR Biosciences GmbH). The signals of total and phosphorylated proteins from duplicate/triplicate or higher number of experiments were quantified by the densitometric program NIH ImageJ 1.52a. β-actin served as internal loading control and the values were normalized to wild-type BTK.
Generation of BTK gatekeeper variants
In order to study the efficacy of both covalent and noncovalent BTK inhibitors in the context of resistant CLL cells, we created BTK variants by substituting the gatekeeper T474 residue, known to be associated with drug resistance [46,53]. Single nucleotide changes in the ACT codon for T474 yield 5 amino acid substitutions T474A/I/N/ P/S. T474I and T474S have been found in sub-clones of CLL cells in patients [46]. Variants T474E/F/L/M/Q/V required two or three nucleotide changes. Threonine to methionine substitution in the gatekeeper residue is related to drug resistance e.g., in EGFR [54]. T474E, T474F, T474L, T474Q and T474V replacements change or remove the charge or polarity of the site. We further investigated the potential influence of gatekeeper residue substitutions when combined with replacement of C481. For that purpose, we generated five double BTK variants: T474A/C481S, T474I/C481S, T474M/ C481S, T474M/C481T, and T474S/C481S (Table 1) [26].
Expression and catalytic activity of BTK gatekeeper variants in COS-7 cells
Constructs with both single and double BTK variants were transfected into COS-7 cells to enable analysis in the absence of endogenous BTK. Expression and activity results are shown in Fig. 1. The expression levels of BTK variants varied ±40% of wild-type.
Phosphorylation of Y223 in the SH3 domain of BTK serves as a measure of BTK catalytic activity [22,26,55]. While expressed at similar levels, the variants show activity differences. Single variants T474E/F/I/Q/S/V presented similar Y223 phosphorylation status as wild-type. The catalytic activity of T474A/N/P was substantially reduced, whereas T747L and T747M variants showed increased phosphorylation. Interestingly, the double variants T474I/ C481S, T474M/C481S, and T474M/C481T showed significantly, elevated activity, whereas T474A/C481S and T474S/C481S variants exhibited slightly reduced activity.
Activity of BTK variants in non-lymphoid cells and in B lymphocytes
Further experiments were performed in HEK-293T and B7.10 cells. Similar to COS-7 cells, HEK-293T cells enabled the analysis without the influence of endogenous BTK. The B7.10 cell line is a BTK knock-out subline generated from the parental chicken B lymphoma DT40 [48]. Variants T474I, T474M, and T474S were transfected into these cell lines. Moreover, the double variants T474I/C481S, T474M/C481S, and T474M/C481T, which show higher catalytic activity, were also transfected. When the catalytic activity was evaluated in the three cell lines, COS-7 cells showed the highest increase in phosphorylation at Y223 (Supplementary Fig. 1).
Gatekeeper variants and their resistance to ibrutinib and acalabrutinib
To evaluate the efficacy of ibrutinib to block BTK phosphorylation in the gatekeeper variants, we initially exposed transfected COS-7 and HEK-293T cells to ibrutinib in different concentrations. First, the inhibition of BTK phosphorylation at Y223 was analyzed in COS-7 cells using 0.5 μM of ibrutinib, which is the pharmacological concentration obtained in serum of treated patients [56]. Ibrutinib inhibited almost completely Y223 phosphorylation, whereas Y551 phosphorylation, which is dependent on other kinases, was only partially affected (Fig. 2), similar to previous observations [26].
The BTK variants sensitive to ibrutinib were T474A/ E/I/L/N/Q/S/V, and unexpectedly also T474A/C481S and T474S/C481S ( Fig. 2A and C). While T474S/C481S was partially sensitive, T474A/C481S was fully sensitive at 0.5 μM concentration and resisted three washouts. As expected, the T474M variant was insensitive to ibrutinib at 0.5 μM, whereas the resistance was reduced at 2 μM and lost at a concentration of 4 μM. The double mutants, T474I/ C481S, T474M/C481S, and T474M/C481T showed an unexpected super-resistance (defined as ≥16-fold the pharmacological serum concentration) to ibrutinib, for which BTK activity was not robustly blocked even when increased 120fold (64 μM) over the physiological concentration (Fig. 2B).
The potential effect of the single methionine, isoleucine or leucine substitutions in the gatekeeper residue on ibrutinib binding to BTK was also assessed by washout experiments. Our results show that both T474S/I, being sensitive to 0.5 μM and T474M being only sensitive at 4 μM, variants behave as wild-type with regard to washouts, indicating that these gatekeeper substitutions do not abrogate ibrutinib covalent binding ( Supplementary Fig. 2).
Experiments in HEK-293T cells confirmed the results obtained in COS-7 cells. The outcome for T474I/M/S, T474I/C481S, and T474M/C481S was similar to that in COS-7 cells. An exception, for which we have no explanation, was T474M/C481T, whose phosphorylation was essentially completely blocked by 64 μM of ibrutinib in HEK-293T, but was less affected in COS-7 cells (Fig. 2 and Supplementary Fig. 3).
The effect of the variants on the second-generation covalent BTK inhibitor, acalabrutinib, was also tested. We transfected variants T474I/M/S, T474I/C481S, and T474M/C481S into T474V T474E T474Q Wild-type T474A T474S T474N T474I T474L kDa COS-7 cells and exposed the cells to a series of dilutions of acalabrutinib, from 1.5 μM to 96 μM. Similar to ibrutinib, acalabrutinib inhibition is affected by substitutions at both C481 and T474. A concentration of 12 μM of acalabrutinib was needed to block at least 70% of the Y223 phosphorylation in T474M, and the super-resistance was maintained in the double variants, as in the case of ibrutinib treatment. The catalytic activity was not completely blocked even at 96 μM of acalabrutinib ( Supplementary Fig. 4). Zanubrutinib was studied in COS-7 cells transfected with T474I/C481S and T474M/C481S variants. Zanubrutinib is more selective for BTK and has fewer off-targets than ibrutinib [51]. Zanubrutinib binds covalently to C481, and similar to ibrutinib and acalabrutinib, it could overcome the double replacements, measured as phosphorylation at Y223, only at very high concentration (64 μM) ( Supplementary Fig. 5). Collectively, this set of data demonstrates that all the investigated irreversible inhibitors were subject to the same pattern of super-resistance, when both C481 and T474 were replaced.
Non-covalent inhibitors as treatment for BTK variants resistant to ibrutinib
Three commercially available, non-covalent BTK inhibitors RN486, fenebrutinib and CGI-1746 were tested in COS-7 cells. Following transfection with either wild-type or variants of BTK the cells were treated with 1 or 3 μM of the compounds. Similar to wild-type BTK (Fig. 3A, top), 1 μM of non-covalent inhibitors was sufficient to block Owing to that the T747A/C481S and T747S/C481S variants unexpectedly were sensitive and partially sensitive to ibrutinib, respectively (Fig. 2C), we were also interested to see how the variants behave when they are used in the presence of non-covalent inhibitors. We tested the effect of fenebrutinib and this non-covalent inhibitor, as anticipated, inhibited the single mutants, C481S and T474A, and also the double T474S/C481S even after washout. However, surprisingly, the T474A/C481S variant was insensitive to fenebrutinib, providing an example of a resistance mutant for this inhibitor that potentially could occur in tumor patients.
In all the ibrutinib super-resistant double-variants, 1 μM RN486 blocked >50% of the BTK activity and ≥50% phosphorylation of the downstream target PLCG2 (Y753), and almost complete BTK inhibition was obtained at 3 μM (Fig. 3B, top and Supplementary Fig. 6). Apart from T474A/C481S, as mentioned, Fenebrutinib inhibited >50% of BTK and PLCG2 phosphorylation in double variants at 1 μM concentration with the exception of T474I/C481S for BTK and T474M/C481S for PLCG2 (Fig. 3B, middle and Supplementary Fig. 6). Fenebrutinib at 3 μM inhibited >50% of the activity of the T474M/C481T variant (Fig. 3B, middle and Supplementary Fig. 6). CGI-1746 did not block BTK activity in most of the double variants at a concentration of 1 or 3 μM (Fig. 3B, bottom and Supplementary Fig. 6). In order to test the possibility of total block of BTK phosphorylation with fenebrutinib or CGI-1746, the concentration of the inhibitors was gradually increased to 6, 12 and 24 μM. The T474I/C481S variant was chosen for this experiment, since both substitutions have been reported in ibrutinib-resistant patients [46]. BTK phosphorylation activity was only partially blocked even when elevated concentrations of fenebrutinib or CGI-1746 were used (Fig. 4). Our results suggest that the tested non-covalent BTK inhibitors could be further examined for the treatment of C481S and C481T variants and that RN486 would offer the best treatment for the double variants T474I/C481S, T474M/C481S and T474M/C481T.
Bioinformatic and structural analysis of affinity effects due to variants
The effects of the variants were examined utilizing threedimensional structures. Coordinates were available for five of the complexes, in the case of acalabrutinib the inhibitor structure was downloaded and docked based on the matching atoms in ibrutinib.
BTK binding of the inhibitors was based on experimental three-dimensional structures, except for acalabrutinib that was modeled in the closed conformation, as the binding is only relevant in it (Fig. 5). Four variants were substituted in the structures (T474M/I and C481S/T). The effects of the substitutions can be explained based on the structure. C481 in the wild-type forms a covalent bond with the inhibitor, but not in the substitutions (Fig. 5; Fig. 6 left). Further, amino acid alterations can collide with the inhibitor. Combined effects create weaker binding of the inhibitors to the variants. Amino acid changes T474M and T474I shorten the side chain in comparison to the wild-type threonine and thus cannot retain the mode of binding ( Fig. 5; Fig. 6 right). Note that the side chains are flexible and have several favorable rotamers; however, their capability to bind is reduced and thus higher inhibitor concentrations are expected for activity, consequently leading to super-resistance.
The binding mode of the non-covalent inhibitors is very different in comparison to the covalent inhibitors. The gatekeeper residue T474 is important for binding of both types of inhibitors. Super-resistance in double variants emerges because binding interactions are lost or modified at positions important for affinity and/or specificity.
Discussion
We here demonstrate how simultaneous substitution of the C481 residue, to which irreversible BTK inhibitors tether, and of the gatekeeper amino acid, T474, result in superresistance to three clinically approved BTK inhibitors (summarized in Fig. 7). These findings also provide insight into the binding mode of both irreversible and reversible inhibitors. BTK inhibitors are highly efficient in the treatment of CLL and in a group of other B-cell malignancies. Covalent inhibitors ibrutinib, acalabrutinib, and zanubrutinib are approved for clinical use and the non-covalent inhibitor fenebrutinib has been tested in phase 1 and 2 clinical trials [37,38]. Non-covalent BTK inhibitors are needed becausẽ 60% of ibrutinib long-term treated patients obtain resistant sequence variants, mainly in BTK, but also in PLCG2 [21,24,53]. The most frequently mutated site is C481 and the most common substitution is to serine [21,24], but other substitutions may occasionally predominate [21].
Non-covalent BTK inhibitors provide promising treatment options for patients developing drug resistance [36]. Reduction of the inhibitory capacity was previously reported for spebrutinib and GS-4059 in C481S, C481R, T474I, and T474M variants [36]. The poorest inhibition of spebrutinib and of GS-4059 was obtained in the C481S variant and in the T474I or T474M gatekeeper mutants, respectively [36].
Non-covalent inhibitors have an orthogonal binding mode and occupy the kinase domain H3 pocket forming several hydrogen bonds [36,47,50]. Inhibitors such as RN486, bind to BTK using a network of three hydrogen bonds to the kinase-invariant residues K430 and G414 and to T474 (Fig. 5) [47].
Gatekeeper variations have been detected in treated CLL patients with a frequency of 4% and in the presence of the resistant C481S variation [46]. The role of the gatekeeper variations in BTK is not well understood, but since tumor cells carrying them increase in numbers, they act as resistance drivers. A gatekeeper variant could affect the binding of both covalent and non-covalent BTK inhibitors. Substitutions T474I or T474M introduce longer side chains that could sterically interfere the binding of covalent inhibitors and might disrupt the hydrogen bonds needed for the binding of non-covalent inhibitors [36,47]. In a systematic BTK mutagenesis screen, the importance of the T474 variants was reported for non-covalent inhibitors binding [47]. Interestingly, the authors observed co-occurrence of gatekeeper and kinase domain variants (L512M, E513G, F517L, L547P) in cis. Although two variations affecting the same allele might be anticipated to be very rare, they are relatively common in breast cancer patients with alterations to the catalytic subunit of the phosphoinositide 3-kinase alpha (PI3Kα) complex, 95% carrying the double-variant E545K/E726K [57].
Upon substitution of the gatekeeper residue, we found that most, but not all, variants are expressed at normal levels and are catalytically active. BTK was inhibited by ibrutinib in all the single variants with the exception of T474M, which does not explain the enrichment of T474I clones in a subset of treated patients [46]. Variant activities were blocked by 0.5 μM of ibrutinib, which is equivalent to the peak inhibitor concentration in plasma of treated patients [56]. To inhibit the T474M variant, 4 μM of ibrutinib was needed. These results confirm that methionine substitution affects the potency of covalent inhibitors even when C481 is intact in BTK [36].
We generated five double variants to investigate effects on inhibitor binding. T474I/C481S, T474M/C481S and T474M/C481T variants are super-resistant to ibrutinib, while T474A/C481S and T474S/C481S are sensitive and partially sensitive, respectively, at the clinically relevant 0.5 μM concentration. We hypothesize that super-resistance appears when ibrutinib cannot bind covalently to C481S, it acts instead as non-covalent inhibitor and forms hydrogen bonds with E475 and M477 similar to in the wild-type [36]. The replacements at position 474, either by methionine or isoleucine, could alter the binding site and weaken affinity to ibrutinib even at very high concentrations (Fig. 6). The super-resistant variants to ibrutinib T474M/C481S, T474I/ C481S and T474M/C481T are also unsusceptible to acalabrutinib and zanubrutinib.
The sensitivity of T474A/C481S and T474S/C481S variants to ibrutinib is compatible with explanation that substitutions carrying smaller side chains are less likely to clash with other residues. In T474S/C481S the polar character of the side chain is also retained. The single variants T474A and T474S showed slightly reduced enzymatic activity, as previously reported for T474A [35], but since T474S-carrying CLL cells are enriched in ibrutinib-treated patients [46], albeit rarely, this means that the activity is sufficient for tumor cells to thrive. Yet, it was unexpected that the double substitutions T474S/C481S and T474A/ C481S were partially, and fully sensitive, respectively, to ibrutinib at pharmacological concentration. The T474A and T474S gatekeeper replacements alter the binding pocket making it wider, however the serine variant could retain hydrogen binding to ibrutinib. In the case of alanine, we can hypothesize that a water molecule bound relatively stable could compensate for the missing side chain interactions, alternatively the binding site is changed along with adjustment in the angle between the lobes. This resistance mechanism is not unprecedented, since better fit for ATP has been reported in non-small-cell lung cancer patients treated with tyrosine kinase inhibitors. Variations affecting the gatekeeper threonine residue in the EGFR were identified as the underlying mechanism [9].
The non-covalent BTK inhibitors RN486 and CGI-1746 are highly selective for BTK in in vitro and in vivo models, implying a potential for treatment of patients with C481S resistant variations [36,40,41,58]. Fenebrutinib demonstrated clinical activity in a phase 1 study and inhibitory capacity of C481S variant in preclinical data [37,38]. Two other non-covalent BTK inhibitors, vecabrutinib (SNS-062), ARQ 531 and LOXO-305 were also shown to be effective against the C481S variant, as recently reviewed [59]. Our results for C481S and C481T variants confirm that all the three non-covalent BTK inhibitors could be further examined as treatment for ibrutinib-resistant patients. Only RN486 showed inhibitory capacity at a clinically relevant concentration against T474M/C481S, T474I/C481S, and T474M/C481T, and may provide the most potent treatment option. Author contributions CIES perceived and conceptualized the project. HYE and YS performed inhibition experiments with covalent BTK inhibitors. HYE performed inhibition experiments with non-covalent BTK inhibitors. QW, YS, LZ and DKM performed selected experiments where expression and activation of BTK variants were evaluated. GCPS and MV performed bioinformatic and structural analysis. HYE, AB, MV and CIES analyzed and interpreted results. RZ interpreted the results, assisted in obtaining structures of BTK inhibitors and edited the paper. LY assisted in obtaining BTK variants. MV and YS wrote selected parts of the paper. HYE, AB, and CIES wrote the paper.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons. org/licenses/by/4.0/. | 5,689.6 | 2021-02-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Synthesis and characterization of nanoparticle thin films of a-(PbSe)100−xCdx lead chalcogenides
We report the synthesis of amorphous (PbSe)100−xCdx (x = 5, 10, 15, and 20) nanoparticle thin films using thermal evaporation method under argon gas atmosphere. Thin films with a thickness of 20 nm have been deposited on glass substrates at room temperature under a continuous flow (50 sccm) of argon. X-ray diffraction patterns suggest the amorphous nature of these thin films. From the field emission scanning electron microscopy images, it is observed that these thin films contain quite spherical nanoparticles with an average diameter of approximately 20 nm. Raman spectra of these a-(PbSe)100−xCdx nanoparticles show a wavelength shift in the peak position as compared with earlier reported values on PbSe. This shift in peak position may be due to the addition of Cd in PbSe. The optical properties of these nanoparticles include the studies on photoluminescence and optical constants. On the basis of optical absorption measurements, a direct optical bandgap is observed, and the value of the bandgap decreases with the increase in metal (Cd) contents in PbSe. Both extinction coefficient (k) and refractive index (n) show an increasing trend with the increase in Cd concentration. On the basis of temperature dependence of direct current conductivity, the activation energy and pre-exponential factor of these thin films have been estimated. These calculated values of activation energy and pre-exponential factor suggest that the conduction is due to thermally assisted tunneling of the carriers.
Background
Metal chalcogenides, especially zinc, cadmium, and lead, have a lot of potential as efficient absorbers of electromagnetic radiation [1][2][3]. In recent years, there has been considerable interest in lead chalcogenides and their alloys due to their demanding applications as detectors of infrared radiation, photoresistors, lasers, solar cells, optoelectronic devices, thermoelectric devices, and more recently, as infrared emitters and solar control coatings [4][5][6]. A lot of work has also been focused on the fundamental issues of these materials possessing interesting physical properties including high refractive index [6][7][8].
There have been many theoretical and experimental studies on lead chalcogenides (PbS, PbSe, and PbTe) [9,10]. These chalcogenides are narrow, direct bandgap semiconductors (IV-VI groups) and crystallized at ambient condition in the cubic NaCl structure. They possess ten valence electrons instead of eight for common zinc blende and wurtzite III-V and II-VI compounds. They also exhibit some unusual physical properties, such as anomalous order of bandgaps, high carrier mobility, and high dielectric constants. All these unique properties of these semiconductors have inculcated great interest in the fundamental studies of these materials. Thin film semiconductor compounds, especially lead chalcogenide, and their alloys have drawn a lot of attention due to their technological importance and future prospects in various electronic and optoelectronic devices [11][12][13].
Nano-chalcogenides continue to attract the attention of researchers and engineers as a very large group of interesting solids in which unusual physical and chemical phenomena are revealed and as the materials that open new roads in science and technology. The nonlinear optical properties of these materials have attracted much attention because of their large optical nonlinearity and short response time. The size, shape, and surface characteristics have a strong influence on the physical properties of nanomaterials. Therefore, much attention has been paid in controlling these parameters to manipulate the physical properties of nanomaterials. Nanostructure formation has been explored for many kinds of materials, and this leads to an interesting topic also for lead chalcogenides. Lead chalcogenide possesses unique characteristics which are different from those in oxide and halide glasses, i.e., molecular structures and semiconductor properties. However, studies on lead chalcogenides at nanoscale are still at their early stages, and accordingly, overall features of these nanostructures have not been discovered.
Several workers reported the electrical and optical properties of PbSe in bulk form [14][15][16][17]. Many studies on PbSe films synthesized by chemical techniques are available in the literature [18][19][20][21][22]. There are also few reports on PbSe films and PbSe nanostructured thin films deposited by thermal evaporation technique [23][24][25][26]. Ma et al. [27] deposited polycrystalline PbSe thin films on Si substrates by thermal reduction method with carbon as the reducing agent. Kumar et al. [28] have studied the electrical, optical, and structural properties of PbSe 1−x Te x thin films prepared by vacuum evaporation technique. Lin et al. [29] reported the fabrication and characterization of IV-VI semiconductor Pb 1−x Sn x Se thin films on gold substrate by electrochemical atomic layer deposition method at room temperature. Pei et al. [30] studied the electrical and thermal transport properties of lead-based chalcogenides (PbTe, PbSe, and PbS) with special emphasis on the lattice and the bipolar thermal conductivity. Gad et al. [31] have studied the optical and photoconductive properties of Pb 0.9 Sn 0.1 Se nanostructured thin films deposited by thermal vacuum evaporation and pulse laser technique.
Recently, in a joint article from one of us [32], the structural, optical, and electrical properties of polycrystalline cadmium-doped lead chalcogenide (PbSe) thin films are reported. They also studied the optical bandgap, optical constants, and temperature dependence of direct current (dc) conductivity of these thin films in polycrystalline form. In the present work, we have synthesized the materials, i.e., (PbSe) 100−x Cd x in amorphous form using melt quenching technique and the prepared thin films containing nanoparticles using thermal evaporation method. Here, all the calculated experimental parameters are reported on the amorphous thin films containing nanoparticles of (PbSe) 100−x Cd x .
Methods
The source material (PbSe) 100−x Cd x with x = 5, 10, 15, and 20 were synthesized by direct reaction of high purity (99.999%) elemental Pb, Se, and Cd using melt quenching technique. The desired amounts of the constituent elements were weighed according to their atomic percentage and then sealed in quartz ampoules under a vacuum of 10 −6 Torr. The bulk samples of (PbSe) 100−x Cd x were prepared in steps. Initially, we have prepared PbSe in amorphous form, then doped with cadmium, and finally synthesized the (PbSe) 100−x Cd x in amorphous form using melt quenching. The sealed ampoules containing the samples PbSe and Cd were kept inside a programmable furnace, where the temperature was raised up to 923 K at the rate of 4 K/min and then maintained for 12 h. During the melt process, the ampoules were agitated frequently in order to intermix the constituents to ensure homogenization of the melt. The melt was then quenched rapidly in ice water.
Thin films of (PbSe) 100−x Cd x with a thickness of 20 nm were deposited on glass substrates at room temperature under argon pressure of 2 Torr using an Edward Coating Unit E-306 (Island Scientific, Ltd., Isle of Wight, England, UK). The thickness of the films was measured using a quartz crystal monitor (Edward model FTM 7). The earthed face of the crystal monitor was facing the source and was placed at the same height as the substrate. Evaporation was controlled using the same FTM 7 quartz crystal monitor.
The surface morphology of these thin films was studied by field emission scanning electron microscopy (FESEM). We have dispersed these samples in acetone solution, and a drop of solution is dispersed on carbon tape. The morphology of these dispersed particles was also studied. This suggested that the dispersed nanoparticles are aggregated with the average diameter of 20 nm. The X-ray diffraction (XRD) patterns of (PbSe) 100−x Cd x chalcogenide thin films were recorded using an X-ray diffractometer (Ultima-IV, Rigaku Corporation, Tokyo, Japan). The copper target (Cu-Kα, λ = 1.5406 Å) was used as a source of X-rays. These measurements are undertaken at a scan speed of 2°/min for the scanning angle ranging from 10°to 70°. Thin films composed of nanoparticles were used for measuring optical and electrical parameters. For optical studies, we recorded the Raman spectra, photoluminescence, optical absorption, reflection, and transmission of these thin films containing nanoparticles. Optical absorption and reflection of these thin films were measured by UV-vis spectrophotometer (UV-1620PC, Shimadzu Corporation, Nakagyo-ku, Kyoto, Japan). Raman spectrum is recorded by a Raman spectrophotometer (DXR, Thermo Fisher Scientific, Waltham, MA, USA), and photoluminescence had been measured by a spectro-fluorophotometer (RF-5301PC, Shimadzu). To study the electrical transport properties, dc conductivity of these thin films was measured as a function of temperature. The resistance of these nanoparticle thin films was measured for a temperature range of 293 to 473 K. To measure the resistance, two silver thick electrodes were pasted on these thin films using silver paste. All these measurements were performed in a specially designed I-V measurement setup (4200 Keithley, Keithley Instruments Inc., Cleveland, OH, USA), which was evacuated to a vacuum of 10 −6 Torr using a turbo molecular pump. In this setup, thin film was mounted on the sample holder with a small heater fitted below, and the temperature dependence of dc conductivity was studied.
Results and discussion
The morphological studies of these thin films show the presence of high yield of nanoparticles on the surface (Figure 1a). To understand the shape and size of these nanoparticles, we have further undertaken the morphological studies of the dispersed solution of these nanoparticles. Our studies suggest that these nanoparticles are aggregated with an average size of approximately 20 nm, and the particles are quite spherical ( Figure 1b). Figure 2 presents the XRD pattern of these nanoparticle thin films. The XRD spectra do not show any significant peak for the thin films of all the studied alloy composition, thereby suggesting the amorphous nature of these nanoparticles synthesized in this study. Raman spectra of (PbSe) 100−x Cd x nanoparticles for different concentrations of cadmium are shown in Figure 3. Several Raman bands are observed at 116, 131, 162, 218, 248, 289, 383, and 822 cm −1 . The weak peak observed at 116 cm −1 probably originates from the surface phonon (SP) mode, which is close to the reported value of 125 cm −1 for the SP mode in the case of PbSe nanoparticles [33]. The peak at around 131 cm −1 is assigned to the lattice mode vibration. It is an elementary transition, and the energy of this lattice phonon is 16.2 MeV. Murali et al. [33] observed a Raman peak at 135 cm −1 for the PbSe thin films. It is designated as lattice phonon (LO) mode. Similarly, the peaks observed at 162, 218, and 248 cm −1 may be attributed to 2LO(X), LO(L) + LA(L) and 2LO(A) vibration bands, respectively [34]. The peak observed at around 289 cm −1 is closer to the reported value of 279 cm −1 , which is to be associated with two phonon scattering (2LO) [35]. The high-frequency peak that appeared at 822 cm −1 is in accordance with the polar theory, which is close to the reported value of 800 cm −1 for PbSe films possibly corresponding to the ground state energy of the polar on the study of Appel [36]. It is observed from the Raman studies that this alloy also contains some phases of CdSe, and a peak at 383 cm −1 has been observed. This peak is near the reported value of 410 cm −1 , corresponding to the CdSe LO phonon mode [37,38]. Here, it is clear that all the observed Raman peaks show a wavelength shift on adding Cd to the PbSe system. In the case of the present system of (PbSe) 100−x Cd x nanoparticles, this shift in wavelength on low as well as on high sides may be associated with the shape of dispersion of LO phonon with a maximum wavelength at the zone center, which decreases as the phonon vector moves toward the zone edges. It is also suggested that the optical phonon line will also get broadened on reducing the size to nanoscale dimensions. This broadening may also originate from the disorder present in these nanoparticles.
The room-temperature photoluminescence (PL) spectra of these thin films of a-(PbSe) 100−x Cd x nanoparticles as a function of incident wavelength is presented in Figure 4. The spectrum shows the emission peak under PL excitation wavelength at 300 nm within the range of 300 to 600 nm. We have observed the emission peak at 360 and 380 nm and a broad peak at 425 nm for a-(PbSe) 100−x Cd x nanoparticles. These peaks show a shift to the lower wavelength side as the metal (Cd) concentration increases. It is suggested that this shift in the emission peaks toward the lower wavelength side may be attributed to the narrowing of the bandgap of a-(PbSe) 100−x Cd x nanoparticles with the increase in cadmium concentration. This shows clearly an agreement with our results on the variation of optical bandgap with metal (Cd) content, which decreases with the increase in Cd content. It is also observed that these peaks show a broad full width at half maximum, which suggests the effect of size reduction to nanoscale in the present samples. Arivazhagan et al. [39] studied the effect of thickness on the vacuum-deposited PbSe thin film. They reported that the emission peak centered at 380, 386, 388, and 405 nm for the films of thickness 50, 100, 150, and 200 nm, respectively. This suggests that the peak shows a blueshift with the decreasing film thickness. In our case, we have deposited the films of 20-nm thickness. Therefore, the peak observed at 360 nm shows a further blueshift due to the decrease in film thickness (20 nm) as compared with that of the reported results of 50-nm-thick PbSe films. A new peak originating at 380 nm may be due to the addition of Cd to PbSe. These peaks show a blueshift with the increase in Cd content. Several workers [40] showed an emission peak at 420 nm under the PL excitation at 300 nm for nanocrystalline PbSe. In our case, we have also observed the emission peak at 425 nm for the thin films of a-(PbSe) 100−x Cd x nanoparticles, which shows a slight red shift as compared with that of the reported results. This may be due to the disorder (amorphous nature) present in the films. This peak also shows a slight blueshift with the increase in Cd content. Therefore, the peak observed at 425 nm agrees well with that of the reported results [40]. The understanding of optical and electrical processes in lead chalcogenide materials in nanoscale is of great interest for both fundamental and technological points of view. In recent years, owing to their very interesting physical properties, this particular material has raised a considerable deal of research interest followed by technological applications in the field of micro/optoelectronics. Significant research efforts have been focused to the study of the optical and electrical properties of this compound in thin film formation because the optimization of device performance requires a wellestablished knowledge of these properties of PbSe and metal-doped PbSe thin films. Here, we have studied the optical absorption, reflection, and transmission of amorphous thin films of (PbSe) 100−x Cd x nanoparticles as a function of the incident wavelength in the range of 400 to 1-200 nm.
The optical absorption studies of materials provide a simple approach to understand the band structure and energy gap of nonmetallic materials. Normally, the absorption coefficient is measured in the high and intermediate absorption regions to study the optical properties of materials. It is one of the most important means of determining the band structures of semiconductors. On the basis of measured optical density, we use the following relation to estimate the values of the absorption coefficient [4]: where OD is the optical density measured at a given layer thickness (t).
On the basis of the calculated values of absorption coefficient, we have observed that the value of absorption coefficient increases with the increase in photon energy for all the studied thin films of a-(PbSe) 100−x Cd x nanoparticles. During the absorption process, a photon of known energy excites an electron from a lower to a higher energy state, corresponding to an absorption edge. In the case of chalcogenides, we observe a typical absorption edge, which can be broadly attributed to one of the three processes: (1) residual below-gap absorption (2) Urbach tails, and (3) interband absorption. Highly reproducible optical edges are being observed in chalcogenide glasses. These edges in chalcogenides are relatively insensitive to the preparation conditions, and only the observable absorption [41] with a gap under equilibrium conditions accounts for the first process. A different type of optical absorption edge is observed in amorphous materials, and absorption coefficient increases exponentially with the photon energy near the energy gap. A similar behavior has also been observed in other chalcogenides [42]. This optical absorption edge is known as the Urbach edge and is given as follows: where A is a constant of the order of unity, ν is the frequency of the incident beam (ω = 2πν), ν 0 is the constant corresponding to the lowest excitonic frequency, k B is the Boltzmann constant, and T is the absolute temperature. The calculated values of the absorption coefficient for thin films of a-(PbSe) 100−x Cd x nanoparticles are of the order of approximately 10 5 cm −1 , which is consistent with the reported results [43,44]. The calculated values of absorption coefficient (α) are given in Table 1. It is observed that α shows an overall increasing trend with the increase in the metal (Cd) concentration. It is suggested that bond breaking and bond rearrangement may take place when there is increasing cadmium concentration, which results in the change in local structure of these lead chalcogenide nanoparticles. This includes subtle effects such as shifts in the absorption edge, and more substantial atomic and molecular reconfiguration which is associated with changes in the absorption coefficient and absorption edge shift.
In the case of amorphous semiconductors, the fundamental absorption edge follows an exponential law. Above the exponential tail, the absorption coefficient obeys the following equation [4]: where B is a constant, E g is the optical bandgap, and m is a parameter that depends on both the type of transition (direct or indirect) and the profile of the electron density in the valence and conduction bands. The values of m can be assumed to be 1/2, 3/2, 2, and 3, depending on the nature of electronic transition responsible for the absorption: m = 1/2 for allowed direct transition, m = 3/ 2 for forbidden direct transition, m = 2 for allowed indirect transition, and m = 3 for forbidden indirect transition.
The present systems of a-(PbSe) 100−x Cd x obey the role of direct transition, and the relation between the optical gap, absorption coefficient α, and the energy (hν) of the incident photon is given as follows: The variations of (αhν) 2 with photon energy (hν) for a-(PbSe) 100−x Cd x nanoparticle films are shown in Figure 5. Using this figure, the intercept on the x-axis gives the value of direct optical bandgap E g , and the calculated values of E g for a-(PbSe) 100−x Cd x nanoparticles are given in Table 1. It is clear from the table that E g decreases with the increase in Cd concentration in this system of nanoparticles. This decrease in optical bandgap may be explained on the basis of 'density of state model' proposed by Mott and Davis [45]. According to this model, the width of the localized states near the mobility edges depends on the degree of disorder and defects present in the amorphous structure. In particular, it is known that unsaturated bonds together with some saturated bonds are produced as the result of an insufficient number of atoms deposited in the amorphous film [46]. The unsaturated bonds are responsible for the formation of some defects in the films, producing localized states in the amorphous solids. The presence of high concentration of localized states in the band structure is responsible for the decrease in optical bandgap on increasing dopant (Cd) concentration in these amorphous films of (PbSe) 100−x Cd x nanoparticles. This decrease in optical bandgap may also be due to the shift in the Fermi level whose position is determined by the distribution of electrons over the localized states [47].
The values of refractive index (n) and extinction coefficient (k) have been calculated using the theory of reflectivity of light. According to this theory, the reflectance of light from a thin film can be expressed in term of the Fresnel's coefficient. The reflectivity [48][49][50] on an interface is given as follows: where the value of k has been calculated by using the following formula: with λ is the wavelength. Figures 6 and 7 show the spectral dependence of the extinction coefficient and refractive index for a-(PbSe) 100−x Cd x thin films. It is observed that the values of these optical constants (n and k) increases with the increase in photon energy. A similar trend has also been observed for thin films of other various amorphous semiconductors [51,52]. The values of n and k for different concentrations of Cd are given in Table 1. It is evident from the table that, overall, the value of these optical constants increases with the increase in dopant concentration. This can be understood on the basis of density of defect states. It is well known that chalcogenide thin films contain a high concentration of unsaturated bonds or defects. These defects are responsible for the presence of localized states in the amorphous bandgap [53]. In our case, the addition of Cd in the PbSe alloy results in the increased number of unsaturated defects. Due to this increase in the number of unsaturated defects, the density of localized states in the band structure increases, which consequently leads to the increase in values of refractive index and extinction coefficient with the addition of metal (Cd) content. For the study of electrical transport in amorphous semiconductors, especially chalcogenide glasses, dc conductivity is one of the important parameters. The dc conductivity of chalcogenide glasses depends on the combination of starting components, synthesis conditions, rate of melt annealing, purity of starting components, thermal treatment, and on some other important factors. The electrical conduction process in amorphous semiconductors is generally governed by the three mechanisms namely (1) bands near the Fermi level (E F ). To explain the conduction mechanism in amorphous semiconductors, studies on temperature dependence of conductivity is reported by various workers [54][55][56][57]. It is understood that conduction in chalcogenide glasses is intrinsic [58,59] and that the Fermi level is close to the midway of the energy gap. Intrinsic conduction of amorphous semiconductors is determined by carrier hopping from the states close to the edge of the valence band to localized states near the Fermi level or from the state near the Fermi level to the conduction band. The suitable conduction mechanism is decided depending on the predominant process. In the case of chalcogenide glasses, the Fermi level is somewhat shifted from the middle of the energy gap toward the valence band [60].
In the present work, we have also studied the temperature dependence of dc conductivity of thin films of a-(PbSe) 100−x Cd x nanoparticles over the temperature range of 297 to 400 K. From the variations of dc conductivity with temperature, it is found that the experimental data for the entire temperature range is fitted well with the thermally activated process model. To elucidate the conduction mechanism in the present sample of a-(PbSe) 100−x Cd x nanoparticles, we have applied the thermally activated process for the temperature region of 297 to 400 K.
The plot of ln σ dc versus 1000/T for the temperature range of 297 to 400 K is presented in Figure 8. The graph is a straight line, indicating that the conduction in this system is through a thermally activated process. The conductivity is, therefore, expressed by the usual relation given as follows [4]: where σ 0 represents the pre-exponential factor, and ΔE c is the dc activation energy which is calculated from the slope of ln σ dc versus 1000/T plot. Using the slope and intercept of Figure 8, we have calculated the value of ΔE c and σ 0 , respectively. The Table 1. On the basis of these calculated values, it may be suggested that the conduction is due to the thermally assisted tunneling of charge carriers in the extended states for the temperature range of 297 to 400 K of our sample a-(PbSe) 100−x Cd x nanoparticles. However, it is important to mention that activation energy alone does not provide any information as to whether conduction takes place in the extended states above the mobility edge or by hopping in the localized states. This is due to the fact that both of these conduction mechanisms may take place simultaneously. The activation energy in the former case represents the energy difference between mobility edge and the Fermi level, E c − E F or E F − E V , and in the latter case, it represents the sum of the energy separation between the occupied localized states and the separation between the Fermi level and the mobility edge. It is evident from Table 1 that dc conductivity increases as the concentration of Cd increases, whereas the value of activation energy decreases with the increase in Cd contents in our lead chalcogenide nanoparticles. An increase in dc conductivity with a corresponding decrease in activation energy is found to be associated with a shift of the Fermi level for the impurity-doped chalcogenide [46,61]. It also shows that the Fermi level changes after the incorporation of Cd. However, it has also been pointed out that the increase in conductivity could be caused by the increase in the portion of hopping conduction through defect states associated with the impurity atoms [62].
A clear distinction between these two conduction mechanisms can be made on the basis of the pre-exponential factor value. For conduction in extended states, the value of σ 0 reported for a-Se and other Se alloys in thin films is of the order 10 4 Ω −1 cm −1 [62]. In the present sample of a-(PbSe) 100−x Cd x nanoparticles, the value of σ 0 is of the order 10 7 Ω −1 cm −1 . Therefore, extended state conduction is most likely to take place. An overall decrease in the value of σ 0 is observed with the increase in Cd contents in the PbSe system, which may be explained using the shift of Fermi level on adding Cd impurity. Therefore, the decrease in the value of σ 0 may be due to the change in Fermi level on adding Cd in the PbSe System.
Conclusions
Thin films of amorphous (PbSe) 100−x Cd x nanoparticles have been synthesized using thermal evaporation technique. The average diameter of these nanoparticles is approximately 20 nm. Raman spectra of these a-(PbSe) 100−x Cd x nanoparticles revealed the presence of PbSe phases in as-synthesized thin films, and the observed wavelength shift in the peak position as compared with that of reported values on PbSe may be due to the addition of Cd impurity. PL spectra suggest that the peaks show a shift to the lower wavelength side as the metal (Cd) concentration increases, which may be attributed to the narrowing of the bandgap of a-(PbSe) 100−x Cd x nanoparticles with the increase in cadmium concentration. A direct optical bandgap is observed, which decreases on increasing cadmium concentration. This may also be due to the increase in the density of defect states, which results in the extension of tailing of bands. The value of refraction index and extinction coefficient increases with increasing photon energy for all samples of a-(PbSe) 100−x Cd x . From temperature dependence of dc conductivity measurements, it may be concluded that conduction is taking place through the thermally activated process over the entire range of investigation. The preexponential factor shows an overall decreasing trend with increasing Cd content. The decrease in σ 0 may be due to the change in the Fermi level on the addition of Cd in the lead chalcogenide system. Finally, the suitability of these nanoparticles of lead chalcogenides for various applications especially in solar cells can be understood on the basis of these properties. | 6,529.6 | 2013-04-02T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Mixed Convection of Hybrid Nanofluid in an Inclined Enclosure with a Circular Center Heater under Inclined Magnetic Field
: The hybrid nanofluids have efficient thermal networking due to the trade-off between the pros and cons of the more than one type of suspension. In the current study, water-based hybrid nanofluid is used to investigate mixed convection in a squared enclosure heated with a circular center heater. The cavity is placed inclined under the uniform inclined magnetic field. The squared cavity comprises of two adiabatic vertical walls and two cold horizontal walls. The governing equations are normalized using a suitable set of variables and are solved with the finite element method. A comparison is provided with previously reported results at limiting case. The grid independence is examined for the Nusselt number at the central heater. The analysis reveals the effective role of the concentration of hybrid nanofluid particles in enhancing the heat spread. The results indicate that adding 2% concentration of Ag-MgO hybrid nanoparticles causes an 18.3% uprise in the Nusselt number at the central heater. The heat transfer rate enhances for increasing Hartmann number between 0 and 10 but decreases over 10. For better heat transfer augmentation, a heater with a smaller radius is recommended for the free convection. In contrast, a heater with a larger radius serves the purpose in case of forced convection.
Introduction
Due to low thermal conductivity, the traditional heat transfer fluids like water or kerosene oil are not suitable in microelectronics and heat exchangers. It motivates researchers to create innovative fluids with significantly higher conductivities to enhance the thermal performance. The thermal conductivities of these fluids are improved by utilizing metallic or non-metallic nanoparticles in conventional fluids. There are several engineering applications where mixed convection in lid-driven cavities plays a vital role, e.g., microelectronics, chemical and drying processes, and lubrication machinery. The thermal performance is enhanced by utilizing nanofluids in a porous medium.
Mixed convection problems in enclosures and in bounded domains with moving lids can be found in numerous engineering applications, like furnaces, chemical processing equipment, lubrication technologies, microelectronics, and drying process. Several studies related to mixed convection in various cavities are available in open literature. Oztop and Dagtekin [1] investigated the mixed convection problem numerically in a square cavity and noticed powerful effects of the Richardson number on the heat transfer. Ismael, et al. [2] considered square cavity and examined the effects of volume fraction of nanoparticles on the thermal performance. Sebdani, et al. [3] examined the effects of variable thermal conductivity and viscosity on the mixed convection. They found that the rate of heat transfer depends upon pertinent parameters. Basak, et al. [4] considered different thermal boundary conditions and analyzed heat transfer to explain the variation in the Nusselt number. Abdelkhalek [5] used the perturbation method and demonstrated the crucial role of governing parameters in explaining thermal performance of the cavity.
Hasan, et al. [6] analyzed thermal performance of water-based nanofluid in the square cavity and proved the higher heat transfer rates for all conditions. Mansour, et al. [7] used different nanofluids and explored an increase in the average Nusselt number with an increase in the solid volume fraction of nanoparticles. They also reported a decrease in the average Nusselt number with an increase in the heater length. Basak, et al. [8], Alsabery, et al. [9], Li, et al. [10], and Alsabery, et al. [11] used different thermal boundary conditions and found an upsurge in the intensity of vortices with rising Grashof number. The local Nusselt number reveals non-monotonic features on the heated surface for higher Darcy and Prandtl numbers. The robust combination between flow and temperature fields has been discovered at elevated Pêclet numbers. The average heat transfer rate on the heated walls were found to be a vital function of Grashof numbers.
Cheng [12] investigated mixed convection for governing parameters. The effects of various flow parameters on the heat transfer were analyzed to develop correlations for the average Nusselt numbers in the laminar flow regimes. Later, Cheng and Liu [13] discovered that both the direction of temperature gradient and Richardson number influence the thermal performance of the cavity. Mehmood, et al. [14] and Mehmood, et al. [15] used alumina-water nanofluid and examined the effects of nonlinear thermal radiation as well as inclined magnetic field. They employed different models to analyze the effects of pertinent parameters on the thermal performance. It is noticed that the controlling parameters help in increasing the heat transfer.
Moolya and Satheesh [16] analyzed the combined effects of heat and mass transfer and noticed a rise in the Nusselt and Sherwood numbers with an increasing inclination angle. Behzadi, et al. [17] examined the impacts of the porous medium on the thermal performance in a ventilated square cavity. Using different thermal boundary conditions, they disclosed a decreasing trend of Nusselt number with a rise in the Darcy number and porous particle diameter. Garoosi, et al. [18], Garoosi and Talebi [19], and Garoosi, et al. [20] conducted numerical studies with different heating arrangements and found enhancement in the thermal performance with diminishing the nanoparticle diameter and increasing the number of the heating elements up to a certain Richardson number.
Sheremet, et al. [21] and Sheremet and Pop [22] employed Buongiorno nanofluid model and Darcy approach to study the features of water-based nanofluids. They established a rise in heat transfer with an increase in dimensionless numbers. Talebi, et al. [23] noticed that the percentage increase in the solid volume fraction of nanoparticles changes the stream pattern and thermal performance significantly at higher Rayleigh numbers. Kefayati, et al. [24] observed a rise in the heat transfer with an increase in the mixed convection parameter and reduction with increasing magnetic field. Shahi, et al. [25] discovered an improvement in the average heat transfer with increasing solid volume fraction and reduction in the average bulk temperature.
Kalteh, et al. [26] dealt with mixed convection of a water-based nanofluid and realized a substantial rise in heat transfer in the presence of the nanoparticles. Selimefendigil and Öztop [27] employed a fuzzy model to a CFD code and noticed that the fin enhances the heat transfer rate. However, thermal performance is affected by the length and inclination angle of the fin. Alsabery, et al. [28] noticed an adverse effect of nanoparticles on the heat transfer rate for larger values of mixed convection parameter and smaller Reynolds numbers. However, the nanofluid approach confirms an apparent escalation of heat transfer. Ramakrishna, et al. [29] and Ramakrishna, et al. [30] examined the impact of several boundary conditions and pertinent parameters on heat transfer rates and established that the average heat transfer rate rises with Prandtl number.
Ismael, et al. [31] imposed partial slip condition on the cavity walls and noted a decrease in heat transfer with the slip parameter. Shirvan, et al. [32] and Sourtiji, et al. [33] found a decrease in the Nusselt number with increasing magnetic field. Burgos, et al. [34] reported insignificant influence of the buoyancy force for lower Richardson numbers. Çolak, et al. [35] used Open FOAM's software and noticed an augmentation in the Nusselt number based on chamfer radius. Further details related to current topic can be found in [36][37][38][39][40][41][42].
In the current investigation, we intend to target the combined effects of resistive magnetic force applied at an angle of attack while the square enclosure is positioned at a different angle. The square enclosure contains a circular heater and is filled with a mixture of hybrid Ag-MgO nanoparticles and water. Such effects have never been considered before despite having certain industrial applications, like, in various heat exchangers this design is used to expedite the convection. Generally, the cavity position and the magnetic field angle of inclination are not horizontal always and change with position and design, thus have significantly influence on the flow heat transfer behavior. Therefore, we think this problem should be addressed and explore to understand the square cavity problem from these aspects. The study is devoted to examining the influence of pertinent parameters, likes, Rayleigh number, Hartmann number, nanoparticles concentration, and angles of inclination on the mixed convection of hybrid nanofluids. Various graph and tables are plotted to see the variational trends and some important recommendations are made in the conclusion part.
Formulation of the Problem
A square enclosure of dimension L is assumed to be filled with a suspension of Ag-MgO hybrid nanofluid, thoroughly dispersed in the water-based fluid. The enclosure is placed inclined at an angle α and the upper lid is moving towards right with a constant velocity U. The top and bottom boundaries are placed at a constant cooled temperature T c while the left and right vertical boundaries are completely insulated. A circular heater of radius r, situated at the center of the enclosure, is heating the enclosure with a uniform temperature T h . The base fluid and the nanoparticles are assumed to be in thermal equilibrium. The temperature differences inside the cavity are assumed to be negligible, and except density thermo-physical properties are supposed to be uniform. The Boussinesq approximation is used to model the density. Moreover, the enclosure is held under uniform Lorentz force of potency B 0 in the direction of angle β. In Cartesian coordinate system the geometry of considered problem is shown in Figure 1.
The boundary data is provided in Table 1. Table 1. The boundary data (See Figure 1 for detail).
At the Left and Right Walls
The hybrid nanofluid comprises of blend of water and Ag-MgO nanoparticles. Table 2 describes the thermo-physical properties of base-fluid and nanoparticles. At the reference temperature from 20 • C to 30 • C, the effective density (ρ hnf ) and the thermal expansion coefficient (ρβ) hnf for the hybrid nanofluid are presented by Tiwari and Das [43,44]: and the effective heat capacity assuming thermal equilibrium is where φ hnf (= φ Ag + φ MgO ) is the volume fraction of hybrid nanoparticles. For effective electrical conductivity, the modified Maxwell model for hybrid nanofluids used by Ghalambaz, Sabour, Pop, and Wen [45] is assumed here and is stated as Moreover, for thermal conductivity and dynamic viscosity, we use the model of curve fitting of experimental data by Hemmat Esfe, et al. [46]: Invoking the following dimensionless quantities: Employing Equation (11), the normalized form of governing Equations (1)-(4) is given by: ∂u ∂x where the dimensionless coefficients C 1 , C 2 , C 3, and C 4 are constants associated with the characteristics of hybrid nanofluid and are described below as: The local Nusselt number in normalized form at the heater and the upper wall is given by: Hence the average Nusselt number take the following forms:
Numerical Solution and Validation
A numerical scheme of finite element method is applied to solve the set of nonlinear partial differential Equations (12)- (16). For this purpose, we use Newton's linearization method to convert the nonlinear partial differential equations PDEs into linear equations. The initial seed is obtained by solving the corresponding Stokes problem. The initial seed is then employed to evaluate the linearized PDEs. The resulting linearized PDEs are solved using the NDSolve utility package of Mathematica 12. The solutions obtained are fed back into the PDEs, and the new linearized PDE is solved repeatedly until the solutions converge.
The discretization of the computational domain is created by dividing it into a finite number of non-uniform triangular elements. The smaller element size offers more accuracy but costs more computational time. Therefore, smaller element size is used near the boundaries of the computational domain so that the velocity and temperature gradients are effectively captured. However, the rest of the domain is covered with stretched elements to save computational time (refer to Figure 2 for mesh generation). Grid-independent is an essential criterion to measure the convergence of numerical results. This criterion is illustrated in Table 3 where the numerical values of average Nu are shown for looser to denser grid by increasing the number of elements. The table shows that no correction is needed up to the desired accuracy of two decimal places as the number of elements increases from 5800 elements. To save computational time, the rest of the calculations are done by fixing the mesh size at 5800 elements. The present results are also verified by giving a comparison with a previous study by Moukalled and Acharya [47] on simple viscous fluid. The results of the current research for the limiting case are obtained by assuming the nanofluid concentration φ hnf to be zero and considering the half-width channel with free convection. The comparison of Nu avg is provided in Table 4. The table shows well concurrence between both the studies, which validates the results reported in the present investigation and hence develops the confidence in the results presented in the coming section.
Results and Discussion
In this parametric study we considered the mixed convection of Ag-MgO hybrid nanofluid due to the moving lid of an inclined enclosure with the inside circular heater under inclined magnetic field. For this purpose, the radius of circle r is supposed to be fixed at 0.15, and the fluid Prandtl number Pr is kept fixed at 6.2. In further analysis, the main governing parameters, like Richardson and Hartmann numbers, angles of inclination (α and β), and the volume fraction of nanoparticles φ hnf varies to show the discriminating characteristics of the flow phenomenon. The Richardson number (Ri = Gr/Re 2 ) is chosen to have values of 0.01, 1 and 100 by fixing Re at 100 and varying Gr as 10 2 , 10 4 and 10 6 so that the effect of forced, mixed and free convection could be observed. The Hartmann number Ha fluctuates from 0-100, the angles of inclination (α and β) vary from 0 • to 120 • , and φ hnf takes the values between 0% and 2%.
The effects of Richardson number on streamlines at different center heater radii are shown in Figure 3 while the cavity is posed at a 45 • angle of inclination. The stream plots show that for forced convection case (Ri = 0.01 and r = 0.15), the enclosure is occupied by clockwise circular stream cells. At the right and left sides of the circular heater, two weak counterclockwise boluses are formed, which disappear as the radius of the heater increases to 0.2 and to 0.3. In case of mixed convection (Ri = 1), the reverse flow starts augmenting from the central part of the channel while the upper and lower parts still show the clockwise rotation in the streamlines. This reverse flow becomes weak as the radius r shifts to 0.2 and completely disappears for r = 0.3. The free convection case (Ri = 100) demonstrates the interesting feature where the counterclockwise rotation dominates the upper part of enclosure. Since the cavity is held inclined at an angle of 45 • , therefore, the flow direction is toward the right side from the center heater, which is obvious conduct. The same behavior is seen for r = 0.2 and 0.3. Figure 4 shows the heat lines distribution at different radii of the center heater and at various Ri values. For forced convection case, the contour lines are dispensed around the heater with heat trend toward the left side. A similar pattern of the isotherms has been noticed in the case of mixed convection cases with more shifts toward the right. However, for free convection, a clear increasing trend is noticed in temperature heading toward the right side. The thermal plum is also appeared directed towards the right wall showing the dominancy of buoyancy forces. As the radius of the center heater increases, the spread of temperature clearly rises all around in the enclosure. Figure 5 indicates that Nu avg , in case of free convection, is higher than Nu avg of forced convection. Moreover, it is found that for forced convection cases, an expansion in circle radius causes an increase in Nu avg at the central heater. However, for free convection increasing circle radius results in decreasing the Nusselt number at the central heater. Therefore, for better heat transfer augmentation heater with a smaller radius is recommended for free convection. On the other hand, for the forced convection, a heater with a larger radius serves the purpose. The Nusselt number at the upper horizontal boundary rises with an expansion in circle radius.
The streamlines and isotherm pattern are also plotted at varying α values in Figure 6, assuming mixed convection (Ri = 1). For non-inclined cavity α = 0 • , the whole enclosure exhibits clockwise rotation except the lower-left corner where a weak counterclockwise recirculation cell is formed. For α = 90 • , a balanced flow is observed on both sides of the center heater. However, at α = 120 • , the counterclockwise recirculation becomes weak in the left part and strengthen in the right part. The contour plot of isotherms (Figure 6b) demonstrates that for the non-inclined cavity, the heat lines are stronger on the left side, and as the inclination angle rises, the temperature becomes higher on the right side leaving no significant impact in the top and bottom regions. In fact, the convection mechanism is always predominant against the gravity direction due to the buoyancy force. Another perspective of the convection phenomenon is to fix the parameter Gr and vary Re to observes the impact of Ri. In the present study, this viewpoint is illustrated in Figure 7, where the 3-dimensional graphs of velocity and temperature profiles are plotted by fixing Gr = 10 4 and varying Re from 10 to 5000, which covers the free to forced convection cases (Ri = 100, 1 and 0.01). The velocity profile u in the x-direction is plotted in Figure 7a at α = 45 • . For free convection case (Ri = 100) the velocity profile satisfies the boundary condition at the upper wall and then shows a negative trend immediately below the moving plate and then attains a peak near the center heater, which indicates a counterclockwise circulation in the upper part. In the lower part, near the center heater, the velocity is positive, and near the lower surface, the velocity is negative, which indicates the formation counterclockwise circular cell. For the mixed convection case (Ri = 1), there is clear evidence of the formation of clockwise circular cells in the upper part; however, the velocity is negligible in the lower part. This is evidence of the strengthening of shear forces. For forced convection case (Ri = 0.0004), the excavation in the profile shifts to the lower side of the center heater, which indicates the predomination of the clockwise circular cell. Figure 7b presents the velocity profile v in y-direction at α = 45 • . The positive values indicate the flow in the upward direction and negative values signify the downward flow. For free convection case (Ri = 100), an upward flow is noticed near the center heater, top-left, and lower-left sides of the enclosure, which is due to stronger buoyancy force. For mixed convection cases (Ri = 1), one can notice the formation of excavation in the top-right corner. The excavation becomes deeper as forced convection becomes stronger (Ri = 0.0004), which indicates a strong downward flow. In contrast, the u velocity profile is the part of the clockwise circular cell that appeared in the last part of Figure 7a. The temperature profile in Figure 7c shows the high-temperature gradient near the central heater. However, for free convection, the heat dissipates in the right side of enclosure, which is because of strong buoyancy force at elevated enclosure from the right (α = 45 • ). For the forced convection, the temperature gradient is stronger as compared to the free and mixed convection cases, and the heat effect accumulates around the heater and slightly dissipates in the upper part of the channel. Comparatively, the spread of heat is higher in the case of free convection as compared to the mixed and forced convection. The velocity stream and heatlines are also sketched for varying magnetic field in Figure 8 assuming mixed convection Ri = 1 and inclined enclosure at α = 45°. In Figure 8a, stream plots at varying values of Ha are shown. The nonexistence of magnetic field results in the clockwise rotational cell throughout the enclosure. However, a clockwise eddy is generated along with the center heater at the upper-right edge showing a higher flow rate. As the magnetic field is introduced with Ha = 50, the flow regime is divided into three parts: the upper part, around the heater, and the lower part. The upper and lower parts demonstrate the clockwise circular boluses. However, around the circular heater, two anticlockwise eddies are formed. Overall, the flowrate inside the enclosure is reduced after the induction of the magnetic field as expected. This behavior becomes more prominent by increasing the magnetic field intensity (Ha = 100) where the anticlockwise circular eddies grow with the higher flowrate suppressing the clockwise circular cells in the upper and lower parts. This conduct is quite expected since the role of the magnetic field is to squash the flow augmentation. The isotherms contour plots are shown in Figure 8b at α = 45°. In the absence of Lorentz force (Ha = 0), the primarily spread of heat is in the upright direction. Upon introducing the Lorentz force (Ha = 50), the heat lines propagated toward the right and left sides, and this conduct of temperature becomes stronger for higher Lorentz force (Ha = 100). Consequently, the heat transport decreases around the heater (see Table 5). This behavior of temperature profile is a consequence of the reverse flow generated around the heater (see Figure 8a), which raises the convection phenomenon around the heater and lessens heat transfer rate. Table 5 shows that Nuavg at The velocity stream and heatlines are also sketched for varying magnetic field in Figure 8 assuming mixed convection Ri = 1 and inclined enclosure at α = 45 • . In Figure 8a, stream plots at varying values of Ha are shown. The nonexistence of magnetic field results in the clockwise rotational cell throughout the enclosure. However, a clockwise eddy is generated along with the center heater at the upper-right edge showing a higher flow rate. As the magnetic field is introduced with Ha = 50, the flow regime is divided into three parts: the upper part, around the heater, and the lower part. The upper and lower parts demonstrate the clockwise circular boluses. However, around the circular heater, two anticlockwise eddies are formed. Overall, the flowrate inside the enclosure is reduced after the induction of the magnetic field as expected. This behavior becomes more prominent by increasing the magnetic field intensity (Ha = 100) where the anticlockwise circular eddies grow with the higher flowrate suppressing the clockwise circular cells in the upper and lower parts. This conduct is quite expected since the role of the magnetic field is to squash the flow augmentation. The isotherms contour plots are shown in Figure 8b at α = 45 • . In the absence of Lorentz force (Ha = 0), the primarily spread of heat is in the upright direction. Upon introducing the Lorentz force (Ha = 50), the heat lines propagated toward the right and left sides, and this conduct of temperature becomes stronger for higher Lorentz force (Ha = 100). Consequently, the heat transport decreases around the heater (see Table 5). This behavior of temperature profile is a consequence of the reverse flow generated around the heater (see Figure 8a), which raises the convection phenomenon around the heater and lessens heat transfer rate. Table 5 shows that Nu avg at the center heater and the top wall decreases as Ha increases.
The behavior of streamlines and isotherm at varying magnetic field direction is shown in Figure 9. Figure 9a shows the streamlines plot at different inclination angles of the magnetic field for mixed convection (Ri = 1) and horizontal cavity (α = 0 • ). For horizontal Lorentz force (β = 0 • ), the streamlines plot shows two clockwise circular cells and one counterclockwise eddy in the lower-left corner with negligible circulations. For vertical Lorentz force (β = 90 • ), only one clockwise eddy is formed near the top wall. At β = 120 • , the core upper cell slightly shifts towards the left, and a weak counterclockwise eddy is developed at the lower left side of enclosure. Overall, the flowrate is higher for the horizontal magnetic field as compared to the vertical magnetic field. The isotherms contours show that for horizontal Lorentz force, the temperature increases horizontally toward the left wall. For vertical Lorentz force, the temperature increases in the vertical direction, and the thermal plume appears toward the upper left corner. This thermal plume becomes slender for a magnetic field at an angle of 120 • which leads to a higher convection rate. Figure 10a demonstrates an imperative behavior of the heat transfer rate at the surface of the center heater by showing the effect of parameter φ hnf on Nu avg against Ri. The figure indicates that the convection rate reduces as Ri remains within 0.01 ≤ Ri ≤ 1. This shows that strong, forced convection increases the heat transfer rate. For Ri > 1, Nu avg increases rapidly, suggesting high heat transport in case of free convection. Moreover, the heat transfer rate significantly increases as the concentration φ hnf increases, especially for forced convection. However, no significant change has been noticed for the free convection case as φ hnf increases. A similar trend is noticed for Nu avg at the upper wall (see Figure 10b), though Nu avg at the upper wall is comparatively less than Nu avg at the circular heater. Such result is quite common in the study of mixed convection in cavities and reported by various authors (See for instance [9]). Table 4 depicts the percentage change in Nu avg as the percentage of the volume fraction of nanoparticles augments at Ri = 1. The table illustrates that by adding 1% of nanoparticles concentration results in an 8.4% increase in Nu avg at the center heater and a 9.3% increase in Nu avg at the top boundary. Introducing a 2% concentration of hybrid nanoparticles results in an increment of 18.3% and 21.6% in Nu avg at the center heater and top wall, respectively. The local Nusselt number at the upper wall is portrayed in Figure 11 for different values of volume fraction of hybrid nanoparticles for mixed convection case (Ri = 1) against the variable x. Nu local increases as φ hnf rises, and this rise becomes more significant at the center of the wall. As one moves toward the left side the wall Nu local increases, and it decreases towards the right. As the wall moves toward right with uniform velocity, the clockwise circulation cell formed near the upper wall resulting in high temperature-gradient at the left side of the cold-wall, and as the particles move towards the right side, the temperature gradient decreases eventually.
Conclusions
We discussed a mixed convection of Ag-MgO hybrid water-based nanofluid under uniform magnetic field in a lid-driven inclined enclosure. The enclosure is heated from inside at a constant temperature by a circular heater. The top and bottom boundaries are supposed to maintain the uniform lower temperature and the side vertical walls are insulated. The numerical solutions have been presented for governing equations. A parametric analysis is made for the parameters, like, Richardson number, the Hartmann number, the concentration of hybrid nanoparticles. The important concluding remarks of the study are summarized below in points:
•
The heat transport is greater in free convection as compared to the forced convection. • It is established that for the forced convection case, an increase in circle radius results in enhancing the heat transfer rate, and for the free convection case, increasing circle radius results in decreasing the heat transfer rate. • Overall, the circular flow rate is higher for the horizontal magnetic field in contrast to the vertical magnetic field. • Heat transport augments as the percentage of volume fraction of hybrid nanofluid particles improve in case of forced convection but makes no significance in free convection case.
•
Adding 2% concentration of hybrid nanofluid particles results in 18.3% increase in Nu avg at the center heater for mixed convection. | 6,624.8 | 2021-04-25T00:00:00.000 | [
"Physics"
] |
Tradeoffs Among Delay, Energy and Accuracy of Data Aggregation for Multi-View Multi-Robot Sensor Networks
© 2012 Li et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Tradeoffs Among Delay, Energy and Accuracy of Data Aggregation for Multi-View Multi-Robot Sensor Networks
Introduction
Due to the recent development in micro mechanics, electronics and wireless communication technologies, Wireless Sensor Network (WSN) has been a hot issue for many applications like monitoring, detecting, remote control, life saving etc. However, along with the applications in different area, the ways of deploying the sensor nodes are different. In some harsh environmental detection, sensor nodes are always dropt into the target area by aircraft which may lead some unnecessary troubles, for example some nodes are out of communication range, some nodes are broken, and lack of flexible because of its immobile characters et al.
The Multi-Robot Sensor Network (MRSN) which is comprised of large numbers of small, simple and inexpensive wireless robots can solve the problems mentioned above. In MRSN, besides perceived sensors, the robot also can be set up digital camera or voice recording equipment, even video camera according to the applications' requirement. Hence, it can detect more detailed information like pictures, video or sound etc.
In this section, at first we introduce the MRSN and its applications. The open problems and one of the effective methods, data aggregation is presented. At last we show a preview of this chapter.
Wireless multi-robot sensor networks
In wireless MRSN, from a viewpoint of sensor network communication, each robot senses data and transmits data to the adjacent lower node. To collect all data at the sink, data are sent by relay nodes in a multi-hop manner. However, due to the mobility of the robot,
Applications of multi-robot sensor networks
Due to its flexibility, operability, mobility and self-organization, the applications of MRSN has been increasing (Maxim & Gaurav, 2005), (Trigui S, et, al., 2012). Harsh environmental monitoring is the most popular application of MRSN, for example let wireless robots get into Amazon rainforests where it is very dangerous for human get inside or let them climb to Mount Everest where there is not enough Oxygen for human and covered by snow all over the mountain. In medical application, if the MRSN can help the nurses to do some simple task like checking body temperature and sending to a doctor, which would save much more labors in some countries those short of nurses. One of the most important utilizations of MRSN is that it can be used to detect nuclear radiation and to accomplish some other relevant tasks. The most recent example is the Fukushima nuclear leakage where if a MRSN was applied, it would have alleviated damage. Some other applications like outer space monitoring (space junk detection), industrial monitoring (quality control), disaster monitoring (forest fire detection), agriculture monitoring (soil moisture detection), traffic monitoring (intelligent transport system) etc have also much potential.
Open problems in multi-robot sensor networks
In MRSN, when an event occurs, multiple robots in the near area sense the event data and generate an abundance of sensed data; however, many of the data generated in the same area are highly redundant. Hence the transmission and relaying of all generated data caused a big waste of bandwidth and energy; it also causes data collision and congestion so that result in low efficiency of data gathering. On the other hand, similar to wireless sensor network, WRSN could not avoid the shortcoming of lack of continuous energy supplement.
One may say robot node can be equipped a large capacity battery, but its energy consumption is also large due to its big size, moving, detecting and transmitting etc. An energy harvesting algorithm (Eu et, al., 2010) is proposed for WSN. According to energy harvesting technique, a robot can absorbs solar energy from sunlight. However, how can the robot manage its task in the night? The vibrational energy get from the environment is too lees to trigger the robot. Therefore, saving energy is the most feasible way in WRSN.
For energy saving, decreasing the redundant data and sending a representative data of the detected area are the most considerable strategy. With a view of reducing the quantity of the transmitted data, the well-known scheme is data aggregation (Rajagopalan &Varshney, 2006). Since a sensor node in WSN waits for a period of time to collect extensive quantity of data to aggregate, data aggregation leads long transmission delay and low data accuracy. Some application, like medical and architectural utilization requires more accurate data while disaster relief requires receiving data as soon as possible. However, energy, delay and accuracy are trading off each other, one can not improve three of them at the same time. Hence, how to control the trade off of energy, delay and accuracy among different applications is the problem we will solve in our work.
Data aggregation in multi-robot sensor networks
We focus on data aggregation technology for collecting data in MRSN. Data aggregation (Rajagopalan & Varshney, 2006) is a process of aggregating the data from multiple robot sensors to eliminate redundant data and provide fused information to the base station. Considering from the point of data redundant, data aggregation can collect the most efficiency data. However, transmission delay and data accuracy are also important in many applications such as military application and architectural application. Hence trading off transmission delay, energy consumption and data accuracy is an important issue. There are several typical algorithms of data aggregations. PEGASIS (Lindsey & Raghavendra, 2001) is one of energy efficiency chain based data aggregation protocols that employs a greedy algorithm. The main idea of PEGASIS is forming a chain among the sensor nodes so that each node receives (or transmits) fused data from (to) the closest neighbors. The data gathered are sent from node to node, and all the sensor nodes take turns to be the leader for transmission to the Base Station. Data Funnelling (Petrović, et. al, 2003) is another scheme that sends a stream of data from a group of sensor readings to destination. Moreover, they proposed a compression method called "coding by ordering" to suppress some readings and encoding the values in the ordering of the remaining packets. On the other hand, LEACH (Heinzelman W., 2000) is one kind of energy saving schemes in which a small number of clusters are formed in a self-organized manner. A designated sensor node in each cluster collects and combines data from nodes in its cluster, then transmits the result to the BS. Directed Diffusion (Intanagonwiwat, et. al, 2000) is a kind of data centric routing protocols. The sink broadcasts an interest message to all the sensor nodes, and the nodes gather and transmit the sink-interested data to the sink. When the receiving data rate becomes low, the sink starts to attract other higher quality data.
Regarding the trade-offs, (Boulis, et. al, 2003) proposed an energy-accuracy tradeoffs algorithm for periodic data-aggregation which is a threshold-based scheme where the sensors compare their fused estimations to a threshold to make a decision of regarding transmission. Energylatency tradeoffs algorithm (Yu at. el., 2004) is proposed for minimizing the overall energy consumption of the networks within a specific latency constraint where data aggregation is performed only after a node successfully collects data from all its children and its own local generated data. ADA (Adaptive Data Aggregation) (Chen et. al., 2008) is an adaptive data aggregation (ADA) for clustered wireless sensor networks. In ADA, sensed data are aggregated on two levels; one is aggregated at sensor nodes controlled by the reporting frequency (temporal reliability) of nodes; another is aggregated at cluster heads controlled by the aggregation ratio (spatial reliability). The reliability of observed data that is decided by the number of arrival data at sink node is compared with the reliability of desired data, which is decided by the application. According to comparison, nine characteristic regions and nine states are defined in which the eight states must change into the desired state through the calculating and adjusting of observed reliability.
Most of the previously mentioned works focus on energy saving and aggregate as much data as possible. As a result, they prolong the transmission delay. Many works aimed to achieve energy-delay trade off, however they still have shortcomings for example (Yu at, el., 2004) has long waiting time at nodes with less event data while the constant latency makes the networks very inflexible in (Galluccio L. & Palazzo S., 2009). A desired energy-delay tradeoff is achieved in (Ye Z. et al., 2008); however the algorithm ignored the issue of data accuracy. Energy-delay-accuracy tradeoffs in (Mirian F. & Sabaei M.) and (Chen et al., 2008) adapt to a situation that could be described by the following question: 'what is the average temperature of this area at this hour?' The algorithms did not consider delay and accuracy among nodes and data, which may lead to large data deviation as well as transmission delay in some other applications.
Preview of our work
In this paper, at first, we show the analyses of transmission delay, energy consumption and data accuracy of non-aggregation, full aggregation and partial data aggregation with Markovian chain. The analytical results show that non-aggregation consumes much energy and full aggregation causes long transmission delay; but the proposed partial aggregation can trade off total delay, energy consumption, and data accuracy between non-aggregation and full aggregation. Then we intensively discuss the tradeoffs among energy consumption, transmission delay and data accuracy with a Trade Off Index (TOI). We discuss the TOI under the different conditions of accuracy dominant, energy dominant, and delay dominant. By comparing the TOI value among non-aggregation, full aggregation and partial aggregation in different data generation rates, we obtain the best TOI. The results show that with small data generation rate, non-aggregation is the best TOI; with moderate data generation rate, the partial aggregation is the best TOI while the data generation rate is large, the full aggregation is the best TOI. At last a multi view multi robot sensor network is discussed and a User Dependent Multi-view Video Transmission (UDMVT) scheme is introduced.
Preliminary concepts
In this section, we will introduce network topology, network parameters and the definitions of network parameters, which will be helpful in understanding our work clearly. Fig. 1 depicts tandem network topology of the MRSN, the most basic and simplest model, which enables us to make an analytic model. The results can be extensible to other topologies that are more complex. In such kind of network, all the robots deployed statically in a flat area and have same role. The robots are allocated omni-directional antenna for wireless communication and have the same transmission ranges. When a robot senses data, it transmits the data to the sink, if the data could not get to the sink by one hop; the robot sends the data to the sink by multi-hop way.
Definition
ni denotes the i-th node from the sink. N is a set of all nodes. ni+1 is called the adjacent upper node of , while ni-1 is the adjacent lower node of ni. A set of nodes of {nk | nk ∊| N|, k>i} denotes the up-per nodes of ni, while {nk | nk ∊| N|, k <i} denotes the lower nodes of ni. Suffixes non, ful and par attached to terms mean non-aggregation, full aggregation and partial aggregation respectively. Arrival data denotes that the data come from adjacent upper nodes. Local generated data denotes that the data are generated at local nodes. Server: in our work, we assume that each node has a server to process data aggregation and data transmission. The MAC protocol used in this research is CSMA. The propagation delay between adjacent robots is negligible.
Aggregation factor
Here a robot aggregates its own generated data and received data from adjacent upper nodes before transmission. The sink does not participate in data aggregation. When data aggregation occurs at a robot node, the aggregation factor denotes the proportion of aggregated data size and local generated data size. It means that the aggregated data size is AF times of the generated data size. AF=1 means that aggregated data have the same data size with generated data, and we assume there is one generated data at one time.
Aggregated data size Generated data size AF (1)
Transmission delay
Total delay D(N) shows a time interval between the instance when event Eij occurred at robot nn and the instance when the sink receives Dij in N hops networks. Data transmission time Շ' is defined as a time interval between the instance that data are transmitted from robot and the instance that the data are received at the adjacent lower robot. Channel waiting time ( ) c i : it is the time interval that data cannot utilize the channel. Event waiting time e ( ) i : In full aggregation, before a robot processes data aggregation, the arrival data have to wait for local generated data to be aggregated together, hence the waiting time of arrival data called event waiting time.
Energy consumption
Total energy consumption E(N) is defined to be the sum of energy consumption of an event data that is generated at node nn and finally received by sink node in N hops networks.
Data accuracy
We define the data accuracy as the proportion of collected data at sink and the amount of sensed data at all the robots.
Data aggregations
In this part, we analyze and evaluate the data aggregation simply in terms of nonaggregation, full aggregation and partial data aggregation.
Non-aggregation
The arrival data are transmitted to the adjacent lower node immediately after having been received; data neither wait for local generated data nor aggregate with any other data. The analytical model of non-aggregation is shown as follows in figure 2.
In the analytical model of node ni in fig. 2, the average arrival rate from the upper node is approximates to Poisson distribution. Generated data rate at a node is assumed to be Poisson distribution. The generated data and arrival data join the service queue and wait for transmission. There is one server for data transmission at each node. All data in the queue will be sent based on first in first out. i is the data rate upon exiting the server at node ni. According to the analytic model, we find that the arrival rate to the queue is Strictly speaking, arrival data from the upper node is not Poisson distribution. However, for the purpose of simplicity, we approximate the process as Poisson distribution. Since the arrival data rate and local generated data rate are independent Poisson distributions, the sum of the two is also a new Poisson distribution.
Service process
In our network model, each node has one server. The ACK packet transmission time is not considered. Data aggregating time is very short and negligible. Therefore the service time is one hop data transmission time. In our work, data transmission rate is vc and local generated data size is Si. Therefore the service time for each generated data is: Since vc and Si are constant in non-aggregation, the service time for each data are fixed and constant.
From the above analysis we can determine that the queuing system approximates to M/D/1 model.
According to equation (4), the average data transmission time that we obtain at a node is: According to queuing theory and equation (4), we determine the server waiting time as follows:
Channel waiting time
Node ni communicates with only one neighbor node at a time. If a neighbor node is transmitting data, node ni has to wait until its neighbor node finishes transmission, due to the over hearing caused by the omni-directional antenna. This waiting time is defined as channel waiting time and is obtained the formulation as follows:
Energy consumption
Node ni transmits data and relays arrival data from the upper nodes. Since the consumed energy is proportionate to the number of data transmissions, we can find the mean number of data L Q non(i) in the service queue at node ni according to Little's formula. The number of data in the queue waiting for data transmission is shown as follows: The λ'i is arrival data rate at node ni and ( ) s non T i is time duration from data joining the queue to data having been received by the next neighbor node at node ni, in case of nonaggregation, can be determined as follows: According to equations (8) and (9), we obtain the whole energy consumption in N hops network as follows: Here Pt and Pr denote the energy consumption for transmitting and receiving data.
Data accuracy
In non-aggregation, data are not aggregated and the packet drop occurs with the transmitting in real system. However, for simplicity, we assume there is no packet drop and retransmission, all the generated data will get to the sink, thus the data accuracy approaches to 100%.
Full aggregation
We define the full aggregation that the arrival data are sent to an adjacent lower node only after having been aggregated with local generated data at nodes. It means data transmission occurs only after a new local data generated a node. Hence, the waiting time for data aggregating at a node is decided by the data generation rate of the node. When there is local generated data, the node aggregates all the arrival data with generated data then waits for transmission at server. Data after aggregation undergo the same procedure as non-aggregation to detect the server and the channel for further transmission.
The analytical model of full aggregation is shown in figue3. Before explaining the model, we introduce queue A, queue B and "G." Queue A denotes the arrival data queue at a node that is waiting for local generated data for data aggregation. Data in Queue B are waiting for server; when the server is idle, data are transmitted to a neighbor node. The "G" is assumed as a virtual gate between queue A and queue B. Immediately after local generated data aggregate with the arrival data in queue A, the gate opens and lets the aggregated data join queue B. In full aggregation, the data join queue A with arrival rate of 1 i and wait for new generated data. When an event occurs at local node, the node aggregates the generated data and all arrival data in queue A according to the aggregation factor Af. The size of aggregated data becomes Sav and the aggregated data join queue B with the rate of i to await further transmission. In full aggregation, the difference from non-aggregation is that we have to determine how long the arrival data wait for aggregation in queue A.
Event waiting time
To determine the event waiting time, we apply the state transition rate diagram. We describe the state transition rate diagram in fig. 4. The basic idea of the analysis is that data waiting in queue A for exponential distribution have an average of 1/2λi. In the diagram, the state variable is the number of data waiting for an event. According to calculation of state probability distribution and Little's formula, we determine the event waiting time as shown below; more details please read (Li, et. al, 2010).
Total delay
From the definition of full aggregation we know that the arrival data join queue B only if there is new generated data at local node, hence the data arrival rate to queue B is equal to data generation rate at the node. The data generation rate abides by Poisson distribution; therefore the arrival data rate to queue B is Poisson distribution. Since the data arrival rate involves only one server and the data transmission time for server is fixed, according to queuing theory we model the queue by means of M/D/1 queue. Similar to non-aggregation, we determine the total delay Dful (N) of the network in full aggregation is consists of event waiting time in queue A, server waiting time in queue B, channel waiting time and data transmission time at server.
Energy consumption
The energy consumption is proportional to the number of data transmissions.
According to Little's formula and equation (13), the amount of data in queue B is as follows: Tradeoffs Among Delay, Energy and Accuracy of Data Aggregation for Multi-View Multi-Robot Sensor Networks 81 Therefore, the whole energy consumption is determined as follows:
Data accuracy
In full aggregation, the aggregation factor Af=1. Thus, we can get the data accuracy in N hops transmission as follow:
Partial data aggregation
According to previous analyses of non-aggregation and full aggregation, we find that nonaggregation sends all the generated data to sink node which results in large energy consumption. In case of full aggregation, the arrival data must wait for local generated data to aggregate, which causes the prolonged transmission delay and low data accuracy for the data that come from nodes far away from sink.
To minimize these two shortcomings, we propose a partial data aggregation. The main idea of partial aggregation is that nodes process data aggregation and transmit data only if a) if there are new local generated data at a node or b) after waiting a holding time at a node; the inverse of the holding time we call random pushing rate λ D i. The analytical model of partial aggregation is shown as follows. For the purpose of simplifying our analytical model, we assume the arrival data rate from adjacent upper node is approximated to Poisson distribution and the arrival data join event waiting queue A in fig. 5. Data generation rate λi is assumed to be Poisson distribution. Random pushing rate is λ D i and assumed to be exponential distribution. If new generated data occur at a node or if holding time is over for arrival data, all the data are aggregated into one data, and the gate G opens and lets aggregated data join queue B. λ'i is data arrival rate to queue B in which data are waiting for service (data transmission).
Event waiting time
Assume that a number of data are waiting for an event at robot i n in queue; we describe the state transition diagram as shown in Fig.6. . Similar to full aggregation, the event waiting time of partial aggregation can be determined as follows:
Arrival process to Queue B
From the analytical process we find that arrival data rate λ ' i is decided by the random pushing rate and data generation rate at a node. To determine the formulation of λ ' i, we calculate the property distribution of λi and λ D i. We define that λ D i and λi are the independent distribution X and Y. Through proofing of the property Y is bigger than X, we determine the arrival process to Queue B as follows; the proof can be found in (Li, et. al, 2010).
Total delay
Since the data generation rate is Poisson distribution and the random pushing rate abides by exponential distribution, the data arrival rate to queue B approximates to Poisson distribution. Therefore, we can confirm that the queuing system approximates to M/D/1 model. With the same way of full aggregation, the server waiting time and channel waiting time can be determined easily. Therefore, the total delay of partial aggregation is as follows:
Total energy consumption
In the N hops transmission in partial aggregation, total energy consumption ) (N E par is the sum of transmission energy consumption, reception energy consumption and overhearing energy consumption. t P and r P are energy required for transmitting or receiving a data. The period of time that aggregated data wait in a queue for transmission can be determined as follows: According to Little's formula and equation (25), we determine the amount of data in queue B at node ni as follows: Accordingly, we determine the total energy consumption for the network as follows:
Data accuracy
The total generated data Lpar (N) in N hops network is obtained as follows: The amount of data received by sink Lpar(S) is as follows: According to the definition and above equations, we determine the data accuracy as follows:
Evaluation
Here we show the analytic results of the previous sections. The parameters are as below: . In this section, we evaluate total delay, energy consumption and data accuracy when the aggregation factor is Af=1. Fig. 7 to Fig. 9 show the total delay, energy consumption of whole network, robot energy consumption and data accuracy of five hops transmission where λi=λ. Partial-T1 and Partial-T2 are two sets of random pushing rate vectors in partial aggregation. We get the vectors randomly [1,2,3,4,5] and [5,10,15,20,25].
From figure 7, we find that when event generation rate is small, full aggregation has long transmission delay in comparison to non-aggregation. The reason for concaving up of delay of the full aggregation is that, when event generation rate is small, the received data has to wait for generated data longer duration. In addition at a robot near to the sink, the total delay increases because of the large waiting time due to the congestion around the sink. As long as total delay is concerned, non-aggregation is suitable for situation of small event generation rate. From the figure, we also find that the performances of partial-T1 and partial-T2 are between non-aggregation and full is zero, it means fully aggregation. Fig. 8 shows the energy consumption of the whole network. Obviously, non-aggregation consumes much more energy than full aggregation. Thus, full aggregation is suitable for energy consumption while non-aggregation is efficiency for transmission delay. The partial-T1 and partial-T2 has energy consumption between non-aggregation and full aggregation. In addition, the smaller random pushing rate vector set partial-T1 has less energy consumption than the set of partial-T2. Fig. 9 shows the data accuracy of different data aggregation. From fig. 9, we find that the data accuracy of partial aggregation is between non-aggregation and full aggregation. The partial aggregation with the larger random pushing rate achieves higher data accuracy. From above evaluations we find that the partial aggregation with random pushing rate vectors can control the energy, delay and data accuracy between non-aggregation and full aggregation. Hence, one can achieve desired MSRN by controlling the random pushing rate.
Trade off index TOI
Previous section clearly shows partial aggregation with random pushing rate D i can control the energy consumption, transmission delay and data accuracy. In MRSN, according to applications, delay taken to collect data, energy consumed by each sensor node for communication and data accuracy of the collected data are critical concerns and are in tradeoff each other. Energy, delay and accuracy cannot reach full potential at the same time, but we can achieve the best possible tradeoff between them. To obtain the best trade-off value of practical application, we propose a Trade-Off Index (TOI). In the following subsections, we discuss energy, delay and accuracy of trade-offs in respect of TOI as criteria. Here E denotes total energy consumption, D denotes total delay, Ac denotes data accuracy. α, β, γ indicate the significance of accuracy, energy and delay and larger α, β, γ indicate more significance of energy, delay and accuracy. The smallest TOI value denotes the best data aggregation.
Applications of WSNs with different criteria
In MRSN, according to the different applications and objectives, we need different significances for transmission delay, energy consumption and data accuracy. Some application areas need to save energy because it is impossible to replace or recharge the battery. In some applications not only the energy is significant, but also the data freshness, such as in military monitoring and disaster monitoring; however data accuracy is most important in medical utilization and in quality control. According to real application, we formulate some of the applications according to the significances of energy, data accuracy and transmission delay in table 1. Here the "L" denotes large significance and "S" denotes small significance; the application is formed from left to right along a scale from smaller event generation rate to bigger generation rate. According to the table 1 we can decide the significant parameters of the application in order to perform our proposed TOI; we can achieve the best data aggregation corresponding to the applications.
Tradeoffs of different applications
In this section, we will investigate the tradeoffs among the applications of which data generation rate is in the range of 0.0001 to 100 events in per second, and here for corresponding to the event generation rate, we define the random pushing rate vectors as the same with event generation rate. We define the random pushing rate vectors as below: As data generation rate of λ=0.
Accuracy significant networks
In accuracy significant utilization, we define α, β and γ as 2, 1, 1; however if the data accuracy is much more important than other two, we also can define α=3 or much larger. In this research, for simplify, we discuss none other but the case that significant parameter has the significance vector of 2 and the ordinary parameters are 1. According to TOI we can get the best result in fig. 10. Figure 10. Tradeoffs of accuracy significant From Fig. 10 we find that when the event generation rate is between 0.0001 and 4.0, nonaggregation is the best comparing with full and partial aggregation. When the data generation rate is between 4 and 30, the partial aggregation is the best, and the full aggregation is the best when data generation rate is larger than 30.
Energy significant networks
Here we discuss the case when energy is significant. The parameters are defined to be as below: α=1, β=2 and γ=1. According to proposed TOI we can get the best TOI values when data generation rate is from 0.0001 to 100. Fig. 11 shows the result. We find from the figure that in the region of data generation rate between 0.0001 and 4.0, the non-aggregation is the best TOI. When the data generation rate is about 4-10, the figure shows that the partial aggregation is the best; the full aggregation is the best when event generation rate is larger than 10. Figure 11. Tradeoffs of energy significant
Delay significant networks
In delay significant networks, α, β and γ is defined as 1, 1, 2, as shown in Fig. 12. From the figure we find that when event generation rate is between 0.0001 and 4.0, the nonaggregation is investigated to be the best TOI; and when the event generation rate is from 4 to 30, the partial aggregation is the best; the full aggregation is the best TOI when event generation rate is larger than 30.
Discussion
Let us summarize the data aggregation with best TOI according to different event generation rate in table 2. From the table we find that when event generation rate is small (0.0001-4.0, 6.0) the non-aggregation is the best TOI. Moreover, from the figures we find that in accuracy significant networks, the event data range of best TOI at non-aggregation is longer (0.0001-6.0) than any others. This is because, in non-aggregation, the data accuracy is 100%; and the other two have low data accuracy; when event generation rate is larger than 6, non-aggregation has very long delay because of the congestions around the sink node.
When event generation rate is moderate (4, 6-30), the partial aggregation is the best TOI except the case energy significance networks. In energy significance networks, the number of transmission in partial aggregation is much more than full aggregation, so the energy has great impact on partial aggregation with the significant of β=2. When data generation rate is large, due to the large number of transmission, the energy consumption is very high in nonaggregation and partial aggregation; therefore, the best TOI is the full aggregation in the networks with large event generation rate.
Multi-view multi-robot sensor networks
As we mentioned in the introduction section, applications of the MRSN will be more advanced if multi-cameras are equipped on the robot nodes. The reason is quite similar to human with more eyes. From the point of application, multi-view MRSN can be applied in security system that will not miss a corner. In addition, in the medical application, the multiview MRSN can accomplish some complex and long time operations. Meanwhile it can achieve more accurate and small cut operation; besides, multi-view MRSN has quick reaction for the vary vital signs and other monitored parameters of the patient.
Introduction of multi-view video and open problem
The developments of camera and display technologies make recording a single scene with multiple video sequences possible. These multi-view video sequences are taken by closely spaced cameras from different angles. Each video sequence in the multi-view video presents a unique viewpoint of this scene. Therefore, user can switch the viewpoint by playing different video sequences. When a robot is equipped with multi-cameras, it will bring the user who controls the robot a broad perspective. The operator also can switch his viewpoints by playing different video sequences. However, since the multi-view video consists of the video sequences captured by multiple cameras, the traffic of multi-view video is several times larger than conventional multimedia, which brings the dramatic increase in the bandwidth requirement. However, as multi-view video is taken from the same scene, a large amount of inter-view correlation is contained in the video. Therefore, compression transmission technologies are especially important for multi-view video streaming.
The state of the art in multi-view representations includes Multi-View Video Plus Depth (Merkle et, al., 2007), Ray-Space and Multi-view Video Coding (MVC) (Vetro, et, al., 2008), . However, the research on Multi-View Video Plus Depth sequences (Merkle et, al., 2007) suggests that with the addition of depth maps and other auxiliary information, the bandwidth requirements could increase. MVC is issued as an amendment to H.264/MPEG-4 AVC (Vetro, et, al., 2008), . It was reported that MVC makes more significant compression gains than simulcast coding in which each view is compressed independently. However, even with the MVC, transmission bitrates for multi-view video are still high: about 5 Mbps for 704 × 480, 30fps, and 8 camera sequences with MVC encoding (Kurutepe, et, al. 2007).
Switching models
In order to reduce traffic for multi-view video transmission, we have analyzed which frames should be displayed when the viewpoint is switched. Our work mainly focuses on the successive motion model . In the successive motion model as shown by Fig. 13, user is only able to switch to the neighboring views. In other words, if the multiview video contains the views (1, 2… M), user is just able to switch from any view j to the view j', where max (1, j-1) ≤ j'≤ min(j+1, M). This kind of switching model is used in the applications such as free viewpoint TV and Remote Surgery System in which user's head is tracked to decide which views should be displayed.
User dependent multi-view video transmission (UDMVT)
In (Tanimoto, et. al., 2011), they developed two types of user interface for the Free Viewpoint TV. One showed one view according to the viewpoints given by user. With this type of user interface, the viewpoint of user can be switched by an eye/head-tracking system, moving the mouse of a PC or sliding the finger on the touch panel of a mobile player. In a real-time interactive multi-view video system (Lou, et, al., 2005), users can switch viewpoints by dragging the scroll bar to a different position. In the user interfaces of (Tanimoto, et. al., 2011) and (Lou, et, al., 2005), the changing of user's position, moving of mouse, sliding of finger and dragging of scroll bar are all successive motions. Since the switching models of these user interfaces are all successive motion models, it will take some time to switch from the current view to the neighboring view. For instance, in the head-tracking system, user needs to take some time to move from his current position to the next position for the new viewpoint. We call the speed with which user switch from one view to next view "switching speed." With different user and user interfaces, the switching speed is different. Even the same user may switch to a different switching speed each time.
In the successive motion model, which frames should be displayed when user starts to switch to the next view are decided by both the frame rate f (frame/s) of the multi-view video and the switching speed s (view/s) of user. Let k be the floor of the frame rate divided by switching speed: Fig. 14 presents the display of frames when k is 3, 2 and 1. triangle are called redundant frames (RFs). It is impossible to display RFs no matter how the user switches the viewpoint start from the current position. UDMVT reduces the transmission bitrate for multi-view video transmission by transmitting only the PFs without RFs. From these expressions, it could be found that with the increase in the length L, the ratio of the PFs to RFs increases, which means that more frames should be encoded and transmitted. In other words, the triangle will be enlarged and finally all the frames at the same time instant are involved into the triangle, which is also shown in Fig. 15.
In order to overcome this problem, the N(p, f, s) should be fed back periodically, which is able to divide a large triangle into many smaller triangles as shown in Fig. 16. In the UDMVT, the N(p, f, s) is fed back periodically at the end of the triangle. The fed back N(p, f, s) from the end of the previous triangle is used to predict the next triangle. Therefore, only potential frames are transmitted each time and the transmission bitrate is reduced. N(f, p, s) should be detected at client and fed back periodically. At the server, N(p, f, s) is used to divide the frames into PFs and RFs. The transmission bitrate can be reduce by only transmitting the PFs and ignore the RFs. Although the transmission of RFs is unnecessary, encoding and transmitting the RFs can work as a kind of insurance against some special situations, such as the switching detection error. (a) k = 1 (b) k = 2 6. Conclusions and future work
Conclusion
In this paper, at first, we analyzed the conventional non-aggregation, full aggregation, and our proposed partial aggregation with Markovian chain. The analytical result showed that, conventional method suffers large energy consumption with the highest accuracy, while full aggregation suffers long transmission delay, with the least accuracy. However, our proposed partial aggregation has the energy, delay and data accuracy between nonaggregation and full aggregation. When the random pushing rate becomes larger, the partial aggregation tends to non-aggregation and it tends to full aggregation with large random pushing rate. Hence, we find that the partial aggregation can trade off energy, delay and accuracy according to different applications. Secondly, we discussed the tradeoffs among data accuracy, transmission delay and energy consumption with different significances according to different applications by proposing tradeoff index (TOI). From the results, we find that non-aggregation has the best TOI for low event generation rate, that the partial aggregation does for moderate event generation rate, and that the full aggregation does for large event generation rate. At last, we discussed multiview multi-robot sensor network from the viewpoint of potential applications, existing schemes and our proposed UDMVT.
Future work
For the future work, at first, we will discuss the random pushing rate to adapt the various changes of data generation rate and information content. For example, in an MRSN, when it is of the state of affairs, nodes generate much more event data than in normal case, that means data generation rate becomes larger. In this case we should decrease the random pushing rate to control the amount of data transmission. On the other hand, from the view point of information entropy, if the self information of generated data is high, it means the generated data are rare generating data. However, when a node applies the normal data aggregation and aggregates the data with normal data, the aggregated data cannot reflect the real situation which may lead bad result. In this case, we can increase the random pushing rate to send high self information data immediately without data aggregation. When the self information of generated data decreases, we decrease random pushing rate to control the quantity of data transmission. Secondly, in wireless sensor network, data are transmitted to sink node by multi-hopping way, which causes the uneven energy consumption on nodes at different locations. Hence, to keep all nodes in the network having the same energy consumption is our another future work.
Author details
Wuyungerile Li, Ziyuan Pan and Takashi Watanabe Shizuoka University, Japan | 9,529.8 | 2012-09-06T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Models for dominating forest cover type prediction
The question of the most suitable forest tree species for defined area and landscape has been investigated in the paper. A set of classifiers is constructed in order to build relations between type of soil and other features of forest area and preferable species of trees. The decision tree classifiers, ensemble methods implementing bagging and boosting over such trees are used. The machine learning methods are implemented to obtain the best suited tree species to cover given forest area. This classification task is one of very important problems of forest regeneration process. Efforts of ecologists can have better results if there are expert systems allowing to understand the best forest cover type for areas of forest fires or deforestation that takes place because of human factor. Results and conclusions of this paper can be used in processing of other forest recover tasks. The same methods can be implemented in order to get the preferable tree species for different areas if there’s enough data to solve these tasks with machine learning technique.
Introduction
Today deforestation is very important problem for the whole world. It's caused with human activities in some areas, somewhere seasonable fires are caused with climate and local features. There was a lot of forest fires of giant magnitude recently: in the USA, Australia, Brazil, Siberian regions of Russia. Forest regeneration is a very important ecological problem.
Nowadays data analysis and machine learning are implemented in a lot of different domains of knowledge [1,2]. In this research the most suitable tree species are defined with machine learning methods. This solution can accelerate process of forest regeneration. Conclusions suite the area where the data was collected in the best way [3]. But the same technique can be used to handle data of different forest areas. Of course, here data collection and dataset creation are very important problems that must be solved in different regions by ecologists [4]. Their efforts help to involve data scientists all over the world to ecological problems solution [5]. Still problems of forest regeneration after fires [6 -13], agricultural deforestation and regeneration after logging [14 -16] are usually researched with traditional methods. Now data science and time series analysis methods [17,18] can be implemented in this domain of knowledge to make predictions of fires and to construct classification and clustering [19] of forest types for regeneration.
The dataset structure and classification quality metrics
In the original data competition [3] the main task was to predict the dominant kind of tree cover. The data analyzed in the paper are collected in the Roosevelt National Forest (Colorado, USA). Forest area was divided into cells. Its width and height are 30 m. Each row contains data about such cell. There are (4 values) and type of dominating tree species (type of cover, 7 values) are handled with one-hot encoding. Cover_Type is the main parameter predicted in the data competition [3]. There are 7 types of dominating tree species introduced in the dataset: spruce (fir), lodgepole pines, ponderosa pines, willows (cottonwood), aspen, douglas-fir and krummholz. There are 581012 records in the dataset.
At the largest portion of area (85% of area observed in the dataset) Lodgepole Pine and Ponderosa Pine dominate. It means that the classes in classification problem are unbalanced. One can't use ordinary accuracy metrics to test quality of classifiers. In this case special metrics is used. Measures of classifiers' quality usually are precision (2), recall (3) and , = 1 value (4) that can be considered as their combination [20]: , Correlation coefficients between all pairs of parameters have been considered. Hillshade indices at 9 a.m. and 3 p.m. have got negative coefficient and its value is 78%. It can be explained with daily move of the Sun in the sky. Some area gets a lot of sun emission in the morning. But in the evening illumination is lower at the same place because of the landscape specifics. Correlation between the hillshade at noon and at 3 pm indices can be treated in the same way.
Parameters describing elevation above the sea level and type of soil correlate. Type of soil depends on local climate and height above sea level is one of important factors influencing climate.
Also aspect value correlates with hillshade index measured at 3 pm (65%). It can be explained as description of cells getting oriented approximately to mean trajectory of the Sun at this time.
In pairs of parameters with high correlation one of them is removed.
Horizontal h and vertical v parts of distance to nearest surface water source are combined into new parameter that can be treated as Euclidean distance to that source: Parameters don't correlate with type of forest cover. The dataset has got high quality. Linear models and classification models can be used to describe type of forest cover [20].
Experiments
The classification task described above has been solved with a few algorithms: "k nearest neighbours" classifier, decision tree classifier and ensemble methods (extra trees classifier, random forest classifier and gradient boosting classifier).
Levels of F1 measures of constructed classifiers are shown in the table 1. Appropriate classes of classifiers in scikit-learn library are presented in the first column. Macroaveraged values of F1 measure for each classifier are shown in the second column. Ensemble methods unite responses of a lot of "simple" classifiers. Thus, it's difficult to explain their decisions. At the same time decision tree classifiers operate with just one tree and their behaviour can be explained [20]. Here main parameters of classification process are elevation above the sea level, type of soil, distances to the closest fire points and roads.
The classification tree has got a lot of nodes which contain parts of the investigated examples. There's a lot of set of conditions which define each class. So, only some simple cases are shown. All classes, except Lodgepole Pine and Ponderosa Pine (dominating at 85% of area), are combined into the third class. Bounds of classes in some cases are shown in the table 2. Here dist_fire denotes distance to the closest firepoint, dist_roads means distance to the closest road, wilderness is a type of wilderness area (4 binary values), hillshade3pm is hillshade index at 3 p.m. and elevation shows height of cell above sea level.
Tree ensemble classifiers construct "strong" classifier with a set of "weak" ones which are decision tree classifiers. Work of a lot of classifiers can define the most appropriate subset of dataset, appropriate diapasons of parameters. The ExtraTreesClassifier is an enhanced version of the RandomForestClassifier algorithm and here its results are better. They are based on the bagging idea The gradient boosting is supposed to be one of the best ensemble methods. It's based on the boosting technique [20]. But the ExtraTreesClassifier shows the best result in this task.
Principal component analysis [20] has been applied to the dataset. Two components are enough to describe 97% of variance. Subset of 50000 records has been created to make plots of various types of trees in the principal components basis containing two components PC1 and PC2. As it was mentioned above Lodgepole Pine and Ponderosa Pine dominate at the largest portion of forest area (85%). So, the first plot contains information only about these types of trees. The other ones are shown at the second plot. (4)). Implementation of scaling by means of standard deviation and mean value (according expression (5)) delivers results that look the same.
Conclusion
The Forest cover type dataset has been investigated in this paper. It includes information about tree species of the Roosevelt National Forest (USA). The data competition [3] was aimed to construct cover type classifiers. The most appropriate tree species for given type of landscape need to be found.
Forest recover task is very important ecological work. Forests area decreases steadily because of fires and human activity. There's need in strong efforts to recover forests all over the world at high speed.
A set of learning models have been used to classify the dataset [3]. As it's shown at the figures 1 and 2 types of trees are mixed in the dataset. So, linear regression or linear classification models haven't got appropriate results.
Here the decision trees classifiers, bagging and boosting methods using trees are implemented. Their F1 measures are greater or equal to 89%.
Way of classes distinguishing with decision trees can be explained. Main parameters of classification process are elevation above the sea level, type of soil, distances to the closest fire points and roads. Some bounds of classes obtained with the decision tree are shown in the table 2.
Solutions of such classification tasks can improve efforts of ecologists aimed to recover forests. | 2,024.4 | 2021-03-01T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
The role of antioxidant on propellant binder reactivity during thermal aging
Thermal aging of hydroxy-terminated polybutadiene (HTPB) stabilized with 2,6-di(tert-butyl)hydroxytoluene (BHT) was carried out at 60oC from 1 to 11 weeks. Samples of 200 mL were stored in sealed 500-mL Erlenmeyer flasks under atmospheric pressure or vacuum and periodically withdrawn for physical and chemical analysis, infrared spectroscopy characterization and measurement of HTPB/IPDI (isophorone diisocyanate) reactivity, expressed as pot life. Mechanical properties of the cured polyurethane, prepared from aged HTPB, were assayed by uniaxial tension tests. Despite the unchanged chemical structure, an increase in HTPB/IPDI binder reactivity was observed, being correlated with BHT depletion measured as color change (yellowing). Aging of HTPB showed no interference on mechanical properties of the cured polyurethane.
INTRODUCTION
Although there has been an increasing research effort into development of energetic polymer binders for solid rocket propulsion, hydroxy-terminated polybutadiene (HTPB) is still widely used in the formulation of composite propellant.This 1,3-butadiene homopolymer contains terminal and reactive hydroxyl groups, introduced during free radical polymerization by peroxide or azo compounds.During solid propellant processing, these hydroxyl groups react with a diisocyanate resulting in polyurethane, which acts as a binder for the solid particles of the propellant composition.
Due to the unsaturated character of the repeated unit, polybutadiene is known to be sensitive to oxidation, thus being usually supplied with the addition of stabilizers (Ninan et al., 1996), namely antioxidants, especially hindered phenol compounds.
The mechanisms and kinetics of HTPB oxidation have been a matter of great concern since the 1960's.More recently, Coquillat et al. (2007a, b, c) have defended the occurrence of radical addition to double bonds and allylic methylene consumption, meanwhile Guyader et al. (2006) have emphasized the mechanism of epoxide formation during HTPB aging.In spite of this, both works agree that oxidation of HTPB is highly dependent on sample thickness and oxygen partial pressure, a condition hardly mentioned in previous studies (Hinney and Murphy, 1989;Pecsok et al., 1976).
In one of these studies (Hinney and Murphy, 1989), the effect of HTPB aging over its reactivity with isocyanates was investigated by measuring the decrease in pot life, defined as the time necessary to reach a pre-established viscosity value.The authors attributed the increase in reactivity to the higher functionality derived from HTPB oxidation through a mechanism of hydroperoxide formation.
In our study, an assessment of the influence of usual storage conditions over aging of HTPB was carried out by submitting large samples (200 mL) of this resin, stabilized with the primary antioxidant BHT, to thermal aging under stagnated atmospheric pressure or under vacuum.Physical and chemical analysis, infrared spectroscopy and pot life measurement showed that, being diffusion-dependent, the observed increase of HTPB reactivity was not due to change in its chemical structure.Instead, an overview of the concerned literature indicated that it was related to BHT depletion and conversion into quinone derivatives, which may eventually react with isocyanates to graphitize into the polyurethane.
Aging conditions
Aging experiments were conducted on 200-mL samples of uncured liquid HTPB placed in sealed 500-mL Erlenmeyer flasks.Flasks were placed into forced circulating air oven at 60 ± 1 o C and protected from day light exposition.Headspace of the flasks was kept either at atmospheric pressure or under 99% vacuum.Duplicated flasks were withdrawn after 1, 2, 4, 6, 9 and 11 weeks and placed for characterization at the same day of collection (pot life and physical and chemical analysis) or within a maximum of two days (mechanical properties).Samples were maintained in desiccators at room temperature until analyses were performed.
Characterization
Determination of pot life was made by stoichiometric reaction of HTPB with IPDI (isophorone diisocyanate, CA index name as 5-isocyanato-1-(isocyanatomethyl)-1,3,3-trimethyl-cyclohexane) in the presence of 0.012% w/w of the catalyst ferric acetylacetonate (tris(2,4pentanedionato-iron).The catalyst was blended with HTPB in a mechanical stirrer.The mixture was heated for bubble removal.After addition of IPDI, the mixture was manually stirred and immediately placed for viscosity measurement at 50 o C. The time required to reach 20 Pa.s was considered as the pot life.At each sampling time, an unaged sample of HTPB was also analyzed for pot life as a control.
Yellowing of HTPB was measured by using a spectrometer in the visible region (PerkinElmer Lambda 3B UV/VIS).In preliminary tests, a range of wave length varying from 220 to 320 nm was evaluated.Maximum absorption was obtained at 295 nm.Samples were diluted in toluene (1:1) and analyzed in duplicate with toluene as the blank.
Determination of physical and chemical properties included: hydroxyl number (Takahashi et al., 1996); viscosity at 25 o C measured in a small sample device (Brookfield LVDV-II+ with software Wingather 2.2) and humidity (Karl Fischer Metrohm 633).
Fourier transform infrared spectra (FT-IR) were collected by using a PerkinElmer Spectrum One spectrometer.HTPB was analyzed as thin films, while BHT was analyzed as potassium bromide pressed-pellet.Analyses were carried out under transmission mode at the following conditions: spectral range 4000 -400 cm -1 ; 40 scans and 4 cm -1 resolution.
Pot life and color change were evaluated by using the property retention index (PRI) as defined by ASTM D5870-95 ( 2003) for destructive tests.Determination of the PRI for each replicate exposed to aging, z i , is defined by Eq. (1).
Where, P i,x : property of the i replicate at exposure time, x, and p 0 : initial value of the property.
Determination of the mean PRI, , is defined by Eq. (2).
Where, z i : PRI for each replicate exposed to aging; n: number of replicates.
Mechanical properties
Samples of HTPB aged for 1, 5, 8 and 10 weeks were also used to prepare polyurethane samples with 2 mm thickness and compared with polyurethane prepared from unaged HTPB.Dumbbell specimens were assayed for uniaxial tension tests, according to ASTM D412-06a in a Zwick 1474 testing machine at 500 mm/min and 25 o C. Hardness was measured in circular specimens with 70 mm thickness by following ASTM D2240-05.
RESULTS AND DISCUSSION
Physical and chemical properties of HTPB for the unaged condition are presented in Table 1, which are in accordance with recommended specifications (MIL-H-85497-81) for application of HTPB in solid propellants.Some of the properties investigated have no specified limits.
In order to improve the comparison between both treatments (atmospheric pressure and vacuum), property retention index was applied to the processing property of pot life and to HTPB color changing (yellowing), as showed in Fig. 1.A remarkable decrease of pot life retention index was followed by a sigmoidal increase of color retention index during the course of the aging experiment (Fig. 1).A linear correlation between them is presented in Fig. 2, showing that the decrease in pot life is related to yellowing increase by a correlation factor of 0.90 and 0.94 for atmospheric pressure and vacuum, respectively.The values of hydroxyl number, viscosity and humidity are presented in Fig. 3. Dashed lines represent upper (UL) and lower (LL) limits (Table 1).Although some fluctuations can be observed, the investigated properties were kept under the specified limits and showed no significant variations throughout the aging assay.
Infrared spectra were obtained on samples of HTPB exposed to 0, 1, 2, 4, 6, 9, and 11 weeks under thermal aging at both pressure conditions.For practical reasons, only the first and the last exposure time spectra are presented in Fig. 4. Aged and unaged HTPB samples presented similar spectra for both atmosphere conditions investigated.Increase in the absorption at the region of 3394 cm -1 (nOH), which could account for increase in functionality, was not observed.Additionally, the absorption at the region of 1639 cm -1 (nC=C olefinic) remained unchanged, indicating no radical addition to double bonds up to the detection level of this technique.The absence of absorptions at the regions around 1740-1800 cm -1 (nC=O) and 830-1000 cm -1 (nOO) in the aged samples also indicates no built-up of carbonyl or hydroperoxide groups.A BHT spectrum was also included in Fig. 4. Due to its low concentration (1%) on HTPB, peaks of BHT or its quinone derivatives were not apparent in HTPB spectra.
In order to verify if the change in reactivity of the binder system could interfere in the mechanical properties of the cured polyurethane, properties of tensile strength, Young modulus and hardness (Shore A) of the HTPB/ IPDI polyurethanes were assayed.They showed not to be changed with the aging of HTPB (Table 2).
Although the readily oxidative nature of HTPB is well established, Coquillat et al. (2007 a, b, c) and Guyader et al. (2006) have demonstrated its dependency on oxygen diffusion.Considering the aging testing conditions of this study, which used large samples (200 mL) exposed to mild temperature, without shaking the flasks, oxygen diffusion was quite difficult even at atmospheric pressure.In these conditions, oxidation of HTPB backbone may have hardly occurred as pointed by the results of physical and chemical properties, especially hydroxyl number (Fig. 1), and by infrared spectra (Fig. 4).It is important to state that the dark condition of aging and the analysis of aged samples had minimized any effect of UV light in the results presented (Fig. 1 and 3).
The unchanged values of humidity (Fig. 3) indicated that the reactivity increase cannot be attributed to the reaction between IPDI and any absorbed atmospheric humidity.
In addition, the occurrence of HTPB homopolymerization or oxidative cross-linking during aging could not be verified, respectively, by the results of viscosity of uncured HTPB (Fig. 3) or by any change in the mechanical properties of the cured polyurethane (Table 2).
On the other hand, the correlation between pot life decrease and color change (Fig. 2) indicated that the well-known yellowing of hindered phenol antioxidants (Vulic et al., 2002), even in the absence of oxygen (Bangee et al., 1995), may be related to the apparent change in HTPB reactivity, which was observed independently of the oxygen partial pressure.In fact, some studies (Celina et al., 2006;Désilets and Côté, 2006;Shanina, Zaikov and Mukmeneva, 1996) have demonstrated that quinone derivatives from hindered phenol antioxidants react quite readily with isocyanates and graphitize into HTPB backbone.
Based on this literature evidence, our results indicate that the apparent increase of the HTPB/IPDI binder reactivity is due to a side-reaction between BHT quinone derivatives and IPDI.
CONCLUSION
The observed increase of reactivity after thermal aging of HTPB was correlated to antioxidant depletion and quinone by-products formation, which was assigned the role to react with IPDI, thus resulting in the observed pot life decrease.No experimental evidence was obtained in order to correlate the change in reactivity with change in chemical structure of HTPB.
Figure 1 :
Figure 1: Pot life and color change retention indexes as a function of aging time.
Figure 2 :
Figure 2: Correlation between pot life and color change retention indexes.
Figure 3 :
Figure 3: Physical and chemical properties of HTPB as a function of aging time.
Table 1 :
Physical and chemical properties of unaged HTPB and recommended specifications | 2,514.4 | 2010-08-01T00:00:00.000 | [
"Chemistry",
"Engineering"
] |
Sentinel-1 Data for Underground Processes Recognition in Bucharest City, Romania
: Urban areas are strongly influenced by the different processes affecting the underground and implicitly the terrestrial surface. Land subsidence can be one of the effects of the urban processes. The identification of the vulnerable areas of the city, prone to subsidence, can be of great help for a sustainable urban planning. Using Sentinel-1 data, by the PSI (persistent scatterer interferometry) technique, a vertical displacements map of Bucharest city has been prepared. It covers the time interval 2014–2018. Based on this map, several subsidence areas have been identified. One of them, holding a thick layer of debris from urban constructions, was analyzed in detail, on the basis of an accurate local geological model and by correlating the local displacements with the urban groundwater system hydraulic heads. The properties of the anthropogenic layer have been characterized by complementary geotechnical and hydrogeological studies. A dynamic instability pattern, highlighted by PSI results, has been put into evidence when related to this type of anthropogenic layer. This thick anthropogenic layer and its connections to the urban aquifer system have to be further analyzed, when the procedures of urban planning and design invoke constructive operations modifying the aquifer dynamics. Funding acquisition, C.R.G.; Investigation, A.R., T.G. and C.R.G.; Methodology, A.R.; Project administration, C.R.G.; Supervision, C.R.G.; Writing–original draft, A.R.; Writing–review & editing, A.R., G.V., T.G. and C.R.G. All authors
Introduction
In the context of continuous urban development and population growth, urban areas are strongly influenced by the different processes affecting the underground and implicitly the terrestrial surface [1][2][3][4][5][6][7]. One of these is the groundwater flow when considering its interaction with the urban environment [5, 8,9]. In many cases, mostly because of the groundwater pumping, the effect triggering land subsidence can be observed at the ground surface [1,[10][11][12][13][14][15][16]. On an extensive scale, the vertical displacements of the ground surface integrate different hydrological, hydrogeological, geological, geotechnical phenomena, as well as other anthropogenic interventions [9,[17][18][19]. To study these, in specific urban areas located on alluvial deposits, features of different domains might be considered, among which (a) the surface water resources, their urban adaptation, and precipitations influence; (b) the geology including an accurate lithological and stratigraphic analysis as well as the related geotechnical parameters (e.g., thickness, compressibility) [19]; (c) types of aquifers and their connection to the surface waters as well as the correspondent volumes of groundwater pumping; (d) the behavior of constructions, foundations, and other infrastructure elements [3,[20][21][22][23]; (e) the tectonic activity [24,25] and others.
Land subsidence can, in many cases, be underrated. If it is associated with other severe natural or anthropogenic phenomena, it can lead to serious infrastructure damages [11], threat to human life or loss of historical or strategic infrastructures [23,24]. Consequently, the identification of the vulnerable areas in the city, prone to subsidence, can be of great support for sustainable urban planning [6]. Monitoring of vulnerable large urban areas for ground displacements was not possible until recent decades, as the available methods consisted of punctual in-situ measurements, implying a heavy demand of equipment, time, and human resources [26][27][28]. Space-borne remote sensing techniques, and more specifically the Synthetic Aperture Radar Interferometry (InSAR) techniques, made possible regional land monitoring, allowing the identification of new unknown areas susceptible of land subsidence [27]. The best monitoring solutions are considering the combination between monitoring techniques and complementary data characterizing the studied area [28]. Thus, different monitoring combinations were set up for characterizing natural and anthropic land subsidence worldwide [3,18,19,25].
Bucharest, the capital of Romania, is a dynamic city with a growing population of over 2.1 million in 2019 and a surface area of about 240 km 2 [29]. Both the population and surface coverage are expanding. There is a great deal of infrastructure under development. This is generating changes in the subsurface and consequently affecting the surface [8]. Bucharest is situated in the south-eastern part of Romania, in the central part of the Moesic platform [30]. It is crossed by two modified rivers: Dambovita River which was channelized in 1883 and further in the late 1970s, and Colentina River which was remodeled in a series of lakes connected with the shallow aquifer [31], as illustrated by Figure 1.
From the hydrogeological point of view, the city of Bucharest lies on a Quaternary sedimentary aquifer system composed by three units [8,30]. The shallow and unconfined aquifer have a direct interaction with the urban infrastructure elements. It is mainly made of gravel and sands. This unit is covered by another aquitard unit known as 'superficial deposits'. Between the shallow and the middle aquifer unit is located a clayey aquitard called 'intermediary deposits'. The middle confined aquifer unit, found at depths between 20-50 m, is mainly made of sandy materials. The two aquifer layers could be considered as belonging to the same urban aquifer system as they sporadically communicate hydraulically through geological openings or improper executed boreholes or wells. Moreover, deep infrastructure elements could activate new hydraulic contacts. The deepest Quaternary aquifer strata is separated from the urban aquifer system by a sequence of marl and clay layers, with slim sandy intercalations, having a thickness from 110 m in the north to about 40 m to the south [8]. Groundwater abstraction from the shallow aquifer was ended in 2000. Lately, the uncontrolled number of permanent or temporary dewatering systems increased tremendously. Infrastructure changes at the surface and in the subsurface of Bucharest, due to city growth, has changed and disturbed the groundwater recharge and flow [8]. These continous changes have triggered subsidence in distinct parts of the city.
Previous studies, which used multi-temporal radar interferometry techniques (MTI) [32][33][34] revealed several areas proving ground instability. In Bucharest city, identified mechanisms of ground surface displacement regroups [34,35] 'natural long-term trends' overlaid by 'short-term patterns' trigerred especially by recent city dynamics. Long-term ground deformation patterns of Bucharest have been accurately studied by Armas et al. (2017) [35] by using multitemporal InSAR and multivariate dynamic analyses, developing a comparative analysis with the evolution trends of their neighboring areas for old large industrial parks. Most of short-term patterns include geotechnical and hydrogeological aspects and are due to inadequately studied athropogenic ground disturbances of soil matrix or urban hydrogeological systems. Consequently, they trigger significant damage of existing underground and above-ground structures [36], local floods, damage to sites of historical interest, drying of supply wells or penetration of pollutants into deep aquifers. A comprehensive analysis on the behavior of the anthropogenic thick layer of debris from urban constructions, situated in Barbu Vacarescu area (Figure 1), represents the focus of this study. This has been built starting from the already existing Bucharest city scale hydrogeological model, based on a 3D geological model spatially intersected with the city main infrastructure elements, as well as the local hydrogeological model covering a part of the area analyzed in this study [30]. In the scope of the current study, an accurate geological model has been developed, its necessity being outlined by the results of the PSI ground surface displacements distribution and patterns as well as the accurate image on the area groundwater dynamics revealed by the two existing models. This The construction of subway stations, deep basements, underground parking lots, sewerage or water supply infrastructure works, requires the execution of groundwater depletion or drainage works [6,20]. These can most often be temporary (during the execution of the work) or permanent (for example to prevent seepage as well as the occurrence of underpressures) [20]. Problems that may occur when carrying out depletion or drainage work include risks related to soil mechanical effects (e.g., hydrodynamic entrainment), hydraulic rupture of the excavation base and differentiated settlements.
Dewatering works can cause suffosion and internal erosion as well as material losses from the slopes due to groundwater pumping in open excavations or by entraining fine particles from the ground into the wells. Consequently, these phenomena can lead to subsidence of the surrounding area and of the ground located under the adjacent buildings.
In this study, the Persistent Scatterer Interferometry (PSI) was used to analyze the trends of the ground instability in Bucharest city. For the time period 2014-2018, using the new satellite mission Sentinel-1. One selected area, named Barbu Vacarescu (Figure 1), outlined on the basis of this dataset, has been analysed in detail. The area, delineated by Lacul Tei Boulevard, Barbu Vacarescu Street, and Opanez Street as is shown in Figure 1.
A comprehensive analysis on the behavior of the anthropogenic thick layer of debris from urban constructions, situated in Barbu Vacarescu area (Figure 1), represents the focus of this study. This has been built starting from the already existing Bucharest city scale hydrogeological model, based on a Remote Sens. 2020, 12, 4054 4 of 24 3D geological model spatially intersected with the city main infrastructure elements, as well as the local hydrogeological model covering a part of the area analyzed in this study [30]. In the scope of the current study, an accurate geological model has been developed, its necessity being outlined by the results of the PSI ground surface displacements distribution and patterns as well as the accurate image on the area groundwater dynamics revealed by the two existing models. This local geological model, including extensive complementary data on the anthropogenic layer and new acquired borehole geological and lithological information have the needed accuracy to correlate the area PSI displacements with the urban groundwater system information.
Since 2006, a pronounced decrease of the water level of the Circului Lake, located in the Barbu Vacarescu area has been observed. As this lake is naturally recharged by the upper shallow aquifer of the Bucharest city, a hydrogeological analysis of the local aquifer system behavior has been performed [37]. This area has a water supply system consisting of low-pressure pipes with a length of about 180 km. Similar to the most of the world's cities, the groundwater recharge in Bucharest is mostly made from the water supply network losses and, in a lower percentage, from the interaction with the sewer system. In this modeling study, several urban groundwater modeling scenarios were developed to simulate Circului Lake disturbance [37,38].
The study [38], took into account the drastic reduction ( Figure 2) of the water supply losses due to the improvement of the water distribution network in the study area. During 2014-2019, a decrease of the annual precipitations has been registered [39]. This also contributed to the decrease of the hydraulic head in both shallow (unconfined) and middle (confined) layers. The same study [38], put into evidence the area permanent dewatering systems installed to decrease the hydraulic head in both aquifer strata respectively the unconfined aquifer layer and the confined one. The dewatering systems are reducing the groundwater seepage into the deep foundations of the buildings.
Remote Sens. 2020, 12, x FOR PEER REVIEW 4 of 24 local geological model, including extensive complementary data on the anthropogenic layer and new acquired borehole geological and lithological information have the needed accuracy to correlate the area PSI displacements with the urban groundwater system information. Since 2006, a pronounced decrease of the water level of the Circului Lake, located in the Barbu Vacarescu area has been observed. As this lake is naturally recharged by the upper shallow aquifer of the Bucharest city, a hydrogeological analysis of the local aquifer system behavior has been performed [37]. This area has a water supply system consisting of low-pressure pipes with a length of about 180 km. Similar to the most of the world's cities, the groundwater recharge in Bucharest is mostly made from the water supply network losses and, in a lower percentage, from the interaction with the sewer system. In this modeling study, several urban groundwater modeling scenarios were developed to simulate Circului Lake disturbance [37,38].
The study [38], took into account the drastic reduction ( Figure 2) of the water supply losses due to the improvement of the water distribution network in the study area. During 2014-2019, a decrease of the annual precipitations has been registered [39]. This also contributed to the decrease of the hydraulic head in both shallow (unconfined) and middle (confined) layers. The same study [38], put into evidence the area permanent dewatering systems installed to decrease the hydraulic head in both aquifer strata respectively the unconfined aquifer layer and the confined one. The dewatering systems are reducing the groundwater seepage into the deep foundations of the buildings. Losses from the water supply network in the study area (modified after [38]).
The study [38], put into evidence that the decrease of the area groundwater hydraulic head is a consequence of several hydrological and hydraulic factors influencing the hydrological balance: climate change manifested through reduced precipitation, reduction of water supply losses, decrease of precipitation infiltration, and the presence of alleged dewatering systems. Figure 3 shows the decrease of the water level in Circului Lake between 2006 and 2015. Losses from the water supply network in the study area (modified after [38]).
The study [38], put into evidence that the decrease of the area groundwater hydraulic head is a consequence of several hydrological and hydraulic factors influencing the hydrological balance: climate change manifested through reduced precipitation, reduction of water supply losses, decrease of precipitation infiltration, and the presence of alleged dewatering systems. Figure 3 shows the decrease of the water level in Circului Lake between 2006 and 2015.
The measured hydraulic head decrease trend has been modeled [37], the results of the calibrated local hydrogeological model Figure 3 being representative for the entire local aquifer behavior. It is likely that factors supporting these results, considering the decrease of the groundwater level in the area, were anthropogenic as well as natural causes. As mentioned before, the reduced precipitation is one of the causes, however the strong diminishing of the water supply network losses from 0.42 m 3 /s in 2006 to 0.17 m 3 /s in 2014 represents a stronger trigger [38]. Recent punctual measurements mentioned in this paper, are proving the modeled results. The study [38], put into evidence that the decrease of the area groundwater hydraulic head is a consequence of several hydrological and hydraulic factors influencing the hydrological balance: climate change manifested through reduced precipitation, reduction of water supply losses, decrease of precipitation infiltration, and the presence of alleged dewatering systems. Figure 3 shows the decrease of the water level in Circului Lake between 2006 and 2015. The measured hydraulic head decrease trend has been modeled [37], the results of the calibrated local hydrogeological model Figure 3 being representative for the entire local aquifer behavior. It is likely that factors supporting these results, considering the decrease of the groundwater level in the area, were anthropogenic as well as natural causes. As mentioned before, the reduced precipitation is one of the causes, however the strong diminishing of the water supply As area ground surface displacements have been observed on the basis of PSI previous investigations [32,33], the possible connection to its high groundwater dynamics has been further analyzed. Earlier studies revealed an instability trend for Barbu Vacarescu area [32], with vertical downward movement of 18.6 mm/year in the time period 1992-1999, 11.3 mm/year in the time period 2003-2009, and 13.3 mm/year in the time period 2011-2012 [33]. Looking on the different vertical displacements maps obtained for different time intervals, the instability trend is mainly given by the changes of the persistent scatterers' location indicating the ground displacements and not by the predominance of the subsidence persistent scatterers in the bounded area. This reveals the presence of a factor prone to subsidence for specific triggers.
Materials and Methods
To demonstrate the connection between the urban aquifer system, geological settings, and the vertical ground displacements, several datasets have been analyzed and correlated. The purpose of the analysis was to understand the main geological, hydrogeological, and geotechnical processes characterizing the area within the urban fabric context, the link between them, and their connection to the short-term ground deformation phenomenon. The analysis zone covers a slightly larger surface than the identified subsidence area (Figure 4), including Circului Park green area in southern part and western parts. The western part covers facilities of a sports club called Dinamo Sports Club and a green area named Cinema Park Floreasca.
SAR Data
In the last decades, with the launch of the Synthetic Aperture Radar (SAR), emerged missions providing long time series acquisitions and new techniques for vertical ground displacements on a centimeter-millimeter scale. The first technique was the Interferometric Synthetic Aperture Radar (InSAR) [40,41], followed by Differential InSAR (DInSAR) [17,40,42,43] and the multi-temporal differential InSAR (MTI) [15,[44][45][46][47]. The basic principle of the InSAR techniques, of detecting the subtle changes at the Earth's surface, consists in using two radar acquisitions from approximately the same position, at different time points. The phase difference between the two acquisitions indicates the magnitude of the ground displacement [48] along the line-of-sight (LOS) [49]. LOS represents the line connecting the sensor and the target from the ground [50].
On April 2014, the C-band imaging radar mission Sentinel-1, part of the European Union's Earth Observation Program, Copernicus [51], launched its first satellite, Sentinel-1A, followed by the launch of Sentinel-1B on April 2016. The data from this mission and from the other Sentinel missions are freely and openly available on Copernicus Open Access Hub [52]. For this study, data covering the time span October 2014-April 2018 were used from an ascending and a descending orbit. Table 1 presents the technical details of these acquisitions. vertical ground displacements, several datasets have been analyzed and correlated. The purpose of the analysis was to understand the main geological, hydrogeological, and geotechnical processes characterizing the area within the urban fabric context, the link between them, and their connection to the short-term ground deformation phenomenon. The analysis zone covers a slightly larger surface than the identified subsidence area (Figure 4), including Circului Park green area in southern part and western parts. The western part covers facilities of a sports club called Dinamo Sports Club and a green area named Cinema Park Floreasca.
SAR Data
In the last decades, with the launch of the Synthetic Aperture Radar (SAR), emerged missions providing long time series acquisitions and new techniques for vertical ground displacements on a centimeter-millimeter scale. The first technique was the Interferometric Synthetic Aperture Radar (InSAR) [40,41], followed by Differential InSAR (DInSAR) [17,40,42,43] and the multi-temporal differential InSAR (MTI) [15,[44][45][46][47]. The basic principle of the InSAR techniques, of detecting the subtle changes at the Earth's surface, consists in using two radar acquisitions from approximately Technical details about Interferometric Wide Swatch (IW) acquisition mode are described in [53].
The displacements map for Bucharest city were produced in this study by the Norwegian Ground Motion Service, using the data described in Table 1, by applying the standard PSI technique. This is a multi-temporal InSAR method for vertical ground displacement assessment [48]. Considering a long temporal series of more than 15-20 SAR scenes acquired over the same area, from the "n" SAR scenes, a series of "n-1" interferograms are generated, considering a master scene [54]. A set of phase stable radar targets, named persistent scatterers (PS), with smaller dimensions than the pixel resolution, are identified and are used to indicate the displacement time series [48,50,[54][55][56]. One of the limitations of this method is related to the vegetated areas where, due to decorrelation, only a limited number of PS points can be identified [50].
PSI processing was done on a high-performance computing cluster (HPCC) using software developed by the KSAT-GMS partnership (NORCE-formerly NORUT, PPO.labs and Kongsberg Satellite Services) [57,58]. The used processing chain and software are those used for InSAR Norway (the Public National Norwegian Ground Motion Service, www.insar.no), based on Sentinel-1 data, as described by Dehls et al. (2019) [58]. The digital elevation model (DEM) used to remove the initial topographic phase is the SRTM v4.1. After PSI processing, time series of PS points datasets were generated, indicating the displacements in both ascending LOS and descending LOS. Some of the products generated for each PS point are: the mean displacement velocity, the time evolution of the displacement magnitude of each acquisition with respect to the reference acquisition, and the coherence. For the performed analyzes, mainly the PS points having a coherence value greater than 0.7 were used. Based on the ascending and descending geometry of the two PS points data-sets and the LOS displacements values, the vertical and horizontal (only east-west direction) components of displacements were computed, considering the approach proposed by Dalla Via et Al. (2012) [59] where D e is the horizontal displacement, D v is the vertical displacement, D d is the descending LOS displacement, D a is the ascending LOS displacement, and θ a and θ d are the look angles for both orbits modes. For the combination of ascending and descending PS points, the nearest neighbor vector approach was used [60]. For each PS point from the ascending orbit, the nearest spatial PS point from the descending orbit was assigned. After the join between the two data-sets, the horizontal and vertical displacements were computed using Equation (1). The approach was based on GIS softwares, using tools and functionalities of the ESRI's ArcMap software package and of the free and open source QGIS software.
Development of the Urban Geological Model for the Study Area
A better understanding of the local geology in relationship to the urban infrastructure (anthropogenic layers, deep foundations, tunnels, excavations, and others) could have been achieved only by generating an accurate local geological model for the study zone ( Figure 4). As geological information framework, has been used a data-set coming from an interdisciplinary research project that set up the concept and a first realisation of the hydrogeological model of the entire Bucharest city [61]. Data and knowledge have been acquired with the collaboration of different institutions, companies and experts [61]. The city-scale 3D geological model has been developed, after compiling about 1800 boreholes, by stratigraphical litho-correlation using in-house research software [62]. It focuses the Quaternary sedimentary deposits of the first 70 m below ground level and it was used Remote Sens. 2020, 12, 4054 8 of 24 to identify, delineate, and describe the existing hydrogeological units composing the urban aquifer system. Pumping tests and grain size distribution analysis have been performed to hydraulically characterize these units.
To develop the local Barbu Vacarescu urban geological model (1200 × 1200 m), additional data acquisition steps have been achieved focusing mainly the characterisation of the anthropogenic layer and the 3D delineation of a massive clay shallow strata. Then, a local scale geological interpretation on the basis of the borehole logs description of the old and new identified wells has been performed to generate a local high-accuracy model. The operational steps were the folowing: Clay layer with thicknesses up to 11.6 m.
The extension and the thickness of the anthropogenic material layer are well marked in the geological model and will be presented in the following sections. The anthropogenic stratum 3D geometry has been defined with a high accuracy, by using complementary hydrogeological and geotechnical studies within the geological modeling process [61,62].
Hydrogeological Data Assemblage
Hydraulic head time series were available for the Circului Park green area. The series include data corresponding to the monitoring boreholes for both shallow aquifer and middle aquifer strata. Table 2 mentions these boreholes and the corresponding monitored aquifer strata. Figure 6 illustrates the location of the monitoring boreholes in the study area. The hydraulic-head measurements cover the time period February 2013 to July 2019. A large dataset was available for 2015 by means of 15 measurement campaigns that were conducted during the entire year. All these boreholes are part of the Urban Groundwater Monitoring System (UGMS) of Bucharest city [31]. Table 2. Monitoring boreholes in the Circului Park, Figure 6 shows location.
No.
Borehole Code Aquifer Stratum In the middle of Circului Park, where the artificial lake is located, the boreholes are distributed around it ( Figure 6). The particularity of this lake is that although it is an artificial lake, it is naturally recharged by the upper shallow aquifer.
Beside the monitoring boreholes from Circului Park, located inside Barbu Vacarescu area, one hydraulic head time series was available for a specific borehole monitoring the aquifer (TrEiff), marked in Figure 6. The monitoring period was between March 2011 and May 2016. Most of the measurements were taken between 2011 and 2012. Only one hydraulic head measurement has been taken in 2016. Data from two other boreholes situated in the vicinity of this borehole were used as complementary information.
Data Analysis
Based on the results of the vertical displacements map and on the current developed geological model, an analysis of vertical displacements and their causes has been made for the entire area as well as particularly for some specific sectors of the area. The data for the TrEiff monitoring borehole inside Barbu Vacarescu area described in Section 2.3, was recorded for a particular geotechnical study [63]. Other data collected for that study were included in this study.
As the boreholes and the PS points have different spatial distributions, to correlate the information between the vertical displacements and the hydraulic head, a buffer zone of 100 m around the boreholes was marked to delineate the PS points considered for the analysis.
Spatial analyses and final maps were made using ArcGIS software packages. For the vertical displacement maps, the stability interval was considered between −1.5 mm/year and 1.5 mm/year. The stable PS points are marked in green colour on the map. PS points indicating subsidence are marked in red colour for values higher than −3.5 mm/year and orange for values between −1.5 mm/year and −3.5 mm/year. PS points indicating the presence of positive vertical displacements are marked in dark blue for values higher than 3.5 mm/year and light blue for values between 1.5 mm/year and 3.5 mm/year. The amount of hydrogeological time series datasets, available for the study area, where relatively limited. The lack of data is specific for most of densely built urban areas as continuous data collection in urban settings is difficult and costly. Reasons come from the access in densely populated areas, vandalism, property rights of the land where the borehole is placed, and the needed human and equipment resources. Except for very specific works, where monitoring boreholes are required for certain time periods, the monitoring wells do not last over time. However, urban subsurface data collection, management, and availability are still seldom well planned. Consequently, using it in urban analysis and planning remains a challenge for many European cities.
In the middle of Circului Park, where the artificial lake is located, the boreholes are distributed around it ( Figure 6). The particularity of this lake is that although it is an artificial lake, it is naturally recharged by the upper shallow aquifer.
Beside the monitoring boreholes from Circului Park, located inside Barbu Vacarescu area, one hydraulic head time series was available for a specific borehole monitoring the aquifer (TrEiff), marked in Figure 6. The monitoring period was between March 2011 and May 2016. Most of the measurements were taken between 2011 and 2012. Only one hydraulic head measurement has been taken in 2016. Data from two other boreholes situated in the vicinity of this borehole were used as complementary information.
Data Analysis
Based on the results of the vertical displacements map and on the current developed geological model, an analysis of vertical displacements and their causes has been made for the entire area as well as particularly for some specific sectors of the area. The data for the TrEiff monitoring borehole inside Barbu Vacarescu area described in Section 2.3, was recorded for a particular geotechnical study [63]. Other data collected for that study were included in this study.
As the boreholes and the PS points have different spatial distributions, to correlate the information between the vertical displacements and the hydraulic head, a buffer zone of 100 m around the boreholes was marked to delineate the PS points considered for the analysis.
Spatial analyses and final maps were made using ArcGIS software packages. For the vertical displacement maps, the stability interval was considered between −1.5 mm/year and 1.5 mm/year. The stable PS points are marked in green colour on the map. PS points indicating subsidence are marked in red colour for values higher than −3.5 mm/year and orange for values between −1.5 mm/year and −3.5 mm/year. PS points indicating the presence of positive vertical displacements are marked in dark blue for values higher than 3.5 mm/year and light blue for values between 1.5 mm/year and 3.5 mm/year.
Results
Considering Bucharest city and its neighborhoods, the PS points density for Sentinel-1 data for the used time span 2014-2018 is approximately 1050 PS/km 2 . For the specific study area, which was used for the generation of the geological model, the PS density from the Sentinel-1 data increases to 2000 PS/km 2 . The obtained displacements reach values between −23,7 mm/yr and +33 mm/yr, for the 131 Ascending (131A) orbit time series acquisitions, and values between −21 mm/yr and +23.3 mm/yr for the LOS 109 Descending (109D) orbit time series acquisitions. The vertical displacements computed by using data from both ascending and descending orbits reach values between −13.05 mm/yr and +17.24 mm/yr, with a mean value of −0.27 mm/yr and a standard deviation of ± 0.91 mm/yr. The datasets from the two orbits, 131A and 109D, were self-consistent.
Bucharest City Vertical Displacements Map
The vertical displacements map of Bucharest, obtained using Sentinel-1 data, could reveal different trends at city scale. Figure 7 presents the vertical displacement trends for Bucharest city and the subsidence areas which were identified in this study. Most areas show no vertical displacements. There are several areas indicating a vertical downward movement (Figure 7). There might be areas with inconsistent trends; hence, a longer temporal series is needed for an accurate interpretation.
Within the PANGEO project [32], the areas of instability for Bucharest city were identified on the basis of PSI velocity maps generated from ERS1-ERS2 and Envisat ASAR data for the time period 1992-2009. Some of these areas showed subsidence trends. Later, these instability areas were monitored in the SYRIS project, between 2011-2012 [33], and the velocity maps were enriched with a dataset from the TerraSAR-X sensor.
The current analysis revealed the existence of several areas showing the same subsidence trend as in the previous studies. The Barbu Vacarescu area is one of the areas where the subsidence trend seems to be continuous, even though the maximum annual velocities are not very high and areas with stability trend are also included (Figure 7).
Besides these above-mentioned zones, there are some areas where new buildings and underground infrastructures are currently under construction, or construction works were finished just before the beginning of the monitoring period 2014-2018. In the previous studies, these were stable and now they are affected by subsidence. A more detailed analysis of these areas, (including the building development) shows that some of them were previously stable, as observed from the PSI data. For the others, these previous studied areas were covered by vegetation making it impossible the generation of the PS points. Hence, these vegetated areas might have had a subsidence trend, generated by the beginning of the construction activity, or there might be an older problem which could not be revealed due to the previous land cover and vegetation conditions. Based on the displacements map, a detailed analysis was made for the Barbu Vacarescu area, as shown in Figure 8. The main reasons are related to the presence of the persistent subsidence through all time periods of the available SAR data, since 1992 [32,33] and the availability of the hydrogeological, lithological, and geotechnical data which characterize this area. Considering the connections between the different processes and phenomena characterizing the subsurface, some vertical displacement patterns are highlighted.
The Barbu Vacarescu Urban Area
Barbu Vacarescu is one of the areas where subsidence has been revealed by all the existing SAR time series, since 1992 [32,33]. The distribution of the PS points can be seen in Figure 8. Velocity values for the selected area with the generated geological model are between −10.72 mm/yr (red points) and +3.88 mm/yr (blue points) with a standard deviation of ±1.11 mm/yr. Based on the displacements map, a detailed analysis was made for the Barbu Vacarescu area, as shown in Figure 8. The main reasons are related to the presence of the persistent subsidence through all time periods of the available SAR data, since 1992 [32,33] and the availability of the hydrogeological, lithological, and geotechnical data which characterize this area. Considering the connections between the different processes and phenomena characterizing the subsurface, some vertical displacement patterns are highlighted. Remote Sens. 2020, 12, x FOR PEER REVIEW 12 of 24 (a) (b) Figure 8. PSI in Barbu Vacarescu area (a) Red limit represents the identified subsidence area in the PANGEO project [26]; blue limit represents the subsidence area identified in this study. (b) Red limit represents the geological model area; blue limit represents the subsidence area identified in this study. Map generated in Esri ® ArcMap™ 10.3. Base map source: ESRI World Imagery.
Subsidence Analysis of the Anthropogenic and Geological Deposits in the Barbu Vacarescu Area
The studied area is extensively covered by a deep anthropogenic stratum as it is illustrated in Figure 9. This urban soil layer is largely composed by urban waste due to the presence of a former quarry exploitation for aggregate construction material that has been later filled by other types of anthropogenic materials [32]. The former quarry exploitation was filled gradually between 1950 and 1977 [64]. A period of 10 to 15 years is the indicative consolidation time for the anthropogenic material layer having clay layer base strata [65]. Consolidation time depends also on the anthropogenic material layer thickness [65,66]. Unless the consolidation time ended and the area is considered stable, changes of the urban aquifer system dynamics or modifications of the stress state due to the building loads, foundations, tunnels, or other infrastructure elements, can induce ground displacements [66].
It can be observed that in the left side of the Barbu Vacarescu Street, the anthropogenic material layer is missing or on a small zone is very thick. The vertical ground velocity map shows this area as being a stable one. Regarding the area from the right side of the street, which also represents our area of interest, the anthropogenic material layer has thicknesses from 5 m to 11.7 m. It also fits with the presence of the PS points indicating subsidence up to −10.72 mm/yr. The southern limit of the interest area is Lacul Tei Boulevard, bordering the Circului Park (Figure 9a). In this green area, due to the presence of vegetation, only a few PSs points could be generated. It can be assumed that the park area has the same subsidence trend as the entire studied area, as the anthropogenic material layer is present in the park's subsurface. [26]; blue limit represents the subsidence area identified in this study. (b) Red limit represents the geological model area; blue limit represents the subsidence area identified in this study. Map generated in Esri ® ArcMap™ 10.3. Base map source: ESRI World Imagery.
The Barbu Vacarescu Urban Area
Barbu Vacarescu is one of the areas where subsidence has been revealed by all the existing SAR time series, since 1992 [32,33]. The distribution of the PS points can be seen in Figure 8. Velocity values for the selected area with the generated geological model are between −10.72 mm/yr (red points) and +3.88 mm/yr (blue points) with a standard deviation of ±1.11 mm/yr.
Subsidence Analysis of the Anthropogenic and Geological Deposits in the Barbu Vacarescu Area
The studied area is extensively covered by a deep anthropogenic stratum as it is illustrated in Figure 9. This urban soil layer is largely composed by urban waste due to the presence of a former quarry exploitation for aggregate construction material that has been later filled by other types of anthropogenic materials [32]. The former quarry exploitation was filled gradually between 1950 and 1977 [64]. A period of 10 to 15 years is the indicative consolidation time for the anthropogenic material layer having clay layer base strata [65]. Consolidation time depends also on the anthropogenic material layer thickness [65,66]. Unless the consolidation time ended and the area is considered stable, changes of the urban aquifer system dynamics or modifications of the stress state due to the building loads, foundations, tunnels, or other infrastructure elements, can induce ground displacements [66].
It can be observed that in the left side of the Barbu Vacarescu Street, the anthropogenic material layer is missing or on a small zone is very thick. The vertical ground velocity map shows this area as being a stable one. Regarding the area from the right side of the street, which also represents our area of interest, the anthropogenic material layer has thicknesses from 5 m to 11.7 m. It also fits with the presence of the PS points indicating subsidence up to −10.72 mm/yr. The southern limit of the interest area is Lacul Tei Boulevard, bordering the Circului Park (Figure 9a). In this green area, due to the presence of vegetation, only a few PSs points could be generated. It can be assumed that the park area has the same subsidence trend as the entire studied area, as the anthropogenic material layer is present in the park's subsurface. Remote Sens. 2020, 12, x FOR PEER REVIEW 13 of 24 (a) (b) The high heterogeneity of the urban antropogenic material is highlighted by Figure 10 indicating that the highest level of subsidence is over an area with thick anthropogenic material, however much of that area appears to be stable. The high heterogeneity of the urban antropogenic material is highlighted by Figure 10 indicating that the highest level of subsidence is over an area with thick anthropogenic material, however much of that area appears to be stable.
Relationship between Ground Surface Displacements and the Urban Aquifer System Dynamics in the Barbu Vacarescu Area
Hydraulic-head measurements corresponding to Circului Park have been used in this study in conjunction with the vertical displacements to identify the possible connection between the urban aquifer system dynamics and the terrain surface movements. Technical details on the boreholes are presented in Section 2.3. The main steps to analyse the hydraulic head data against the vertical displacements of the Circului Park are further described. A 100 m buffer zone for the PS data was generated around each existing borehole, and a spatial query operation has been applied to identify the corresponding PS points. The buffer spatial query has been used in order to simplify the data representation. If PS points were found inside this buffer zone, the specific borehole and the corresponding PS points were included in the analysis. As for example for the boreholes PC1LC and FC1LC no PS points closer than 100 m could be found, due to fact that the area is covered with vegetation and therefore no PS points were obtained. Consequently, these boreholes were not included in the analysis. After verifying the available hydraulic head temporal series of F15C, data from this monitoring borehole could not be considered due to existing inconsistencies.
A further spatial analysis was based on the data from the boreholes FM1LC, FM2LC, and F14M. FM1LC and FM2LC are placed in the same location, respectively monitoring the confined middle aquifer and the shallow aquifer. These two points are part of the Bucharest city groundwater monitoring system as reported by Gaitanaru et al. (2017) [31]. A double tube monitoring well was designed to measure the hydraulic head for both the upper (unconfined) and middle (unconfined) aquifer strata.
Only two PS points are situated at a distance less than 100 m for FM1LC and FM2LC. Figure 11 illustrates them, their PS codes being 333,807 and 333,808 respectively.
For the F14M monitoring borehole, the analysis was made for the middle-confined aquifer strata, as more than 60 PS points are within the 100 m buffer zone. The selected PS points can be seen in Figure 12.
Relationship between Ground Surface Displacements and the Urban Aquifer System Dynamics in the Barbu Vacarescu Area
Hydraulic-head measurements corresponding to Circului Park have been used in this study in conjunction with the vertical displacements to identify the possible connection between the urban aquifer system dynamics and the terrain surface movements. Technical details on the boreholes are presented in Section 2.3. The main steps to analyse the hydraulic head data against the vertical displacements of the Circului Park are further described. A 100 m buffer zone for the PS data was generated around each existing borehole, and a spatial query operation has been applied to identify the corresponding PS points. The buffer spatial query has been used in order to simplify the data representation. If PS points were found inside this buffer zone, the specific borehole and the corresponding PS points were included in the analysis. As for example for the boreholes PC1LC and FC1LC no PS points closer than 100 m could be found, due to fact that the area is covered with vegetation and therefore no PS points were obtained. Consequently, these boreholes were not included in the analysis. After verifying the available hydraulic head temporal series of F15C, data from this monitoring borehole could not be considered due to existing inconsistencies.
A further spatial analysis was based on the data from the boreholes FM1LC, FM2LC, and F14M. FM1LC and FM2LC are placed in the same location, respectively monitoring the confined middle aquifer and the shallow aquifer. These two points are part of the Bucharest city groundwater monitoring system as reported by Gaitanaru et al. (2017) [31]. A double tube monitoring well was designed to measure the hydraulic head for both the upper (unconfined) and middle (unconfined) aquifer strata.
Only two PS points are situated at a distance less than 100 m for FM1LC and FM2LC. Figure 11 illustrates them, their PS codes being 333,807 and 333,808 respectively. In Figure 12, the blue line illustrates the groundwater hydraulic head variation for the FM1LC (middle confined aquifer strata) and FM2LC boreholes (shallow unconfined strata) and a correlation with the vertical ground movement of PS data points (green lines). For borehole FM1LC, two time periods stand out respectively between June 2015 to September 2015 with a decrease of the hydraulic head and between December 2017 and March 2018 with an increase of the hydraulic head ( Figure 12). The mean annual velocity for the average between the two PS points is −0.26 mm ± 1.71 mm. For borehole FM2LC, the decrease in hydraulic head between March 2016 and November 2017 show a small change in vertical ground movement of PS data. However, the rapid increase in hydraulic head from January to March 2018 corresponds to a rapid change in the PS data ( Figure 12). As average, these changes in ground movement (PS data) are small (millimetres). The groundwater hydraulic head, registered in the two boreholes, corresponds to the general behavior of the Bucharest aquifer system. The middle confined aquifer shows a little higher hydraulic head than the shallow unconfined one. It can be also observed that the shallow strata show a more intense dynamic due to its generally higher hydraulic conductivity, its recharge from precipitation as well as its closer hydraulic interaction with the surface water and with the city infrastructure elements. In several places, where those two aquifer strata communicate naturally or artificially, the groundwater hydraulic head values are undistinguishable [53].
For both boreholes penetrating respectively the shallow aquifer (FM2LC) and the confined middle aquifer (FM1LC), the hydraulic head has a descending trend while the area neighboring the PS points show a slightly ascending trend. However, as the annual value of the vertical displacements for the same time period is less than −1.5 mm, for these two mentioned PS points the area can be interpreted as stable. Consequently, Figure 12 does not show a correlation between For the F14M monitoring borehole, the analysis was made for the middle-confined aquifer strata, as more than 60 PS points are within the 100 m buffer zone. The selected PS points can be seen in Figure 12.
In Figure 12, the blue line illustrates the groundwater hydraulic head variation for the FM1LC (middle confined aquifer strata) and FM2LC boreholes (shallow unconfined strata) and a correlation with the vertical ground movement of PS data points (green lines). For borehole FM1LC, two time periods stand out respectively between June 2015 to September 2015 with a decrease of the hydraulic head and between December 2017 and March 2018 with an increase of the hydraulic head ( Figure 12). The mean annual velocity for the average between the two PS points is −0.26 mm ± 1.71 mm. For borehole FM2LC, the decrease in hydraulic head between March 2016 and November 2017 show a small change in vertical ground movement of PS data. However, the rapid increase in hydraulic head from January to March 2018 corresponds to a rapid change in the PS data ( Figure 12). As average, these changes in ground movement (PS data) are small (millimetres). The groundwater hydraulic head, registered in the two boreholes, corresponds to the general behavior of the Bucharest aquifer system. The middle confined aquifer shows a little higher hydraulic head than the shallow unconfined one. It can be also observed that the shallow strata show a more intense dynamic due to its generally higher hydraulic conductivity, its recharge from precipitation as well as its closer hydraulic interaction with the surface water and with the city infrastructure elements. In several places, where those two aquifer strata communicate naturally or artificially, the groundwater hydraulic head values are undistinguishable [53].
For both boreholes penetrating respectively the shallow aquifer (FM2LC) and the confined middle aquifer (FM1LC), the hydraulic head has a descending trend while the area neighboring the PS points show a slightly ascending trend. However, as the annual value of the vertical displacements for the same time period is less than −1.5 mm, for these two mentioned PS points the area can be interpreted as stable. Consequently, Figure 12 does not show a correlation between vertical displacements and the hydraulic head variation. On the contrary, for F14M borehole penetrating the middle confined aquifer, Figure 13 shows the correspondence of the two types of data. Here, a decrease in hydraulic head corresponds to vertical negative ground displacements (subsidence). This strengthens the hypothesis that the Circului Park area, or parts of it, has the same behavior as the study area situated in the north side of the park. A good correspondence is registered between the hydraulic head variation of FM2LC borehole in the shallow aquifer and the water level in Circului Lake which has a direct connection with the shallow aquifer and is representative for the area aquifer hydraulic head trend since 2006.
Study Case of a Building Situated in the Barbu Vacarescu Area
In 2011, a stability-geotechnical expertise has been made for a building situated inside the study area Barbu Vacarescu [63], triggered by signs of instability. Degradations occurred after the On the contrary, for F14M borehole penetrating the middle confined aquifer, Figure 13 shows the correspondence of the two types of data. Here, a decrease in hydraulic head corresponds to vertical negative ground displacements (subsidence). This strengthens the hypothesis that the Circului Park area, or parts of it, has the same behavior as the study area situated in the north side of the park. On the contrary, for F14M borehole penetrating the middle confined aquifer, Figure 13 shows the correspondence of the two types of data. Here, a decrease in hydraulic head corresponds to vertical negative ground displacements (subsidence). This strengthens the hypothesis that the Circului Park area, or parts of it, has the same behavior as the study area situated in the north side of the park. A good correspondence is registered between the hydraulic head variation of FM2LC borehole in the shallow aquifer and the water level in Circului Lake which has a direct connection with the shallow aquifer and is representative for the area aquifer hydraulic head trend since 2006.
Study Case of a Building Situated in the Barbu Vacarescu Area
In 2011, a stability-geotechnical expertise has been made for a building situated inside the study area Barbu Vacarescu [63], triggered by signs of instability. Degradations occurred after the beginning of construction works in a neighboring property located in the north eastern area. The A good correspondence is registered between the hydraulic head variation of FM2LC borehole in the shallow aquifer and the water level in Circului Lake which has a direct connection with the shallow aquifer and is representative for the area aquifer hydraulic head trend since 2006.
Study Case of a Building Situated in the Barbu Vacarescu Area
In 2011, a stability-geotechnical expertise has been made for a building situated inside the study area Barbu Vacarescu [63], triggered by signs of instability. Degradations occurred after the beginning of construction works in a neighboring property located in the north eastern area. The construction works involved modifications of an existing building ( Figure 14).
The geological stratification mapped in the construction site is in accordance with the geological model ( Figure 9). Hence, the top layer is an anthropogenic material layer with thicknesses of approximately 9 m. The anthropogenic material is a mixture of silts, clay, silty clay, biodegradable waste, and demolition waste. This urban soil stratum is very compressible and has weak shear-strength parameters [63]. The mechanical properties of this anthropogenic layer induce a difficult process of building foundation development. This stratum is very sensitive to static and especially to dynamic conditions (e.g., vibrations, earthquakes). The anthropogenic material stratum lays on a macro granular alluvium package consisting of sand and gravel [63].
The monitoring borehole TrEiff, described in Section 2.3, was drilled close to the boundary between the two properties, near the new building.
In the case of the displacements map generated from 109D orbit scenes, several PS points were available inside the studied zone. These points are marked in Figure 14. For the 131A displacements map, no PS point was identified inside the studied zone. Figure 14. Study case on a building situated in the Barbu Vacarescu area. Blue color line limit marks the area of the stability-geotechnical expertise. Green color line is the limit of the affected building. Purple color line is the limit of the building under construction. PSI velocity map is generated from Sentinel-1 109 Descending orbit data. Codes of PS points situated inside the studied area are marked on the map. Map generated in Esri ® ArcMap™ 10.3. Base map source: ESRI World Imagery.
The main difference between the two buildings, the one showing instability and the one under-construction, is related to the foundation system. The affected building from the Turnul Eiffel Street has a slab type foundation, which is a shallow type foundation, located in the anthropogenic material stratum. The building from the neighboring property (under construction) has a pile type foundation. Conceptually, the slab type foundation is floating into the anthropogenic material stratum and the general stability of the building is assured. Of course, the displacements of the ground foundation should be limited. In August 2010, the construction works started on the second mentioned building (located on Kepler Street) with the execution of the pile foundation. The pile foundation is a deep foundation type, used to transfer the loads of the building through the anthropogenic material layer onto a deeper, stronger, more compact, and less compressible layer The geological stratification mapped in the construction site is in accordance with the geological model ( Figure 9). Hence, the top layer is an anthropogenic material layer with thicknesses of approximately 9 m. The anthropogenic material is a mixture of silts, clay, silty clay, biodegradable waste, and demolition waste. This urban soil stratum is very compressible and has weak shear-strength parameters [63]. The mechanical properties of this anthropogenic layer induce a difficult process of building foundation development. This stratum is very sensitive to static and especially to dynamic conditions (e.g., vibrations, earthquakes). The anthropogenic material stratum lays on a macro granular alluvium package consisting of sand and gravel [63].
The monitoring borehole TrEiff, described in Section 2.3, was drilled close to the boundary between the two properties, near the new building.
In the case of the displacements map generated from 109D orbit scenes, several PS points were available inside the studied zone. These points are marked in Figure 14. For the 131A displacements map, no PS point was identified inside the studied zone.
The main difference between the two buildings, the one showing instability and the one under-construction, is related to the foundation system. The affected building from the Turnul Eiffel Street has a slab type foundation, which is a shallow type foundation, located in the anthropogenic material stratum. The building from the neighboring property (under construction) has a pile type foundation. Conceptually, the slab type foundation is floating into the anthropogenic material stratum and the general stability of the building is assured. Of course, the displacements of the ground foundation should be limited. In August 2010, the construction works started on the second mentioned building (located on Kepler Street) with the execution of the pile foundation. The pile foundation is a deep foundation type, used to transfer the loads of the building through the anthropogenic material layer onto a deeper, stronger, more compact, and less compressible layer [67]. This process induced ground deformations causing subsequently an instability effect on the neighboring buildings. Due to this, the construction works were stopped several times by the authorities, first in September 2010 and secondly in February 2011. After the second interruption, for a long period, the works did not continue.
In 2011, a borehole located on the street with the building under construction intercepted seepage water at 4.6 m. In another borehole on the street of the affected building, seepage water was intercepted at 2.5 m. The pipes leakage modified considerably the local hydrogeological conditions in the studied area. The shallow aquifer, located at depths of about 9-10 m, shows continous variations in hydraulic head with an increase from 2011 to mid of 2012 and then a decrease until June 2016. This was intercepted by the borehole drilled close to the boundary between the two properties ( Figure 14). The alternation in hydraulic head is clearly affecting the ground stability as indicated by PS data in Figure 15.
Remote Sens. 2020, 12, x FOR PEER REVIEW 18 of 24 [67]. This process induced ground deformations causing subsequently an instability effect on the neighboring buildings. Due to this, the construction works were stopped several times by the authorities, first in September 2010 and secondly in February 2011. After the second interruption, for a long period, the works did not continue.
In 2011, a borehole located on the street with the building under construction intercepted seepage water at 4.6 m. In another borehole on the street of the affected building, seepage water was intercepted at 2.5 m. The pipes leakage modified considerably the local hydrogeological conditions in the studied area. The shallow aquifer, located at depths of about 9-10 m, shows continous variations in hydraulic head with an increase from 2011 to mid of 2012 and then a decrease until June 2016. This was intercepted by the borehole drilled close to the boundary between the two properties ( Figure 14). The alternation in hydraulic head is clearly affecting the ground stability as indicated by PS data in Figure 15.
The graph of Figure 15 shows the ground surface subsidence trend that occurs in the same period of time when the hydraulic head decreasing trend is detected. Analysis of the combined datasets hydraulic head and the PS points time series is shown in Figure 15. As it can be observed, the relationship between displacements and hydraulic head variations is based on limited measurements, however the area hydraulic head evolution follows the general decreasing trend for the period 2006-2015, modeled and illustrated in Figure 3 [37]. The modeled aquifer hydraulic head decreasing trend is confirmed by the area wells as well as by the measurement took in TrEiff borehole on May 2016.
Discussion and Conclusions
One of the main advantages of the SAR techniques consists in detecting ground displacements and so improving considerably large area monitoring capability. This allows identifying specific areas affected by vertical displacements which were unknown before applying SAR monitoring and shows the evolution of areas where subsidence or uplift could occur. This is the case for the Barbu Vacarescu area, which was identified as having a subsidence trend in the SAR time series analyzed since 1992. When analyzing the PSI vertical displacements maps between 1992 and 2018, it can be clearly observed that the instability trend of this area is mainly shown by the changes of the location of the PS points indicating ground displacements and not by the predominance of the subsidence affected areas in the bounded area. As the common characteristic for the entire area is the presence of the stratum made of anthropogenic constructions waste, the particularities being given by local The graph of Figure 15 shows the ground surface subsidence trend that occurs in the same period of time when the hydraulic head decreasing trend is detected.
Hydraulic head measurements were available for 2011-2016, while displacements time series are available for 2014-2018 time period. Although there is an overlapping period of two years of common measurements, there is only one measurement for hydraulic head in the period August 2012-May 2016.
Analysis of the combined datasets hydraulic head and the PS points time series is shown in Figure 15. As it can be observed, the relationship between displacements and hydraulic head variations is based on limited measurements, however the area hydraulic head evolution follows the general decreasing trend for the period 2006-2015, modeled and illustrated in Figure 3 [37]. The modeled aquifer hydraulic head decreasing trend is confirmed by the area wells as well as by the measurement took in TrEiff borehole on May 2016.
Discussion and Conclusions
One of the main advantages of the SAR techniques consists in detecting ground displacements and so improving considerably large area monitoring capability. This allows identifying specific areas affected by vertical displacements which were unknown before applying SAR monitoring and shows the evolution of areas where subsidence or uplift could occur. This is the case for the Barbu Vacarescu area, which was identified as having a subsidence trend in the SAR time series analyzed since 1992.
When analyzing the PSI vertical displacements maps between 1992 and 2018, it can be clearly observed that the instability trend of this area is mainly shown by the changes of the location of the PS points indicating ground displacements and not by the predominance of the subsidence affected areas in the bounded area. As the common characteristic for the entire area is the presence of the stratum made of anthropogenic constructions waste, the particularities being given by local geotechnical differences or by local groundwater dynamics, it can be concluded that this PS points displacement pattern put into evidence this type of urban ground layer.
For the SAR data used in this study, when looking back at the previous European C-band SAR missions, ERS 1&2 and ENVISAT ASAR, the technological improvements of the Sentinel-1 mission are impressive. The Sentinel-1 comprises a better coverage and a revisit time of 12 days for one satellite, and 6 days when considering both satellites of the mission. This allows a more complex and complete analysis considering the number of available PS points for the same area and their registered variations at a finer rate.
As a limitation of this monitoring technique, most of the areas with continuous ground displacements trends are affected by these movements for a long period of time, many going back to the industrialization period of the 1970s-1980s years. For SAR temporal series, the data availability is restricted to 1992 until present. This makes a historical analysis of the vertical displacements very difficult, as other monitoring methods were used only if it was a high interest for a specific area.
The ground movement recorded by radar satellites and the InSAR techniques does not display the cause but allows highlighting different geological, hydrogeological, or geotechnical problems that influence the ground surface and subsurface. Hence, considering the correlation between the hydraulic head data and the PSI vertical displacements, some aspects can be highlighted for the Circului Park area. Displacements dissimilarity trend observed for the PS points correlated to boreholes FM1LC/FM2LC and F14M might be due to the differences of the land use of the north-eastern (close to FM1LC and FM2LC boreholes) and of the south-eastern (close to F14M borehole) vicinities of the Circului Park. The north-eastern side of Circului Park, is a residential area built in the period of 1980s-1990s. The consolidation process of the anthropogenic material layer ended and the ground is stabilized, as no other changes of the stress state occurred meantime. On the other hand, the south-eastern side is a more dynamic area, with new buildings made both during 2000-2010 time period as well as during 2013-2016. The dewatering systems needed for the building foundation implementation process, the presence of the anthropogenic material layer, the stress state due to the buildings load, may led to compaction of the subsurface and to vertical displacements.
Ground weakening that occurred for the buildings analyzed in the Barbu Vacarescu area, has a combination of sources. The solutions related to foundation techniques, the presence of the urban anthropogenic material stratum, the construction activity, the seepage from the losses of the water supply system which existed before the start of construction works, the variations of the aquifer hydraulic head, have led to ground displacements and consequently to the degradation of the building described in the second study case (Turnul Eiffel Street). Solving the leakage problems from the water supply system and introducing a drainage control conducting to the hydraulic heads steadiness remain the zone main stabilization solutions. Considering the monitoring data and the performed technical analyses, it was concluded that there are still continuous small deformations of the ground, due to milimetrical settlements and uplifts induced by the hydraulic head variations. In the case of cyclicality, negative effects could occur on buildings with a slab type foundation.
Expanding our analysis of the regional patterns of subsidence is the compulsory subsequent step of this study. Focussing on small areas is usually effective, however in the densely populated urban environment it is rather complicated to develop efficient groundwater monitoring systems. In such environments the hydraulic data will always be inadequate and they cannot be quantitatively compared with the InSAR data. The bulk of this study is based upon only a very few PS, out of more than a million that were produced for the entire urban area of Bucharest city. The lack of data and long time-series makes it difficult to obtain a quantitative analysis. Therefore, a general correlation is highlighted.
The high heterogeneity of the urban antropogenic material can affect seriously the ground stability in urban areas. In our case study (Figure 10), one of the highest levels of subsidence is located in an area with thick anthropogenic material, even a considerable part of that area appears to be stable. This emphasizes the need of developing more accurate spatial models to manage the anthropogenic strata information. From a construction and urban hydrogeology point of view, it can be concluded that the presence of the thick anthropogenic material layer and its connections with the shallow aquifer have to be very well considered when new construction projects are designed as the water pumping from deeper aquifer units and other man-made factors may induce local area destabilization.
Subsidence in cities, such as in Bucharest, may have multiple causes. However, changes in hydraulic head caused by pipe leakage, the behavior of the anthropogenic construction debris stratum, or the severe diminishing of water percolation due to the urban fabric extension, play an important role. These phenomena contribute directly to urban groundwater dynamics and consequently to ground stabilization. A better understanding of the linked complex geological and hydrogeological processes relating to the urban water cycle and ground subsidence will provide improvements on urban subsurface planning and urban development.
A future recommendation is to use complex urban monitoring stations composed of a corner reflector and surface sensors as well as subsurface components comprising downhole equipments and sensors. This monitoring device could improve urban displacements data achievement procedures. The corner reflector providing high intensity InSAR data, supports the acquisition of remote sense deformation time series. At each spatial location where the station is situated, a large range of other relevant parameters are being recorded using in-situ techniques. Regrouped in the same urban monitoring station centered arround an inclinometric tube, the facilities are able to measure horizontal and vertical subsurface ground displacements, groundwater hydraulic heads, and other groundwater physical and chemical parameters [68]. | 15,108.8 | 2020-12-11T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Geography"
] |
A Simple HPLC/DAD Method Validation for the Quantification of Malondialdehyde in Rodent’s Brain
In the present study, a HPLC/DAD method was set up to allow for the determination and quantification of malondialdehyde (MDA) in the brain of rodents (rats). Chromatographic separation was achieved on Supelcosil LC-18 (3 μm) SUPELCO Column 3.3 cm × 4.6 mm and Supelco Column Saver 0.5 μm filter by using a mobile phase acetonitrile (A) and phosphate buffer (20 mM, pH = 6) (B). Isocratic elution was 14% for (A) and 86% for (B). The injection volume (loop mode) was 100 μL with an analysis time of 1.5 min. Flow rate was set at 1 mL/min. The eluted compound was detected at 532 nm by a DAD detector by keeping the column oven at room temperature. The results indicated that the method has good linearity in the range of 0.2–20 μg/g. Both intra- and inter-day precision, expressed as RSD, were ≤15% and the accuracies ranged between ±15%. The lower limit of quantification (LLOQ), stability, and robustness were evaluated and satisfied the validation criteria. The method was successfully applied in a study of chronic toxicology following different treatment regimens with haloperidol and metformin.
Introduction
Oxidative stress is currently one of the most intensely studied processes. It appears when reactive oxygen species (ROS) and reactive nitrogen species (RNS) production exceeds the neutralizing capacity of endogenous antioxidant systems [1,2]. Mitochondrial impairment constitutes a crucial and critical factor in the aging process and development of age-related disorders and is generally accepted. This impairment occurs due to the loss of oxidative phosphorylation capacity and oxygen radical leakage, with the subsequent apparition of ROS. These reactive species can constitute either physiological signals (low levels) or toxic species (high levels) that affect cellular integrity, depending on the concentration [3,4]. One of the most studied reactions involved in the oxidative stress is lipid peroxidation, thus the increased reactivity of these ROS and RNS makes all biological structures susceptible to oxidative processes, particularly the brain [5], due to its high lipid content [6]. Lipid peroxidation reactions affect the cellular membrane, degrading it, and subsequently, degradation products are obtained, i.e., malondialdehyde (MDA) [7,8], which can propagate and amplify the oxidative lesions. Compared to other degradation compounds, MDA is obtained in high quantities and is considered a marker of oxidative stress [9,10]. MDA's property of cross-linking other molecules contributes to its toxic potential, at the same time, the mutagenic and carcinogenic effects are attributed to the chemical bonds that MDA forms with the nitrogen bases from the nucleic acid structure [11].
Increased levels of MDA in the brain have been observed in central nervous system (CNS) disorders [3,12,13], such as Alzheimer's disease [14], Parkinson's disease [15], or in cases of consumption or abuse of drugs [16]. A widely known and used method of MDA assay is the spectrophotometric method in which a pink colored complex is obtained (MDA-TBA) from the MDA reaction with thiobarbituric acid (TBA) under high temperature and low pH, as shown in Figure 1.
Molecules 2021, 26, 5066 2 of 10 peroxidation, thus the increased reactivity of these ROS and RNS makes all biological structures susceptible to oxidative processes, particularly the brain [5], due to its high lipid content [6]. Lipid peroxidation reactions affect the cellular membrane, degrading it, and subsequently, degradation products are obtained, i.e., malondialdehyde (MDA) [7,8], which can propagate and amplify the oxidative lesions. Compared to other degradation compounds, MDA is obtained in high quantities and is considered a marker of oxidative stress [9,10]. MDA's property of cross-linking other molecules contributes to its toxic potential, at the same time, the mutagenic and carcinogenic effects are attributed to the chemical bonds that MDA forms with the nitrogen bases from the nucleic acid structure [11]. Increased levels of MDA in the brain have been observed in central nervous system (CNS) disorders [3,12,13], such as Alzheimer's disease [14], Parkinson's disease [15], or in cases of consumption or abuse of drugs [16]. A widely known and used method of MDA assay is the spectrophotometric method in which a pink colored complex is obtained (MDA-TBA) from the MDA reaction with thiobarbituric acid (TBA) under high temperature and low pH, as shown in Figure 1. Despite attempts of optimization of this method, spectrophotometric determination still has limitations due to TBA that can interact with other compounds such as carbohydrates, amino acids, and certain pigments, resulting in higher values [17,18] that are undesirable, especially in human tissue determinations in which more specific methods are required [19]. Thus, in an attempt to avoid the bias of the TBA interaction with other molecules and with the purpose of obtaining MDA values as close as possible to the real values, the present paper aims to validate a method of identifying MDA in the brain. Separation of the analytes was obtained using a high-performance liquid chromatographic (HPLC) system coupled with a diode array detector (HPLC/DAD). This technique overcomes the limitations of the spectrophotometric method in terms of specificity and sensitivity, being a simple, fast, and cost-effective method.
Optimization of Sample Preparation
In order to be as accurate as possible, three methods of sample preparation were tested. In the first method, we used an automatic homogenizer IKA Ultra-Turrax Tube Drive. For the second method, manual trituration of the sample in mortar with pestle was used in the presence of silicon dioxide. The third method combined the above-mentioned methods. Comparing the areas of the peaks obtained by the three processing methods, the following values were obtained: the mean of the three obtained areas under the curve (AUC), 65538.33; standard deviation (SD), 7584.11; and relative standard deviation Despite attempts of optimization of this method, spectrophotometric determination still has limitations due to TBA that can interact with other compounds such as carbohydrates, amino acids, and certain pigments, resulting in higher values [17,18] that are undesirable, especially in human tissue determinations in which more specific methods are required [19]. Thus, in an attempt to avoid the bias of the TBA interaction with other molecules and with the purpose of obtaining MDA values as close as possible to the real values, the present paper aims to validate a method of identifying MDA in the brain. Separation of the analytes was obtained using a high-performance liquid chromatographic (HPLC) system coupled with a diode array detector (HPLC/DAD). This technique overcomes the limitations of the spectrophotometric method in terms of specificity and sensitivity, being a simple, fast, and cost-effective method.
Optimization of Sample Preparation
In order to be as accurate as possible, three methods of sample preparation were tested. In the first method, we used an automatic homogenizer IKA Ultra-Turrax Tube Drive. For the second method, manual trituration of the sample in mortar with pestle was used in the presence of silicon dioxide. The third method combined the above-mentioned methods. Comparing the areas of the peaks obtained by the three processing methods, the following values were obtained: the mean of the three obtained areas under the curve (AUC), 65538.33; standard deviation (SD), 7584.11; and relative standard deviation (RSD%), 11.57. No major differences were observed between the three methods used, in terms of areas. In order to avoid unnecessary prolongation of processing, the automated method was used with the IKA Ultra-Turrax Tube Drive.
Chromatographic Conditions
For MDA analysis, chromatographic separation was performed using a mobile phase acetonitrile (A) and phosphate buffer (20 mM, pH = 6) (B). Isocratic elution was 14% for (A), 86% for (B). The injection volume (loop mode) was 100 µL with an analysis time of 1.5 min. Flow rate was set at 1 mL/min, eluent was monitored with a DAD, and the best chromatogram achieved was at 532 nm, using Supelcosil LC-18 (3 µm) SUPELCO Column 3.3 cm × 4.6 mm and Supelco Column Saver 0.5 µm filter [20].
Linearity and LLOQ
The linearity of the method was verified through the analytical curve using six levels of concentrations, evaluated in triplicates. The analytical curve was described by the linear equation: y = 21,749x − 8928.3 while the regression coefficient was r 2 = 0.998, as illustrated in Figure 2, where y is the analyte peak area ratio and x is the concentration (µg/g) as shown in Table 1.
Molecules 2021, 26, 5066 3 of 10 (RSD%), 11.57. No major differences were observed between the three methods used, in terms of areas. In order to avoid unnecessary prolongation of processing, the automated method was used with the IKA Ultra-Turrax Tube Drive.
Chromatographic Conditions
For MDA analysis, chromatographic separation was performed using a mobile phase acetonitrile (A) and phosphate buffer (20 mM, pH = 6) (B). Isocratic elution was 14% for (A), 86% for (B). The injection volume (loop mode) was 100 μL with an analysis time of 1.5 min. Flow rate was set at 1 mL/min, eluent was monitored with a DAD, and the best chromatogram achieved was at 532 nm, using Supelcosil LC-18 (3 μm) SUPELCO Column 3.3 cm × 4.6 mm and Supelco Column Saver 0.5 μm filter [20].
Linearity and LLOQ
The linearity of the method was verified through the analytical curve using six levels of concentrations, evaluated in triplicates. The analytical curve was described by the linear equation: y = 21749x − 8928.3 while the regression coefficient was r 2 = 0.998, as illustrated in Figure 2, where y is the analyte peak area ratio and x is the concentration (μg/g) as shown in Table 1.
Selectivity
The selectivity of the method in terms of its ability to accurately measure the analyte of interest in the presence of other components that were present in the sample matrix was demonstrated by the analysis of blank matrices. To verify the selectivity, we injected three blank samples prepared according to the sample preparation procedure described in Section 3.3. Sample preparation with the following modification: TBA without MDA, MDA without TBA, and brain sample without TBA in which the reagent was replaced with purified water. After injecting these blank samples in neither one of these cases, interferences did not occur at the retention time of 1.1 min. The peak purity in all cases was above 98.7%. 1.11 ± 0.01 rLLOQ, relative lower limit of quantification; LLOQrec, recovery-corrected LLOQ; rLLOQrec (%), relative recovery-corrected LLOQ.
Accuracy
Quality control (QC) samples at lower limit of quantification (LLOQ) concentration and three different concentration levels (low, medium, and high) were spiked for the determination of precision and accuracy. Five replicates for each level of QC samples were assayed in one run for the intra-day procedure.
The accuracy was evaluated based on the percentage of MDA recovered from the brain matrix. Representative chromatogram of MDA is illustrated in Figure 3. Data for the intra-and inter-day accuracy for MDA at LLOQ and three QC levels are illustrated in Table 2. Analytical range (μg/g brain) 0.20-20 Retention time (min) 1.11 ± 0.01 rLLOQ, relative lower limit of quantification; LLOQrec, recovery-corrected LLOQ; rLLOQrec (%), relative recovery-corrected LLOQ.
Selectivity
The selectivity of the method in terms of its ability to accurately measure the analyte of interest in the presence of other components that were present in the sample matrix was demonstrated by the analysis of blank matrices. To verify the selectivity, we injected three blank samples prepared according to the sample preparation procedure described in Section 3.3. Sample preparation with the following modification: TBA without MDA, MDA without TBA, and brain sample without TBA in which the reagent was replaced with purified water. After injecting these blank samples in neither one of these cases, interferences did not occur at the retention time of 1.1 min. The peak purity in all cases was above 98.7%.
Accuracy
Quality control (QC) samples at lower limit of quantification (LLOQ) concentration and three different concentration levels (low, medium, and high) were spiked for the determination of precision and accuracy. Five replicates for each level of QC samples were assayed in one run for the intra-day procedure.
The accuracy was evaluated based on the percentage of MDA recovered from the brain matrix. Representative chromatogram of MDA is illustrated in Figure 3. Data for the intra-and inter-day accuracy for MDA at LLOQ and three QC levels are illustrated in Table 2.
Precision
Inter-day precision was evaluated on two different days using five replicates for each level of QC samples and at LLOQ concentration. Results of precision (intra-day and interday) were expressed as RSD%. Data for the intra-and inter-day precision for MDA at LLOQ and three QC levels are illustrated in Table 2.
Both within run and between runs precision (RSD%) of the QC samples were ≤15%, and the accuracy ranged between ±15%. These results demonstrated that the method is reproducible for the determination of MDA in a rodent's brain as the results demonstrated that the precision and accuracy are in acceptable limits.
Stability
The stability was analyzed by assaying the frozen (−80 • C) QC samples with the QC samples kept at 25 • C for 24 h and for 48 h, both in triplicates. Analytical recovery varied between 106.69-115.79% after 24 h and between 100.45-114.05% after 48 h at −80 • C. For the samples that have been kept under room temperature, the recovery varied between 98.09-109.06% after 24 h and between 93.26-110.96% after 48 h. The data are listed in Table 3.
Robustness
The robustness of the method was assessed with the performance of variations in three crucial chromatographic parameters (mobile phase ratio, pH value of mobile phase, and flow rate). All assays were performed at a concentration level of 0.5, 7.5, and 15 µg/g for MDA, in five replicates. All the data are listed in Table 4. Changes in retention times as a function of variation of the chromatographic parameter are illustrated in Figure 4. Table 4. Robustness of the method by variation of three chromatographic parameters (mobile phase ratio, pH value of mobile phase, and flow rate).
Preparation of Solutions
For MDA stock solution was prepared by diluting 460 µL of TMP in 100 mL ultra pure water; the concentration of this solution was equivalent to the MDA solution of 2 mg/mL. Standard work solutions at 9 concentration levels were prepared by diluting stock solutions with ultra pure water. Six triplicate samples for linearity (0.2-20 ug/g brain) and three QC samples were prepared at 0.5, 7.5, and 15 ug/g brain. For each QC sample, the analysis was performed in five replicates.
Sample Preparation
Twenty male rats weighing 450-500 g were individually housed in plastic cages, maintained on a 12:12 h light-dark cycle, and fed ad libitum. All animals were decapitated under anesthesia with ketamine and xylazine in a dose mixture of ketamine (100 mg/kg) and xylazine (10 mg/kg), in order to collect the brain samples. The brains were rapidly removed, immediately frozen in liquid nitrogen, and stored at −80 • C until analysis. For MDA analysis, brains were homogenized in IKA Ultra-Turrax Tube Drive and were subsequently divided in equal quantities. Afterward, 1 g of brain sample was spiked with 10 µL of working solution, and then PBS was added in a three times higher volume than the sample volume. Samples were vigorously vortexed for 1 min, and immediately after, samples were centrifuged (10,000× g for 10 min). After centrifugation, acetonitrile (ACN) was added for protein precipitation (1:3, v/v). The samples were centrifuged (10,000× g for 10 min) and the collected supernatant was diluted with pure water (1:1, v/v). A volume of 600 µL TBA (4 mg/mL) and 1000 µL sulfuric acid (2.66 µL/mL) were added to 400 µL sample, followed by heating at 100 • C for 60 min in TS-100C, Thermo-Shaker (BioSan, Riga, Latvia). After heating, the samples were transferred in HPLC vials and analyzed shortly after the derivatization reaction.
Method Validation
In the present study, the validation method was performed in accordance with the regulatory guidelines (FDA 2018). Chosen validation parameters were linearity, selectivity, accuracy, precision, lower limit of quantification (LLOQ), stability, and robustness.
Study Application
In addition, in order to demonstrate the applicability of analytical methods, a study of chronic CNS toxicity was performed on 40 rodents (rats), which were randomly divided into 4 groups comprising 10 rats each (Control, Haloperidol, Metformin, and Haloperidol + Metformin). The treatment consisted of distilled water, haloperidol 2 mg/kg metformin 500 mg/kg, and haloperidol 2 mg/kg + metformin 500 mg/kg in a volume of 1 mL/kg for 40 days, administered through an oral feeding cannula. At the end of the study, all the rodents were decapitated under anesthesia with ketamine and xylazine in a dose mixture of ketamine (100 mg/kg) and xylazine (10 mg/kg), in order to collect the brain samples. The brains were removed, frozen in liquid nitrogen, stored at −80 • C, and afterward, were analyzed with the method presented in this study.
Ethical Considerations
All procedures were conducted in compliance with all experimental procedures in accordance with European Directive 2010/63/EU and was approved by the Ethics Committee for Scientific Research of the George Emil Palade University of Medicine, Pharmacy, Science and Technology of Târgu Mures , (approval no. 533/2019) and by the National Sanitary Veterinary and Food Safety Authority (approval no. 42/2020).
Conclusions
Analytical curves for MDA in the brain were linear for the concentration range of 0.2-20 µg/g, with a regression coefficient of r 2 = 0.998. This validation method demonstrates good accuracy and precision in accordance with regulatory guidelines [23]. According to these, the accuracy must comprise the ±15% interval and the precision must be ≤15%. Regarding LLOQ, accuracy must comprise the interval of ±20% and the precision must be ≤20%.
The method is suitable for MDA quantification from a rodent's brain and in studies that aim for the measurement and estimation of oxidative stress in different treatments and induced pathologies. Unlike the classic spectrophotometric methods [24,25], this method is superior in terms of sensitivity and specificity; the interferences of other compounds capable of absorption at 532 nm in VIS is avoided. Moreover, this method is simple and cost-effective; it does not imply the multiple steps of preparation for analytical extraction that may lead to interactions. Instead, a derivatization reaction is proposed. | 4,193 | 2021-08-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Parasite Load Induces Progressive Spleen Architecture Breakage and Impairs Cytokine mRNA Expression in Leishmania infantum-Naturally Infected Dogs
Canine Visceral Leishmaniasis (CVL) shares many aspects with the human disease and dogs are considered the main urban reservoir of L. infantum in zoonotic VL. Infected dogs develop progressive disease with a large clinical spectrum. A complex balance between the parasite and the genetic/immunological background of the host are decisive for infection evolution and clinical outcome. This study comprised 92 Leishmania infected mongrel dogs of various ages from Mato Grosso, Brazil. Spleen samples were collected for determining parasite load, humoral response, cytokine mRNA expression and histopathology alterations. By real-time PCR for the ssrRNA Leishmania gene, two groups were defined; a low (lowP, n = 46) and a high parasite load groups (highP, n = 42). When comparing these groups, results show variable individual humoral immune response with higher specific IgG production in infected animals but with a notable difference in CVL rapid test optical densities (DPP) between highP and lowP groups. Splenic architecture disruption was characterized by disorganization of white pulp, more evident in animals with high parasitism. All cytokine transcripts in spleen were less expressed in highP than lowP groups with a large heterogeneous variation in response. Individual correlation analysis between cytokine expression and parasite load revealed a negative correlation for both pro-inflammatory cytokines: IFNγ, IL-12, IL-6; and anti-inflammatory cytokines: IL-10 and TGFβ. TNF showed the best negative correlation (r2 = 0.231; p<0.001). Herein we describe impairment on mRNA cytokine expression in leishmania infected dogs with high parasite load associated with a structural modification in the splenic lymphoid micro-architecture. We also discuss the possible mechanism responsible for the uncontrolled parasite growth and clinical outcome.
Introduction
Canine Visceral Leishmaniasis (CVL) shares many aspects with the human disease and dogs are considered the main urban reservoir of L. infantum in zoonotic VL. Canine infection may precede the emergence of human cases [1] and the presence of infected dogs is directly associated with the risk of human infection [2]. The control programs of VL in endemic areas of Latin America include the detection and treatment of infected and sick humans, insecticide spraying in residential outhouses and selective removal of seropositive dogs. Screening and mass culling of seropositive dogs has not been proved to be uniformly effective in control programs [3] and many studies have questioned its effectiveness [4][5][6][7]. Therefore, the knowledge of the immune mechanisms involved in animal pathology and protection plays a pivotal role in the endemic control [8].
Infected dogs develop progressive disease, characterized by lymphadenopathy, hepatosplenomegaly, onychogryphosis, body weight loss, dermatitis, anemia and ultimately death. The large spectrum of clinical presentations ranges from asymptomatic to symptomatic infection [9]. A complex balance between the parasite and the genetic/immunological background of the host are decisive for the progression towards disease. However, no conclusive data are available on the immunological mechanisms responsible for resistance or disease progression in CVL. The infection is characterized by a marked humoral response [10,11] and the parasite load follows the clinical outcome [12]. Several studies show a mixed cellular response related to infection [2,[13][14][15]. Such a mixed response is also observed under different experimental conditions [16]. The immune response to viscerotropic Leishmania parasites is organ-specific [17][18][19] and the spleen is an important target in VL [20]. Overall, in spleen the production of Th1 cytokines (such as IFN-γ, IL-12 and TNF) of both asymptomatic and symptomatic dogs does not show any differences [13,14,20], however they are increased during infection [14]. The predominance of Th2/regulatory cytokines (such as IL-4, IL-10 and TGF-β1) determines the parasite load and persistence without association with clinical groups [14,15]. Nevertheless, Correa et al. [13] found that these cytokines are determinant for disease progression. This organ is a site of parasite persistence where the parasites grow slowly generating important changes both in architecture and organ function. Also, a relationship between a high percentage of T cell apoptosis and the structural disorganization of white pulp may co-contribute to the inefficient cellular-mediated-immune response in CVL [21].
Herein we describe impairment in cytokine mRNA expression in naturally Leishmania infantum infected dogs with high parasite load associated with a structural modification of the lymphoid micro-architecture in spleen. We also discuss the possible mechanism responsible for the uncontrolled parasite growth and clinical outcome.
Ethics Statement
The infected animals included in this study were destined to euthanasia as recommended by the politics of Brazilian Ministry of Health at the Center for Zoonosis Control (CZC). The study has been conducted in accordance to AVMA Guidelines for the Euthanasia of Animals [22]. For euthanasia, dogs were anesthetized with an intravenous injection of 1.0% (1.0 ml/kg) thiopental (Thiopentax, Cristália). Once the absence of corneal reflex induced by deep anesthesia was observed, 10.0 mL of 19.1% Potassium Chloride (Isofarma) were administered by intravenous injection. The Animal Care and Use Committee of Fundação Oswaldo Cruz does not require ethical clearance in these cases, since the animal were not submitted to any experimental procedure. The samples were collected for diagnostics purposes. Informed consent was obtained from all dog's owners.
Study Animals and Clinical Evaluation
The study comprised 92 IFAT-positive mongrel dogs of various ages, with anti-Leishmania IgG antibody titers higher than 1:40. The infected animals were destined to euthanasia following owner consent at the Center for Zoonosis Control (CZC) of four endemic municipalities (Rondonópolis, Barra do Garças, Várzea Grande and Cuiabá) in Mato Grosso, Brazil. Infection was confirmed in all IFAT-positive dogs by one additional serological test, being either ELISA, rapid test Dual Path Platform (DPP CVL, BioManguinhos, FIOCRUZ) and/or parasite detection by culture and/or conventional PCR (kDNA). The infection etiology was confirmed by MLEE in all isolated strains at the Leishmania Collection of the Oswaldo Cruz Institute (CLIOC, www.clioc.fiocruz.br). Isolated strains were deposited as open access. Serum samples from noninfected dogs from a nonendemic area, Rio de Janeiro, RJ, Brazil (control group, n = 15) were also included in serologic analyses. Clinical evaluation was performed by two veterinarians according to the clinical scale adapted from Quinnel and co-workers [23]. In summary, six common signs (dermatitis, onycogryphosis, conjunctivitis, emaciation, alopecia and lymphadenopathy) were scored on a semiquantitative scale from 0 (absent) to 3 (severe). The sum of values was used to achieve the final clinical classification as low (0-2), medium (3)(4)(5)(6) or high (7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18) score.
Sample collection and storage
Blood samples were collected from the cephalic vein and serum was stored at -20°C. Immediately after euthanasia, fragments of spleen were harvested and stored in net buffer solution (10 mM NaCl, 10 mM EDTA, 10 mM Tris HCl) for DNA extraction and in an RNAlater Tissue Collection solution (Ambion, Applied Biosystems, Life Technologies Corporation) for RNA extraction. The biopsies were frozen and stored at -70°C prior to processing. Tissue fragments were fixed in buffered formalin for histology. Needle aspirate was seeded in NNN-Schneider Drosophila (Sigma-Aldrich) for parasite isolation.
Serology
The enzyme immunoassay with EIE-Leishmaniose Visceral Canina kit (BioManguinhos, FIOCRUZ) was performed according to the manufacturer with minor modifications. Briefly, sensitized microplates were incubated with diluted dog sera (1:10) at room temperature for 2 hours. Plates were incubated at room temperature for 1 hour with 100 μl of IgG (1:3000, Bethyl Laboratories) and IgM (1:1000, Bethyl Laboratories). The lower limit of positivity (cutoff) was determined by using the mean plus 3 standard deviations of the controls. Sera with OD values equal to or greater than cutoff value were considered positive and OD values below cutoff value considered negative. DPP-CVL was performed as instructed and read in a rapid test reader.
Determination of Parasite burden by Quantitative Polimerase Chain Reaction (qPCR)
Total DNA was extracted from approximately 30 mg of spleen samples. DNA extraction was carried out by the Wizard Genomic DNA Purification System (Promega, Madison, WI, USA) which included a prior digestion phase with 17.5 μl of proteinase K (20 mg/mL) for 12 h at 55°C. DNA was dissolved in 100 μl of tris EDTA buffer (TE buffer). Parasite burdens were estimated by qPCR in spleen samples amplifying small subunit ribosomal RNA (ssrRNA, multy-copy gene) using primers described by Prina et al. [24], while HPRT primers were used to normalize concentrations of canine DNA in each sample (S1 Table). The qPCR reactions were run on the Step One equipment using Power Sybr Green Master Mix (Applied Biosystems, Molecular Probes, Inc.). Purified total DNA (100 ng) were added to a final PCR reaction volume of 20 μl containing Power Sybr Green 1X (Applied Biosystems, Molecular Probes, Inc.), 300 nM of each primer for HPRT or 500 nM for ssrRNA PCR assays. qPCR was performed with an activation step at 95°C for 10 minutes, followed by 40 cycles of denaturation, annealing/extension and reading (95°C for 15 seconds, 60°C for 1 minute and 68°C for 30 seconds) in a Step One termocycler (Applied Biosystems). A melt curve stage was performed for each specific amplification analysis (95°C for 15 seconds, 60°C for 1 minute and 95°C for 15 seconds). All reactions were performed in duplicate for each target and both targets were run on the same plate for the same sample.
Standard curves for HPRT and ssrRNA genes were prepared using serial 10-fold dilutions from 10 -2 to 10 7 of total purified DNA extracted wither from L. infantum (1x10 6 ) or peripheral blood mononuclear cells (PBMCs). A threshold of detection was set for each target gene according to the background level from cycles 6-15 of all valid reactions. Mean threshold cycle (Ct) values were determined for technical duplicates. Ct values were plotted against input logdilutions (base 10) and standard curves for each target determined by a linear regression, with the determined coefficients of determination (R2) used as quality control. Subsequently, the fitted standard curves were used to estimate overall number of parasites in the sample, while the host HPRT gene was used for PBMC number normalization. Thus, it was possible to obtain the number of parasites per 10 6 cells. Amplification efficiency of each target was determined according to the equation: E = 10^(-1/slope). Data processing and presentation were performed using routines written in the R language, for the R statistical package version 2.922 [25].
Parasite burden group determination
The number of groups was defined by the parasite burden, using the estimated log number of parasites per 10 6 cells (base = 10) for each sample. The parasite number cut-off that delimited both groups was optimized by the fitting of mixtures of normal distributions by the standard expectation-maximization (EM) algorithm combined with a non-parametric likelihood ratio statistics with 1,000 permutations for testing the null hypothesis of a k-component fit versus the alternative hypothesis of a (k+1)-component fit to various mixture models, up to a specified number of maximum components, k (k = 5) [26]. A p-value was calculated for each test and once the p-value was above the significance level of 0.05, the test was terminated. These analyses were performed using the mixtools library for the R statistical package version 2.922 [25].
Cytokine gene expression
Total RNA from 50-100 mg of tissue samples was isolated using Trizol Reagent (Invitrogen, Grand Island, NY), according to the manufacturer's protocol. DNase treatment to digest genomic DNA that could lead to false positive gene expression results was accomplished using DNA-free DNase (Ambion, Grand Island, NY). RNA integrity was confirmed on a 3-(N-morpholino) propanesulfonic acid/formamide 1.2% agarose gel stained with SYBR Nucleic Acid Gel Stain (Molecular Probes, Invitrogen Corp., Grand Island, New York). RNA quantity was assessed using a Nanodrop spectrophotometer (Thermo Scientific, Waltham, MA). For cDNA synthesis, 1.0 μg of RNA was reverse transcribed with oligo (d)T primers using the ImProm-II Reverse Transcription System (Promega, Madison, WI), according to the manufacturer's protocol including ribonuclease inhibitor (Recombinant RNasin, Promega, Madison, WI). Reverse transcription reactions were performed in duplicate at a final volume of 20 μL and diluted (1:4) by adding 80 μL of nuclease free water. Reactions without the reverse transcriptase enzyme (No-RT reactions) were performed to control DNA contamination. The qPCR reactions were run at a final volume of 20 μL containing 300 nM of primers, 1X SYBR GREEN master mix (Applied Biosystems) and 4 μL of cDNA template. qPCR was performed with an activation step at 95°C for 10 minutes, followed by 40 cycles of denaturation and annealing/extension (95°C for 10 seconds and 58°C for 1 minute). A melt curve stage was performed for each specific amplification analysis (95°C for 15 seconds, 60°C for 1 minute and 95°C for 15 seconds). All reactions were performed in triplicate in a Step One Plus termocycler (Applied Biosystems).
Gene expression analysis of qPCR data
The fluorescence accumulation data from triplicate qPCR reactions for each sample were used to fit four-parameter sigmoid curves to represent each amplification curve using the qPCR library [27] for the R statistical package version 2.922 [25]. A detailed description of quantitation using Cp (crossing point) can be obtained elsewhere [28]. Genes used in the normalization between the different amplified samples were selected by the geNorm method [29] among a set housekeeping genes (S1 Table). The comparison of means of normalized gene expression values among groups were performed by: (1) a nonparametric T-test with 1,000 permutations for two groups; (2) a nonparametric one-way ANOVA with 1,000 unrestricted permutations, followed by pair-wise comparisons with Bonferroni adjustment, for more than 2 groups. Results were represented in graphs displaying the expression level mean ± standard error of mean for each group. Two-tailed levels of significance less than or equal to 0.01, 0.05 and 0.1 were considered "highly significant", "significant" and "suggestive", respectively. Relationships between differentially expressed gene and sample profiles was investigated by Bayesian infinite mixtures model cluster analysis [30] and represented by 2D heatmaps and dendograms.
Histopathology
Spleen fragments were fixed in 10% buffered formalin, embedded in paraffin and sliced in 5μm thick sections mounted on microscope slides. The sections were stained with haematoxylin and eosin and examined by light microscopy (Nikon Eclipse E400-Tokyo, Japan). Structural changes of spleen lymphoid tissue, cell population in the red pulp and parasite burden were analyzed as described by Santana et al [31]. Briefly, the parameters analyzed included perisplenitis (absent, low, average or high), presence of granuloma and degree of white pulp structural organization (1-well organized: with distinct periarteriolar lymphocyte sheath, germinal center, mantle zone and marginal zone; 2-slightly disorganized: with either hyperplastic or hipoplastic changes leading to a loss in definition of any of the regions of the white pulp; 3-moderately disorganized: when the white pulp was evident, but its regions were poorly individualized or indistinct; and, 4-extensively disorganized: when the follicular structure was barely distinct from the red pulp and T-cell areas). The frequency of lymphoblasts, macrophages, neutrophils and plasma cells in the red pulp were scored as low, average or high. The amount of amastigotes was estimated by counting 40 to 100 fields (x1000 magnification) per section equally distributed between the sub-capsular compartment and the internal red pulp. The results were expressed as ratio of fields with amastigotes/total fields evaluated.
Clinical characteristics and spleen parasite load
Clinical evaluation was performed in 88 dogs according to the severity of signs where 33 animals were low, 29 medium and 26 high scored. All animals included in this study showed at least one positive parasitological test, including parasite culture and/or kDNA PCR. Spleen parasite load was determined by real-time PCR for the ssrRNA Leishmania gene and the group of animals with the least clinical score presented less parasite load (p = 0.01), but with a large variation (S1 Fig). Considering that clinical signs could be a result of uncontrolled factors such as coinfection, nutritional status and other disorders, a statistical analysis considering parasite load was performed resulting in the definition of two groups. Low parasite load group (lowP, n = 46) ranging from 6.3 x 10 0 to 2.82 x 10 4 and high parasite load group (highP, n = 42) ranging from 4.25 x 10 4 to 8.92 x 10 8 Leishmania genomes (Fig 1). The clinical signs observed were
Humoral Immune Response
The production of anti-Leishmania IgG and IgM antibodies were evaluated in 15 uninfected dogs from a non-endemic area (control) and 86 dogs naturally infected by L. infantum, of which 59 belonged to the lowP and 27 to the highP group. The serum samples of two animals were lost during transport. The mean titers of anti-Leishmania IgG antibodies in the lowP group (OD 0.966 ± SD 0.403) and highP group (OD 1.121 ± SD 0.257) were similar, as well as IgM titers OD 0.630 ± SD 0.407 and OD 0.713 ± SD 0.507, for lowP and highP, respectively. In contrast, no positive serum reactivity occurred against the antigen in 15 control dogs for IgG (OD 0.251 ± SD 0.098) or IgM (OD 0.216 ± SD 0.143). Our results show variable individual humoral immune response with higher specific IgG production in infected animals compared to the control group, p < 0.0001 (Fig 3A). No statically significant difference was found between lowP and highP groups but there was an increased frequency of positivity in highP. IgM levels also increased with infection and it seems to be maintained along the infection since no differences were observed among infected groups ( Fig 3B). Notably, the difference in the optical densities in DPP (rapid test) between highP and lowP groups was significant, p < 0.0001 (Fig 3C).
Histopathology of splenic tissue
The tissue organization was accessed in 67% (59/88) of the animals and the potential role of the parasite was analyzed. Overall, intense inflammation (perisplenitis), presence of granulomas in different stages of maturation, lymphoblasts, plasma cells, neutrophils and macrophages were observed (Table 1) (Table 1 and Fig 6). Noteworthy, the direct observation of the number of amastigotes in the subcapsular compartment and in the red pulp corroborated the parasite load obtained by qPCR (Table 1 and Fig 7).
Discussion
Is widely known that L. infantum-infected dogs present a wide interindividual range of clinical signs yet our data indicate that no conclusive pattern of splenic immune response could be associated with clinical presentation. Herein we show that the intensity of tissue parasite seems to be determinant for immune response modulation. Applying a semi-quantitative arbitrary scale we first divided the animals by clinical score: low, medium or high. This division was not able to demonstrate statically significant differences neither for antibody response nor for parasite load. Except by TNF in low score group, no other cytokine evaluated revealed significant differences among clinical groups (S4 Fig). Other studies have also associated Th1 cytokine expression, including TNF, with asymptomatic infection [32,33]. Although the magnitude of cytokine expression varied markedly, parasite load revealed a negative correlation for all assayed cytokines (Fig 4), unlike reported by other authors [34]. In an effort to evaluate the role of the parasite in the spleen, the animals were split in two groups (lowP and highP) by statistical analysis according to splenic Leishmania DNA load in spleen. Serology by ELISA for Leishmania specific IgG or IgM demonstrated no significant difference between highP and lowP groups and an extensive range of reactivity was observed. However, we found a higher frequency of reactivity in the highP than lowP group for IgG and the opposite for IgM. It might indicate that lowP might represent more recently infected animals. This result was corroborated by response detected through DPP test. Although it can be useful to confirm clinically suspected cases, the DPP CVL rapid test is not sensitive enough for detecting asymptomatic canine carriers of L. infantum [35]. We demonstrate that the reflectance values in the DPP test can be related to parasite load. In light of leishmaniasis control, further studies are needed to confirm if values in the DPP test could be related with the potential for dogs to transmit parasites.
The splenic effector response was assayed by RT-qPCR used for mRNA levels detection of both pro-(IFNγ, IL-12, TNF and IL-6) and anti-inflammatory/regulatory (TGFβ and IL-10) cytokines. The correlation between parasite load and mRNA cytokine expression indicates that despite of the remarkable variation in terms of expression, all of them were reduced in heavily infected animals and this could suggest, at some level, immunosupression. Notably, even in experimental infection under controlled conditions, there is a high variable response, suggesting an individual modulation of the immune response by the host [16]. We also observed an association between increasing parasite load and micro-architecture rupture of the spleen. The alterations observed varied from a well-organized white pulp to an extensive structural disorganization. It consists of hyper or hipoplastic changes in the white pulp and changes in follicular structure. These various levels of splenic organization in the dogs were correlated with increased parasite load. Such a breakdown in tissue architecture related to VL has been previously reported in human [36][37][38], murine [39,40] and canine infections [31,[41][42][43]. The development of splenic pathology is associated with disease progression in dogs [31]. In mice model of L. donovani infection, the splenic pathology was associated with high levels of TNF irrespective of parasite burden [44]. Of all the cytokines evaluated, TNF and IL-12 were more markedly reduced (2.3x and 2.5x respectively) with parasite load increase. Higher TNF mRNA levels in LowP group seems to control the parasite growth, but on the other hand can generate tissue damage. The role of TNF and IL-12 in VL has been described in the literature, both in experimental infection of mice [45][46][47][48] as in canine infection [15,20,49,50]. Both are pro-inflammatory cytokines involved in systemic inflammation that stimulate the acute phase reaction. Interleukin 12 is a multifunctional cytokine acting as a key regulator of cell-mediated immune responses through the differentiation of naïve CD4+ T cells into type 1 helper T cells (Th1) producing interferon-gamma (IFNγ) [51]. These cytokines play a pivotal role in the pathogenesis of many chronic autoimmune diseases [52][53][54] and are also crucial for the control of intracellular microorganisms [55,56]. The activation cellular immune responses is associated with IL-12/IFNγ axis that leads to intracellular killing of parasites. When IFNγ-treated cells are infected with pathogens, they are stimulated to make TNF [57]. Notably, VL is an opportunistic infection in patients under biological therapy with anti-TNF drugs [58]. TNF cellular responses can eradicate infectious agents, but can also lead to local tissue injury at sites of infection and harmful systemic effects [56].
Cytokine transcripts in spleen revealed a high heterogeneous response among groups and in the highP group pro-inflammatory cytokines (IFNγ, TNF, IL-12 and IL-6) showed a larger reduction in expression than anti-inflammatory/immunoregulatory (IL-10 and TGFβ) cytokines. These data suggest regulatory mechanisms to prevent tissue damage or increasing fibrosis leading to leakage of parasite control CVL. While clinical resistance has been shown to be associated with the predominant expression of Th1 cytokines, such as IL-2, IFNγ and TNF, susceptibility and parasite persistence is characterized by predominance of Th2 and immuneregulatory cytokines, such as IL-4 and IL-10 [59]. Nevertheless, our results, as well as several other studies, do not support such associations, showing a mixed response to infection [2,[13][14][15]. There is no consensus about the immunological role for a functional T cell phenotype concerning cytokine production in spleen. On the other hand, CD8 T cell exhaustion and the PD-1 (programed cell deth-1) molecule function has been recently described in viral and parasitic infections [60,61]. Also, it was demonstrated T cell exhaustion in splenocytes of human patients with VL [62]. Recently, a PD-1-mediated pan-T cell exhaustion has been shown in peripheral blood of infected dogs with subsequent reduction in cytokine expression [63]. In this context, general reduction of cytokine expression could be also related to exhaustion induced by the excess of circulating antigen in animals with high parasite load.
In conclusion, this study demonstrated the rupture of splenic architecture and failure in cytokine mRNA expression in animals with high splenic parasitism during CVL. Inflammatory cytokine environment [15,16] and possibly proteolytic enzymes [64] produced early in infection cause the progressive destruction of the architecture with loss of marginal zone macrophages and stromal cells affecting cell migration, antigen presentation and lymphocyte activation, as observed in murine experimental infection [44,65]. Consequently, there is a widespread decline in pro-inflammatory cytokines, as we observed, and chemokine expression, as previously reported [34,42], with consequent loss of parasite control or even due to an excess of parasite antigens leading to a strong local and tissue specific immunosuppression. Clinical score was accessed and animals were classified as low (0-2), medium (3)(4)(5)(6) or high score (7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18). Red corresponds to higher gene expression levels. | 5,670 | 2015-04-13T00:00:00.000 | [
"Biology",
"Medicine"
] |
Price Analysis and Forecasting for Bitcoin Using Auto Regressive Integrated Moving Average Model
This paper investigated Bitcoin daily closing price using time series approach to predict future values for financial managers and investors. Daily data were sourced from CoinDesk, with Bitcoin Price Index (BPI) for 5 years (January 1, 2016 to May 31, 2021) extracted. Data analysis and modelling of price trend using Autoregressive Integrated Moving Average (ARIMA) model was carried out, and a suitable model for forecasting was proposed. Results showed that ARIMA(6,1,12) model was the most suitable based on a combination of number of significant coefficients and values of volatility, Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). A two-month test window was used for forecasting and prediction. Results showed a decline in prediction accuracy as number of days of the test period increased; from 99.94% for the first 7 days, to 99.59 % for 14 days and 95.84% for 30 days. For the two-month test period, percentage accuracy was 84.75%. The study confirms that the ARIMA model is a veritable planning tool for financial managers, investors and other stakeholders; especially for short-term forecasting. It is however imperative that the influence of external factors, such as investors’/influencers’ comments and government intervention, that may affect forecasting be taken into consideration.
Introduction
Rapid advancement in digital technology and increasing capacity of computer systems (in terms of speed and data storage) has created an opportunity for Digital Signal Processing (DSP) techniques in engineering to be applied to the finance industry. One of the applications of DSP in Finance is in prediction of future market value of a business, through the use of historical financial data whose quantity is usually massive and requires absolute objectivity in its calculations (Nepal, 2015). Thus, financial managers can make decisions based on statistical analysis of financial time series and the modeling of its behavior; the aim being to perform predictions and systematically optimize investment strategies which has become fundamental to successful investments (Feng and Palomar, 2016).
Bitcoin (BTC) is the world's largest cryptocurrency and its emergence as a veritable digital currency which has captured global attention, in just over a decade, has been unexpected. It is a form of peer-to-peer electronic cash system, without the need to reveal one's identity for a transaction to happen and without a middle man (Nakamoto, 2008). Despite its modest beginning in 2009 when it was launched at $1.00 value, it has grown into tens of thousands of dollars in value. It is measured by market capitalization and amount of data stored on its blockchain (Shen et al, 2018) and it offers lower transaction fees than traditional online payment mechanisms.
As with all businesses and trades, the COVID-19 pandemic has had an impact on trading of Bitcoin and its price. On March 11, 2020, the World Health Organization (WHO) declared COVID-19, a disease caused by a strain of Coronavirus, a global pandemic (Ghebreyesus, 2020). Data from CoinDesk (Coindesk, 2021), an Organization involved in the monitoring and publishing of Bitcoin data was used to observe Bitcoin price behaviour, before and during the pandemic. Bitcoin price index from January 2016 to May 2021 is shown in Figure 1.
It was observed that the increase in price of Bitcoin was gradual from inception to the beginning of 2017 when its price was about $1,000.00. From then, it witnessed steady increases until December 2017 when it increased sharply and peaked at $19,116.979 unit price on December 17, 2017. Thereafter, the price witnessed a decline to minima of $3,952.448 on November 30, 2018 but reversed the downward movement and increased steadily to $5,800.209 on March 13, 2020 (two days after declaration of the pandemic). Despite the pandemic, it was observed that the price of Bitcoin experienced a steep incline and peaked at $57,128.643 on February 22, 2021. This is an increase of almost 1000% within a year. On the one-year anniversary of COVID-19, being March 11, 2021, the closing price of Bitcoin was $56,915.170. The increased interest in Bitcoin and the subsequent price surge can be attributed to investors using it as hedge, being protection against financial loss, (Demir et al, 2020) due to uncertainties raised by the pandemic; and the subsequent national restrictions and lockdowns which led to the suppression of major world economies and global recession. Other factors (CNBC, 2021;Tepper, 2021) that coincidentally contributed to the rise of Bitcoin price during the period include: i. Institutional Adoption of Cryptocurrencies Increasing adoption of cryptocurrencies by some traditional financial institutions (e.g., BNY Mellon, Fidelity, Mastercard) which was seen as an acknowledgement of the future viability of digital assets.
ii. Halving of Bitcoin 'Halving' (Masters, 2019) of Bitcoin in May 2020 which is an event that happens every four years when the reward that bitcoin "miners" receive for mining gets cut in half as a built-in mechanism to slow the creation of new bitcoins and limit bitcoin's supply. It is an event that reminds investors of bitcoin's scarcity thus leading to increased demand.
iii. Adjustment of View Revision of criticism and softening of views of major Wall Street investors/players about cryptocurrencies. iv. Acceptance by Major Payment Platforms Acceptance of cryptocurrencies by major payment platforms (PayPal and Square) with its announcement that it will soon allow buying, holding, and trading of bitcoin and other cryptocurrencies, on its platform which has contributed to the surge.
v. Pandemic-related Stimulus Programs
Stimulus programs by governments around the world have created fear of inflation with investors looking for alternative assets to invest ClosingPriceUSD in, thereby leading to high demand for Bitcoin. It is believed that government monetary aid strengthens the appeal of Bitcoin.
However, by middle of May 2021, there was a dramatic drop in Bitcoin price and this continues till date. The rapid growth in Bitcoin price and its volatility continues to pique the interest of researchers (Demir et al, 2020;Amjad and Shah, 2017;Roche and McNally, 2018;Jang and Lee, 2018;Baur and Dimpfl, 2020;Fauzi et al, 2020).
Various methods have been developed and applied in time series analysis. These include ARIMA model (Box and Jenkins, 1976;Brockwell and Davis, 2002) which uses the current value of the stationary time series based on its values at previous times and errors in values at previous time periods; Artificial Neural Network (ANN) model, which has the ability to learn patterns from time series data and uses these to model the problem and deduce solutions (Zamani et al, 2012;Selvamuthu et al, 2019) and hybrid models which combine the strengths of the ARIMA and ANN models (Merh et al, 2010;Wang, et al, 2012). While models based on neural networks have been found to present higher accuracy in some cases, the ARIMA model is selected for its robustness, simplicity, ease of application and high accuracy for short term forecasting.
The Auto Regressive Integrated Moving Average (ARIMA) model, also known as the Box-Jenkins methodology (Box and Jenkins, 1976;Brockwell and Davis, 2002) in financial analysis, was used in analyzing Bitcoin time series data and forecasting. The ARIMA model is a combination of the autoregressive (AR) model and the moving average (MA) model with the stationarity (differencing or integration) of the time series taken into account. Stationarity (Feng and Palomar, 2016) is an important characteristic for time series analysis which describes the time-invariant behavior of a time series and is much easier to model, estimate, and analyze. Stationarity of a time series is a major assumption in ARIMA modeling and since market prices by nature are non-stationary, stationarity must be ensured by differencing the time series (Brockwell and Davis, 2002) before forecasting can be done. The ARIMA model is simple but nonetheless powerful and it aims to describe autocorrelations in time series data (Brockwell and Davis, 2002;Ariyo et al, 2014). Essentially, the future value of a variable is based on a linear combination of past values of observation (lags) and past errors. Lags are very useful in time series analysis because they indicate the tendency for values to be correlated with previous copies of itself. The ARIMA model can be represented as ARIMA( , , ) model in Equation (1) or ARIMA( , , ) model in Equation (2) respectively.
In this paper, Bitcoin daily closing price time series spanning January 2016 to May 2021 (as represented graphically in Figure 1) was analyzed using MATLAB (R2018a); and forecasts made. This is of particular importance due to the popularity of Bitcoin and volatility of its price. Forecast values can be useful to investors in developing profitable trading strategies. For government regulators and policy makers, it helps to formulate appropriate policies. Overall, it assists relevant stakeholders to take informed decisions.
Experimental
In this section, the methodology used for this work is described. This includes steps such as data collection and data analysis.
Data Collection
Bitcoin daily closing price time series data from Jan 2016 to May 2021 (as represented in Figure 1) was obtained from (Coindesk, 2021). The Bitcoin data comprises four variables: Closing Price, 24h Open, 24h High and 24h Low; all in USD. The daily closing price (USD) was chosen to represent the price of the index to be predicted since it reflects all the activities of the index on a trading day.
Data Analysis
To determine a suitable model, the following steps as described in subsequent paragraphs, were carried out on Bitcoin price time series: 1. Series inspection for determination of stationarity 2. Differencing to ensure stationarity 50|Annals of Science and Technology 2021 Vol. 6(2) 47-56 This journal is © The Nigerian Young Academy 2021 3. Modeling through the 4-step process of i) Model Identification ii) Parameter Estimation iii) Diagnostics iv) Forecasting Inspection of the time series must confirm if it is stationary or otherwise. This is done by visual inspection and plots of the partial autocorrelation function (PACF) and the autocorrelation function (ACF) of the series which is a measure of the relationship between a variable's current value and its past values. Auto correlation summarizes the relationship between the values of the same series at previous times and its plot by lag is called the auto correlation function ACF. Partial autocorrelations summarizes the relationship between an observation at prior time steps with the relationships of intervening observations removed and its plot by lag is called partial autocorrelation function, PACF (Brockwell and Davis, 2002).
Stationarity is further confirmed by the Augmented Dickey-Fuller test which is based on a null hypothesis that there is a unit root in the data (Brockwell and Davis, 2002). In general, a probability value (p-value) of less than 5% indicates rejection of the null hypothesis and proves stationarity while a p-value of greater than 5% indicates acceptance of the hypothesis and hence non-stationarity. Non-stationary data as a rule can be unpredictable and therefore cannot be modelled or forecasted. It must be converted through the process of differencing which can be said to be the number of times that raw observations are differenced. If a time series is made stationary, any model that is inferred from it can be taken to be stationary, therefore providing a valid basis for forecasting (Al-Shiab, 2006).
Model identification involves using the ACF and the PACF (as explained above) of the differenced time series to plot correlograms from which coefficients ( , ) which give the best fitting are determined. The number of times the time series was differenced to ensure stationarity, (Brockwell and Davis, 2002) is also taken into consideration. Hence the coefficients ( , , ) are determined.
Parameter estimation involves determining the number of significant coefficients in the model that is being considered, volatility (variance) values, Akaike Information Criterion (AIC) value, Bayesian Information Criterion (BIC) value and the Ljung-Box test value. The AIC is an estimator of prediction error and evaluates how well a model fits the data it was generated from and the relative amount of information lost; the less the loss, the higher the quality of the model. The Bayesian Information Criterion is another criterion for model selection among a finite set of models. The model with the lowest value of AIC, BIC and volatility is considered the most suitable (Anderson, 2008). The Ljung-Box test is also a unit root test.
Model diagnostics involves running residual ACF to ensure that all time series data is captured by the selected model. This is indicated by all coefficients being within the significance bounds. If this is not the case, parameters must be re-estimated. However, in re-estimating, parsimony must be taken into consideration. This is because parsimonious models give better forecasts than over-parameterized models. Thus, in choosing the most suitable ARIMA model, it is important to keep parsimony in view.
When the model has been confirmed as suitable with the best coefficients, forecasting of future prices of Bitcoin from April 2021 to May 2021 was done using MATLAB Econometrics Tool and was validated by plotting forecasted values against actual series for comparison. Prediction accuracy (MAPE) was also plotted.
Results
By visual inspection (Figure 1), the Bitcoin closing price time series is not stationary. Non-stationarity is further confirmed by the sharp drop-off of the Partial Autocorrelation Function (PACF) plot at lag 1 ( Figure 2a) and the very slow decline of the Autocorrelation Function (ACF) plot (Figure 2b).
The Augmented Dickey-Fuller (ADF) test (Table 1) is applied to the Bitcoin daily closing price time series and it can be observed that the ADF did not reject the null hypothesis and has a p-value of 0.7756 which is greater than the significance level value of 0.05; thus indicating non-stationarity. Therefore, it is necessary to difference the series to obtain stationarity (Brockwell and Davis, 2012). (Table 2) accepting the null hypothesis with a p-value of 1.0000e-03 which is less than the significance level value of 0.05. All these indicate stationarity. Therefore, series became stationary with first difference. With stationarity confirmed, the process for ARIMA modelling of the Bitcoin daily closing price time series was carried out. The following likely models were identified and investigated: ARIMA(2,1,2), ARIMA(2,1,3), ARIMA(2,1,6), ARIMA(3,1,2), ARIMA(3,1,3), ARIMA(3,1,6), ARIMA(6,1,2), ARIMA(6,1,3) and ARIMA(6,1,6). Each model had its parameter values and goodness of fit determined using the combination of number of significant coefficients, volatility, Akaike Information Criteria (AIC) and Bayesian Information Criterion (BIC) values. See Table 3. As a starting point, ARIMA(6,1,6) was conditionally selected based on highest number of significant coefficients and lowest values of volatility and AIC; but must be confirmed by running residual diagnostics to ensure that all its coefficients are within the significance interval.
Running Residual ACF (Figure 5a) on ARIMA(6,1,6) showed that there were outliers at lags 10, 12 and 14 which indicates that not all information of the time series has been captured in the model and there was therefore a need for model re-estimation. Re-estimation involved taking the outliers mentioned above into consideration and re-running residual diagnostics. ARIMA(6,1,12) model was found to present a better performance and its residual diagnostics showed that it has all coefficients located within the confidence interval (Figure 5b). In addition, it has lowest values of volatility and AIC (Table 3)
. Thus, of
This journal is © The Nigerian Young Academy 2021 Annals of Science and Technology 2021 Vol. 6(2) 47-56 |51 all the models considered, ARIMA (6,1,12) is the most appropriate model for this time series.
The Ljung-Box test for residual correlation, squared residual ACF ( Figure 6) shows all coefficients outside the 95% confidence interval signifying that there is no correlation between the coefficients and thus ARIMA(6,1,12) is a good model for forecasting. Table 4 shows actual versus forecast values for Bitcoin daily closing price in April 2021 from which prediction accuracy (MAPE) was derived. It can be observed that ARIMA(6,1,12) gives very close forecast values for the first seven days (April 1-7, 2021), with a prediction accuracy of 99.94%. Prediction accuracy however decreases for longer forecast periods; dropping to 99.59% for 14 days forecast (April 1-14) and 95.84% for 30 days (April 1-30) forecast period. These are all considered good results; being above 95% accuracy. In other words, close predictions resulting in higher accuracy values were obtained for shorter prediction periods. This was the case until after April 18 when a significant dip was experienced and was subsequently followed by a continuous decline.
In addition, forecast for a two-month (April-May 2021) window period ( Figure 7a) and the prediction accuracy ( Figure 7b) were presented. It was observed that as number of forecast days increased, MAPE decreased; having a value of 84.75% at the end of the period. This confirmed that ARIMA modelling is better suited for short-term predictions and less so for longer-terms.
Bitcoin daily closing price time series with forecast values for April -May, 2021 and April -June, 2021, respectively, are shown in Figures 8a and 8b. The model for the selected time periods predicted an upward movement of Bitcoin price. This is in agreement with predictions of some market analysts (McGlone, 2021;Bambysheva et al, 2021;White, 2021). A series of events in May 2021, however, led to an unexpected decline in the fortunes of Bitcoin. Specifically, Bitcoin plummeted to nearly $30k after reaching a record high of more than $64k in April 2021. This can be ascribed to external factors which have been broadly categorized as follows:
Influencers' Comments
Comments of influential persons/investors that directly impact prices. For example, the tweet of Elon Musk on May 12, 2021 in which he said Tesla will no longer accept Bitcoin as payment method due to concerns over its energy usage, leading to loss of billions of dollars in value of the crypto market. Another tweet on June 4, 2021, suggesting 'breakup' with Bitcoin led to a 4.3% decline in price. (Browne, 2021).
Government Intervention:
For instance, Chinese Government's ban on May 18, 2021 whereby domestic banks and financial institutions were forbidden from supporting Bitcoin mining and transactions due to energy and money laundering concerns (BBC, 2021; CBS, 2021).
Other influences
Bitcoin price fluctuations occurred for various other reasons including but not limited to media coverage, actions of Speculators and availability of Bitcoin.
While these factors would have been mostly reflected in the historical data, not all influences can be captured and due to unforeseen events, this can lead to variances between forecasted and actual values of Bitcoin. This highlights the importance of including external factors into forecast models.
Conclusion
Rapid advancement in digital technology has created an opportunity for DSP techniques in engineering to be applied to the Finance industry; such as price forecast of financial products using financial time series. Various methods have been developed and applied in time series analysis which include ARIMA, ANN and Hybrid models which combine the strengths of the ARIMA and ANN models. While models based on neural networks have been found to present higher accuracy in some cases, the ARIMA model is selected for its robustness, simplicity, ease of application and high accuracy for short term forecasting.
In this paper, we have conducted the forecast of Bitcoin daily closing price using the ARIMA model in order to assist investors in their investment decisions. This is because price forecast of Bitcoin constantly attracts attention due to its direct monetary advantage. MATLAB was used for model identification, parameter estimation, diagnostics and forecasting and ARIMA (6,1,12) model was selected as the most suitable based on number of significant coefficients, values of volatility, AIC and BIC, and having all coefficients within the significance interval for residual diagnostics. Prediction accuracy or mean absolute percentage error (MAPE) was obtained for a twomonth (April-May 2021) test window. ARIMA (6,1,12) model gave very close forecast values for the first seven days of forecast (April 1-7, 2021) with a prediction accuracy of 99.94%. This however decreased for longer forecast periods; dropping to 99.59% for 14 days forecast period (April 1-14) and 95.84% for 30 days (April 1-30) forecast period. Despite the reduction, these are considered good results; being above 95% accuracy. Thus, this reinforces the ease of application and suitability of ARIMA models for short -term forecast only; as against more complex models such as artificial neural network models. The study confirms that the effect of the global pandemic on Bitcoin price was positive with surge in its value which can be attributed to investors using it as hedge against uncertainties raised by the pandemic and the subsequent national restrictions and lockdowns which led to the suppression of major world economies and global recession. The time series for Bitcoin prices with forecasted values showed an upward trend of daily closing price but this is contrary to actual market value. This variance can be attributed to the effect of external factors on Bitcoin prices such as tweets/comments of influential persons (e.g. Elon Musk), government intervention (e.g. China's ban on institutional support for Bitcoin mining and transactions) and other factors (e.g. media coverage and activities of speculators) which all combined to weaken prediction.
In conclusion, even though the ARIMA model has been shown to present efficient capability in generating short-term forecasts; other external factors and influences as stated above must also be taken into consideration for a more robust forecast. | 4,883.2 | 2021-12-01T00:00:00.000 | [
"Business",
"Economics"
] |
Plan and Design Public Open Spaces Incorporating Disaster Management Strategies with Sustainable Development Strategies: A Literature Synthesis
Current focus of planning and designing public open spaces has been mostly given on creating sustainable cities contributing to its’ three pillars; economic, social and environmental. However, the negative implications of rapid urbanization and the implication of climate change has increased disaster risk in cities mounting more pressure on the path of sustainable development. Therefore, it is imperative to incorporate the enhancements of disaster resilience with the sustainable development strategies. Yet, the integration of disaster management strategies with planning and designing public open spaces, remains unrehearsed within the urban planning context. Accordingly, this ongoing research study emphasize the need of incorporating disaster management strategies with sustainable development strategies when planning and designing public open spaces in cities. This paper first analyses the disaster management literature, providing evidences of potential use of public open spaces as an agent of recovery, to provide essential life support, as a primary place to rescue and for shelters and potential for adaptive response. Secondly, the paper cross analyses planning and designing literature with disaster management literature to find out the methods and approaches that can be used to harness the identified potentials. Finally, the paper suggests set of strategies to plan and design public open incorporating disaster management strategies with sustainable development strategies.
Introduction
Planning and designing cities towards the sustainability is evidently a challenging task due to long experiencing environmental, social and economic problems such as poverty, crime, poor sanitation, poor housing, air, water, and noise pollution, etc. Moreover, the rapid urbanization causes the concentration of these type of issues in cities at the alarming rate. Further, all these negative implications of rapid urbanization increase the disaster risk in cities by pushing more pressure on land and services resulting inadequate resource management, settlements in hazard-prone areas, lack of capacities, unclear mandated for DRR at the local level and decline of ecosystems and so on [1]. Apart from that, the implications of climate change further increase the risk of natural disasters in cities with an increase in weatherrelated disasters [2] and accelerated global sea-level riserelated coastal hazards [3]. Further, this increase of disaster risks in cities mounts more pressure on the path to sustainable cities. Therefore, it is inevitably important to incorporate the enhancement of disaster resilience into cities' sustainable development.
With this understanding of the importance of making cities resilience to disasters, León and March [4] state urban planning and designing can play a vital role through its ability to integrate multi-dimensional aspects affecting disaster risk reduction. Adding to this, UNISDR [1] states that strategic planning and design of spatial elements and their influence on the natural and built environment are the directives of city's capacity to absorb and recover from the effect of disasters. These spatial elements in cities may vary from buildings, ports, waterbodies to parks, playgrounds and streets. Out of these spatial elements, public open spaces can be considered as one of the key spatial element in modern cities which can play a dynamic role effecting to the economic, social and environmental life of cities. Public open spaces have the potential to act proactive manner, contributing multi-scale within the entire city to solve the current and future problems and issues [5]. However, this potential of public open spaces has not been fully recognized in enhancing cities' resilience to disasters. Confirming this, Hossain [6] argues that the role of public open space to enhance the city's resilience, especially to encourage the adaptive response following a disaster, has not been fully discovered yet. Contributing to this research need, this paper first analyses the existing literature on the potential uses of public open spaces to enhance the cities resilience. Secondly, the initial findings will be cross-analyzed with the planning and designing literature to find out the strategies that can be used to plan and design public open spaces as a strategy for disaster resilience within the sustainable city concept.
Research method
This paper is based on the findings of a literature analysis which was carried out as part of an ongoing Ph.D. research study. Accordingly, a comprehensive review of the literature was carried out covering journal papers, book chapters, conference papers as well as local and international reports within the subject area. At the same time, this literature review has been presented in different national and international audiences where the literature review has been critically examined and modified according to the feedback received.
The need for a new focus on public open spaces
The use of public open spaces was first identified in the 19th century in the United Kingdom and the United States, as a mode to improve the health and quality of life of the working class people who lived in squalid and congested living environment [7]. Further development of the use of public open space in cities, recognized the socio-cultural value of it. Accordingly, it was identified that Public open spaces in cities act as a place to celebrate cultural diversity, to engage with a natural environment, a place to meet the strangers and one can transcend and the other can be anonymous [8]. Adding to this, Carmona [9] states that the external public open spaces provide life breath to the cities by adding recreational opportunities, venues for special events, wildlife habitats and opportunities for the movement of the people. Then the most popular idea of using public open space was to protect the ecologically sensitive areas and other natural resources while providing a recreational use to it [8]. After the introduction of the recreational use to public open spaces, it was identified that there is also a huge economic benefit of adding public open spaces to the city's development. For the reason, natural and recreational elements increase the property value and therefore the tax revenues of the municipalities [10].
With the consideration on climate change and environmental pollution, the environmental benefits of using public open spaces were also identified, including air and water purification, wind and noise filtering, reduce the surface runoff of rainwater and microclimate stabilization [10], [11]. Apart from these socio-cultural, economic and environmental benefits, public open spaces are also identified to improve the mental and physical health of city dwellers. Attractive large public open spaces encourage the walkability and physical activities of the people which can potentially contribute to the health of local residents [7]. Further, urban green parks help to reduce the stress of city dwellers and provide the sense of peacefulness and calmness contributing to the mental health of city dwellers [10].
In summary, the current focus of planning and designing public open spaces have been given on three main areas; social, economic, and environmental which are considered as three main pillars of sustainability. However, the sustainable development should also incorporate the improvements of disaster resilience [12]. Yet, it is little known in the field 'how to use these public open spaces for disaster resilience'. Accordingly, the following literature synthesis analyses the potential uses of public open spaces for disaster resilience.
Public open spaces with a disaster management focus
The analysis of the existing literature revealed that the public open spaces in a city have the potential to be used in three main areas in disaster management: emergency response, recovery, and mitigation.
Emergency response and recovery
Literature related to earthquake and tsunami events, disclose that public open spaces within cities have a significant potential to be used for emergency evacuation and recovery. For instance, Allan and Bryant [13], study the role of public open spaces in an earthquake event in San Francisco, Northern California. This study reveals that, after a major earthquake, open spaces within the city act as a 'second city' using the spaces for simple to complex services such as gathering, building shelters, distribution of goods and service, temporary inhabitation, and commemoration. Therefore, their study highlights the importance of having different typologies of open spaces varying from small squares to parks and playgrounds which can be used for different functions in emergency response and recovery. When using public open spaces for emergency evacuation and recovery, Fuentes and Tastes [14] highlight the importance of consideration on connectivity between these public open spaces. This also confirms the value of Allan and Bryant's discussion as if the open spaces act as a 'second city', these spaces should have a better linkage among them. Further, these studies [14] inform that the connectivity needs to be built through the relationship between open space, resilience and urban design as a fundamental way to plan and design resilient cities.
Adding to this, the literature on Tsunami events [14], [4], establish that public open spaces in cities are assets for 'rapid resilience'. For instance, the studies on tsunami-prone coastal urban communities demonstrate that public open spaces can be used to provide safe assembly, to distribute emergency services and utilities, such as first aids, fresh water, electricity, and communication [4]. Therefore, public open spaces in coastal cities need to be planned and designed with a focus on tsunami resilience considering the factors such as location, capacity and terrain qualities. This confirms that the factors may differ from one disaster to another varying from accessibility, connectivity, terrain quality and capacity yet, there is a significant potential of using public open spaces for emergency response and recovery after a disaster. Further, it was noted that having different types of public open spaces focusing on different functions in disaster resilience also an added advantage for a disaster resilience city.
Disaster mitigation
Apart from emergency management and recovery, the disaster mitigation focused literature reveals that the Public open spaces can also be used to mitigate the disaster risk. Most commonly, flood mitigation strategies first identify the flood-prone areas and to protect these areas from unauthorized encroachments and future development, authorities propose to allocate these spaces for open space uses [15], [16]. Conversely, the National Tsunami Hazard Mitigation Program [17], also emphasizes the use of open spaces as an element to mitigate the Tsunami Risk. They introduce seven basic principles of planning and designing for Tsunami events. Out of these 7 principles, the second principle describes, that Tsunami hazard areas need to be allocated for openspace uses [17]. However, most of these discussions, emphasize the need of keeping tsunami hazard areas as open-spaces and confine the uses in conservation and preservation perspective rather than using it as an asset in city development.
Identifying this need, researchers alike Kubal, Haase et al. [18] promote the idea of using these spaces not merely for preservation and conservation, but for the publicly used spaces such as wildlife habitat areas and nature-related recreational activities. In supporting this view, Ardekani, and Hosseini [19] state that tsunami setback areas can be potentially used for agriculture, open-space or scenic amenity. However, this does not mean to promote an additional development in vulnerable areas, but it should be planned and designed to make the use of hazard-prone areas safer to the community and to get the highest and best use of the urban spaces in cities.
Discussion
Above literature, synthesis revealed that the public open spaces have the potential to be used for emergency response, recovery and mitigation with a focus on making cities resilience to disasters. However, to harness these potentials, public open spaces need to be planned and designed with the focus on disaster resilience use of it. Then the question is 'how to plan and design public open spaces in cities with a focus on disaster resilience' and 'what are the strategies that can be used to plan and design public open spaces for disaster resilience'. Enquiring this, the identified potential uses were crossanalyzed with the sustainability-focused planning and designing literature as follows.
Strategies to plan public open spaces for emergency response and recovery
In cities, the land is a scares resource. Therefore, it is imperative to get the highest and best use from whatever the available land. At the same time, allocating open spaces for the sole use of disaster emergency or recovery is not a practical solution as disasters may occur sometimes seasonally (seasonal flooding, winds, and storms) and some are unpredictable (floods, hurricanes, tornadoes, volcanic eruptions, earthquakes, tsunamis). Therefore, planning open spaces for the sole purpose of emergency planning or recovery without having any connection with everyday life of the city can lead to an extra set of problems such as unsafe isolated places, unstructured open spaces, maintenance cost to municipalities, etc. Further, this is not only a threat to sustainable city concept, in the long run, but these places also will not be physically prepared and will not be identified by the public for disaster emergency or recovery [20]. Therefore, these open spaces for disaster resilience need to be planned aligned with everyday life of the city. In supporting this view, Allan and Bryant [20] state that when emergency management plans and recovery plans are aligned with everyday life of the city through urban planning and designing strategies, it becomes more effective. The studies on tsunami rapid resilience [4], further confirm that public open spaces need to be planned to function well in both emergency and non-emergency situations. Accordingly, it can be understood that, for the effective use of public open space as a strategy for emergency response and recovery, it needs to be planned and designed, aligned with everyday life of the cities.
However, planning and designing public open spaces for emergency response and recovery having a connection with day to day life in cities is not a simple task. Planning for everyday use of the city may include recreational facilities, promote walkability, cycling, green spaces and so on. If the same space needs to be used for emergency response and recovery. It may include, assembly points, sheltering, space to distribute goods and services. Then the place should be planned in a flexible manner allowing the variety of uses. Having a connection with this need, planning and design literature suggest a method call 'loose space'. According to Franck and Stevens [21], 'Loose-fit' spaces are not planned or designed for a specific use. When the place is not planned for a specific use, that spaces are loose, unregulated and open-ended, where the user will decide the use of it rather than following planner's decisions. In supporting this view, Thompson [8] states, unlike the designed space, "Found" spaces often serve people's wide range of needs. Applying the same theory, if the public open spaces can be planned and designed as a loose-fit space with minimal designed features, it has a significant potential to serve the everyday life of the city as well as for a disaster emergency and recovery. For the reason that, the user has the freedom to choose the use of the space, in day to day life the user will be the city dwellers who want to relax, play, walk, and cycle. In an event of the disaster, the user will be evacuees who were evacuated from a hazard-prone area or who need further inhabitation due to loss of houses. Accordingly, designing selected public open spaces as loose space can be a potential strategy to plan public open spaces for emergency response and recovery.
The analysis of the literature further identified the potential use of different types of open spaces for different functions in emergency response and recovery such as shelter, first aid, distribution of goods and services. In relation to this need, planning and designing literature informs that mixing various types of public open spaces to the city layout can address a variety of need of a city. Further, the diversity of public open spaces with their individual characters invite different uses contributing to the city's functionality, vitality and sustainability [8]. These places can be any type of external public open spaces providing leisure opportunities, places for special events, wildlife habitats and even a place just for the movement of the people [9]. In the combination of this notion with the aboveidentified potential use, mixing diversity of public open spaces to the city layout focusing both city's vitality and functionality in disaster emergency is a potential strategy to be used in future cities.
It was also identified that the city's open spaces can act as a 'second city' after a major disaster contributing to multifaceted services such as gathering, sheltering, and temporary inhabitation. Adding to this, studied [13] demonstrate that, when the recovery plans are successfully integrated with urban design, it facilitates to see the city's open spaces as a 'second city' with the network of open spaces. Conversely, Fuentes and Tastes [14] emphasize the need of designing an open space network contributing to urban resilience based on the studies on 2010 earthquake and tsunami in Chile; case study on San Pedro de La Paz. In a similar vein, urban planning strategies value the notion of open space network under the sustainably built environment concept. Confirming this, Rogers and Sukolratanametee [22] emphasize that integrated network of parks and open space can bring multiple benefits such as encourage the walkability, facilitate the sense of community, beneficial for neighborhood designs and promote the interlinked recreational facilities. Adding to this, Carmona [9] states, the network of open spaces connected with green corridors integrate the natural and the built environment which is a key to create sustainable cities. Accordingly, it can be identified that, designing a network of Public open spaces have a significant potential to facilitate both disaster resilience, urban resilience and sustainable cities.
Strategies to plan public open spaces for mitigation
Disaster resilience literature identified that disaster risk and exposure can be reduced by preserving hazard-prone areas as open space uses and possibly can be used as publicly used spaces. Further, as it was mentioned, the land is a scares resource in cities. Therefore, getting the highest and best use from available space can be considered as a vital solution. At the same time, it was identified that public open space can bring many economic benefits to the municipality contributing to economic sustainability. Crompton [23] demonstrate that market-driven factors demand public parks and open spaces as it delivers the highest and best use of public land. Accordingly, open spaces which are preserved and conserved for mitigation purposes can be possibly used for Public open space uses with minimal intervention to the land and with proper safety measures.
Further, this potential conversion of hazard-prone areas to public open spaces should not be an additional development in vulnerable areas, rather it should be a benefit for both mitigation, community resilience and wise use of the space in cities. For instance, Drake and Kim [24] introduce the notion of urban sponge park where they converted a marshy wetland into a residential area and public parks were used as working landscape to divert excess storm water run-off for use in the public park along the canal. Likewise, the urban sponge park achieves multiple objectives including liveable cities, environmentally sustainable and flood resilience built environments. Accordingly, it can be understood that public open spaces need to be planned and designed in a manner addressing multiple objectives incorporating sustainability, disaster mitigation, livable community, protecting hazard-prone areas, protecting wildlife habitat, and enhancing economic vitality.
In summary, the points which were discussed in the discussion section can be graphically presented as shown in Fig. 1
Conclusions
This paper has provided an overview to expand the current focus of planning and designing public open spaces towards enhancing disaster resilience in cities. Accordingly, it was first discussed the need for a new focus informing that the current focus is given on sociocultural, environmental and economic benefit and there is a significant need to focus on disaster resilience. Then, the paper analyzed the literature evidence which discusses the potential uses of Public open spaces for disaster resilience and summarized that public open spaces have the potential to act as a facilitator for emergency evacuation, as an agent of recovery and as a strategy for mitigation.
Then the identified uses were cross-analyzed with the sustainability-focused planning and designing literature inquiring the strategies that can be used to plan and design public open spaces with a disaster focus. Finally, the cross analysis suggested six main strategies. to both disaster resilience and urban resilience, was identified to facilitate the City's open spaces system to act as a 'second city' after a major disaster. Finally, for the potential conversion of hazard-prone areas which allocated for mitigation purpose into public open spaces 5. Plan and design public open spaces addressing multiple objectives (incorporating sustainability, disaster mitigation, livable community, and enhancing economic vitality). 6. Get the highest and best use of available spaces in cities. Furthermore, these literature-based findings can be evaluated and tested with a disaster-specific focus or context-specific focus from further researches. | 4,825.8 | 2018-11-14T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Does evolution of echolocation calls and morphology in Molossus result from convergence or stasis?
Although many processes of diversification have been described to explain variation of morphological traits within clades that have obvious differentiation among taxa, not much is known about these patterns in complexes of cryptic species. Molossus is a genus of bats that is mainly Neotropical, occurring from the southeastern United States to southern Argentina, including the Caribbean islands. Molossus comprises some groups of species that are morphologically similar but phylogenetically divergent, and other groups of species that are genetically similar but morphologically distinct. This contrast allows investigation of unequal trait diversification and the evolution of morphological and behavioural characters. In this study, we assessed the role of phylogenetic history in a genus of bat with three cryptic species complexes, and evaluated if morphology and behavior are evolving concertedly. The Genotype by Sequence genomic approach was used to build a species-level phylogenetic tree for Molossus and to estimate the ancestral states of morphological and echolocation call characters. We measured the correlation of phylogenetic distances to morphological and echolocation distances, and tested the relationship between morphology and behavior when the effect of phylogeny is removed. Morphology evolved via a mosaic of convergence and stasis, whereas call design was influenced exclusively through local adaptation and convergent evolution. Furthermore, the frequency of echolocation calls is negatively correlated with the size of the bat, but other characters do not seem to be evolving in concert. We hypothesize that slight variation in both morphology and behaviour among species of the genus might result from niche specialization, and that traits evolve to avoid competition for resources in similar environments.
Introduction
Studies of character evolution help illustrate the relative importance of speciation rates, extinction selectivity, as well as ecological and genomic factors in macroevolution [1,2]. By determining the ancestral states of characters and tracking subsequent change over time, we can examine the morphological and ecological differences among species to better understand speciation processes [3]. The distribution of character states in a group may evolve by several routes. Shared character states might be the result of evolutionary stasis, in which morphology or behaviour accrue negligible or no change in a lineage over long periods of time. In this scenario, the ancestral state is retained in descendent lineages regardless of the genetic distance and phylogenetic divergence among species [4]. Similar character states might also evolve by convergent evolution, wherein these traits evolve independently in unrelated lineages as a result of adaptation to similar environments or ecological niches [5][6][7][8]. Functionally correlated traits might also evolve by concerted evolution, whereby the adaptive values of a specific behaviour depend on a morphological state [9,10]. A number of diversification processes have been described to explain variation of morphological traits within clades with high divergence rates [11][12][13]. However, not much is known about these patterns in complexes of cryptic species with low morphological disparity. Both evolutionary conservatism and convergence can underestimate phenotypic divergence, and both mechanisms can produce similar evolutionary outcomes [14,15]. The study of processes underpinning the evolution of crypsis can only be investigated when species boundaries are well defined. However, because of their similarity, cryptic species are difficult to distinguish based on morphology alone. The precise identification of species within these complexes therefore often requires the study of genetic or behavioural data [16][17][18][19]. The mastiff bats of the genus Molossus include groups of morphologically similar but genetically distant species, and other groups of species that are morphologically divergent but genetically similar [20-23], which until recently have hindered the resolution of systematic relationships among species of the genus. However, a genomics approach has resulted in a robust phylogeny [24] so that Molossus is an excellent case study of the evolution of morphological and behavioural characters to investigate unequal trait diversification in a monophyletic group with variable rates of evolution among lineages.
Molossus are mainly Neotropical in distribution from the southeastern United States to southern Argentina, including the Caribbean islands [25]. Molossus species are aerial insectivores and are non-migratory, although they have numerous wing adaptations that are associated with high dispersal ability and rapid flight [26][27][28][29]. A recent study using Next Generation Sequencing (NGS) [ . Each cryptic complex is not reciprocally monophyletic, but instead includes morphologically similar species based on characters traditionally used to identify taxa in the genus, such as size, hair patterns, and cranial characters [20,23].
In Molossus, several morphologically similar species (e. g. M. bondae, M. molossus, and M. coibensis) occur in sympatry in the mainland Neotropics and can be distinguished based on their echolocation calls [30], although morphological characters are also necessary for identification. Several of these diagnostic morphological characters are also ecologically and behaviorally important. For example, differences in the infraorbital foramen have been connected to thermoregulation through vasodilation [31] and to sensory acuity of the maxilla in mammals [32]; and the sagittal crest is correlated to bite strength, and consequentially feeding habits [33,34]. Hair patterns have also been associated with defensive and offensive behaviours [35], mate signaling, and camouflage [36]. Dentition is associated with diet, including mechanical aspects of feeding and processing of diverse food textures [37,38]. The occipital bone is a curved structure in the rear of the skull perforated by the foramen magnum, through which several nerves (including the spinal cord) and ligaments pass. This bone contributes to the protection of the brain, but character states of this structure do not correlate to phylogenetic data [39].
In bats, phylogenetic relationships may impose constraints on potential echolocation call design within families [40] and genera [30][31][32][33][34][35][36][37][38][39][40][41] and may explain the differences in call structure within some groups. Conversely, echolocation call frequency might correlate with body size [42], frequency partitioning among species [43], prey size [44], and selective pressures such as foraging strategy and habitat structure [45][46][47]. Although many hypotheses have been proposed to explain diversity in call design, previous studies support the idea that echolocation is evolutionarily flexible and is constantly adapting to maximize prey detection by adjusting to an optimal aural field of view and novel environments [48,49].
Echolocation call patterns are generally organized into search, approach, and terminal phases [50]. Search parameters are limiting factors for insect detection and can give information on how the bats optimize their echolocation calls to search for prey [51,52]. Molossid bats have a long, narrowband search call, a common pattern for insectivorous bats that forage in open areas [30,53]. A narrow bandwidth concentrates the energy of the signal, which helps in the detection of prey at long distances [54,55]. In Molossus, call designs may vary between two to three echolocation pulses depending on species, starting with a lower-frequency pulse, followed by one or two pulses at successively higher frequencies [30,56,57]. This increase of frequencies is hypothesized to allow the detection of a larger number of potential prey sizes and maximize successful capture [30]. Among Molossus, echolocation call designs may also vary in duration, harmonics, and structure depending on the species [30,[57][58][59].
Documenting distinct stereotyped echolocation calls for a group of closely related species would allow us to establish the predominant factors (e.g. phylogenetic stasis, adaptation) involved in evolution of call structure. In this study, we examined traits that varied significantly among some species of Molossus, to test the hypothesis that any lack of variability in morphological (i.e., external and cranial features) and behavioral (i.e., echolocation calls) character states is the result of evolutionary stasis. According to this hypothesis, we would expect that variation among morphological and/or echolocation call character states is correlated with phylogenetic relationship. Alternatively, if morphology and/or echolocation call parameters are independent of phylogeny, these traits are most likely evolving stochastically or via local adaptation. In addition, we examined whether morphology and echolocation calls evolve in concert, and the potential association between morphological characters and echolocation call characters states. However, if morphology and echolocation call design are uncorrelated, these suites of traits are likely evolving independently.
Phylogenetic analysis
This study conformed to the animal care and use guidelines of the American Society of Mammalogists [60] and was approved by the Animal Use Committee of the Royal Ontario Museum. Loureiro et al.
[24] reconstructed a well-resolved phylogenetic tree of Molossus at the species level based on 29,448 filtered SNPs which we used in this study of the evolution of morphology and echolocation calls. The de novo alignment comprised 189 samples from 14 recognized species of Molossus and representatives of two other genera of molossids, Promops centralis and Eumops auripendulus, used as outgroups following Ammerman et al. [61] and Gregorin and Cirranello [62]. We used the maximum likelihood phylogeny provided by Loureiro et al. [24] as an initial tree, and individuals were assigned to species. We reconstructed a Bayesian tree using the program SNAPP v1.1.10 [63] implemented in BEAST [64]. We generated the XML file required as input by SNAPP using the Ruby script (snapp_prep.rb) [65]. We ran SNAPP for ten million generations using default priors. Convergence of the runs was assessed through estimated Effective Sample Size (ESS) values and trace plots in Tracer [66]. After removing 10% of the samples as burn-in, we constructed a species tree using TreeAnnotator [67].
Morphological data
We analyzed 660 specimens from the 14 recognized species of Molossus and two outgroup species (S1 Appendix) [24, 68,69]. (Table 1). For the echolocation call parameters, we analyzed six parameters: 1) Call duration: time from the beginning to the end of a call. 0: Long-more than 0.25 sec; 1: For species identification in Aruba, Bonaire, Cayman Islands, Curacao, Dominican Republic, Mexico, and Nevis, we captured individuals that were identified to species, measured the forearm length, and released them while recording their calls. The person releasing the bats was about 10 m away from the person recording the calls. Releases were conducted in large open areas and the bats were visually followed in flight until the signal of the calls ended, allowing us to record typical search calls [30, 56, 57]. The calls obtained in Belize, Brazil, and Guyana were from free flying bats, but the species of Molossus identified in the call files were also previously caught in mist nests in the respective areas where the calls were recorded. Recordings from Panama were obtained from hand released bats and are described in Gager et al.
[57]. Free flying calls were recorded during the first 3 hours after sunset from areas where only one species of Molossus occurs (Aruba, Bonaire, Cayman Islands, Curacao, Dominican Republic, Montserrat, and Nevis) and were compared with the files originating from hand released calls. In total, we obtained echolocation calls for 12 of the 14 species of Molossus. Hand released calls were recorded using Wildlife Acoustics EM3+, Avisoft-UltraSoundGate 116H, and Avisoft-RECORDER USHG. Passive calls were obtained with Wildlife Acoustic SM4BAT FS to a maximum file duration of 15 seconds and initially processed with Kaleidoscope Pro 5 software (Wildlife Acoustics, Inc) followed by manual verification of species. We analysed the search calls in Raven [70] using a Hamming window, FFT = 512, and an overlap of 93%. Faint calls (less than 30 dB relative amplitude) were removed from the dataset.
We measured the duration, peak frequency, minimum frequency, maximum frequency, bandwidth, and pulse interval of a maximum of 10 search calls per bat recording. We calculated both duty cycle (call duration/ pulse interval � 100) and repetition rate (100 ms/ pulse interval). We also analyzed qualitative characteristics of the call, including maximum number of call alternations in a pulse sequence observed for a species, direction of the end slope, number of harmonics of each pulse and noted the harmonic with highest energy. Attack sequences were not included in the analysis because they were recorded in less than 50% of the studied species. Only the harmonic with highest energy for each species was considered for analysis.
Each quantitative measure was plotted individually using the mean and the standard deviation of each species. If the measurements could be divided into two or more groups with no overlapping of mean and standard deviation in the graphs, they were coded and transformed to discrete characters (Tables 1 and 2). Measurements that did not demonstrate variation among species, with means and standard deviation overlapping in the plots (bandwidth, duty cycle, repetition rate) were discarded and not used in further analyses [71][72][73]. The characters were equally weighted and multi-state characters were treated as unordered. The coded characters were included in a data matrix for analysis, where missing data were denoted as "?" (Tables 1 and 2).
Data analysis
To determine if the ancestral states were retained in the descendant lineages, we estimated the ancestral characters states for morphological and behavioral characters. Maximum likelihood ancestral reconstructions of the evolutionary path of character state transformation were estimated using the phylogenetic tree recovered from the SNPs analysis. Ancestral states of traits were estimated using Mesquite 3.1 [74] based on a one-parameter model. We used the R package phytools [75] to map characters on the phylogenetic tree. The phylogenetic signal measured by correlations between phylogenetic distances and morphological and echolocation distances were evaluated using the R package phylosignal [76]. We also tested the strength of stochastic Brownian Motion for both morphological and echolocation characters using the package phylosignal [76] by computing the indices of Blomberg's K and K � , Abouheif's Cmean, Moran's I, and Pagel's Lambda. Results of these simulations can be used to compare the performances of the different methods and interpret values of indices obtained with real trait data, for a given phylogeny [76]. Independent contrasts between quantitative parameters of echolocation calls and quantitative morphological characters were analyzed using the R package phytools [75]. This approach assumes that species have a common history represented by their phylogenetic relationship, and therefore are not independent entities. Independent contrasts analysis removes the phylogenetic components in the correlation of two variables by generating phylogenetically independent variables from the original character values [77]. Correlations between independent contrasts of variables were examined using least squares linear regressions in phytools [75]. To test for correlation between echolocation call parameters, we conducted linear regression analyses of frequency measurements (maximum, minimum, and peak frequencies) versus bandwidth and duration in R 3.6.1.
Phylogeny
Bootstrap support for nodes in the Snapp tree among the 14 pre-defined species of Molossus is > 85% (Fig 1). The species M. fentoni from Guyana and Ecuador is the sister group of all other species in the genus. The next species to diverge is M. alvarezi from the Yucatán Peninsula, Central America and South America, which is the sister group of the remaining species. Table 1. https://doi.org/10.1371/journal.pone.0238261.g001
Morphological data
The variation of the six morphological characters were consistent with interspecific variation within and among species (Table 1)
Echolocation data
We recorded a total of 1193 calls from 8 species of Molossus and from Promops centralis ( Fig 2; Table 2). In addition, 81 calls from M. coibensis, which were published in Gager et al.
[57], 31 calls of M. sinaloae provided from SONOZOTZ project and CONABIO, Mexico were also analyzed (S2 Appendix). Published information on echolocation calls from several species of Molossus was also used [30, 65,78], including information on two species of Molossus (M. bondae and M. rufus) and an outgroup (E. auripendulus) for which we did not have recordings. Echolocation calls from M. currentium and the recently described species M. fentoni [21] were not recorded, and information on the calls of these species is not available in the literature. Therefore, all echolocation characters of M. currentium and M. fentoni were coded as unknown.
Calls from the same species that were released by hand and those recorded in the field did not show significant differences in mean values (p <0.05), although free-flying calls increased the standard deviation of the measurements when added to hand-released calls. Free-flying calls appear to be more variable than hand-released calls, but the datasets are not significantly different and were combined for analysis. Only six of 11 parameters of echolocation calls were consistent among species and among different populations within each species. These six parameters were considered for further analyses, which included duration of the call, lowest frequency, highest frequency, peak frequency, highest-energy harmonic, and shape of the end slope.
Data analysis
The six morphological and six echolocation call characters were mapped onto the phylogeny generated by the SNP data (Fig 3). The correlograms showed that size, hair band, and upper incisor shape were positively correlated with phylogenetic distances at p<0.05 (Fig 4). The r values of each of these three correlations varied from 0.21 for the upper incisors to 0.39 for the hair band. The distances of the three remaining morphological characters (occipital shape, infraorbital foramen, and sagittal crest) were not significantly correlated with phylogenetic distances (p>0.05) (Fig 4). The correlogram analysis yielded no correlation between any individual echolocation call parameter and phylogenetic distance (P>0.05) (Fig 5).
The indices used to calculate the stochastic Brownian Motion model obtained higher values for echolocation call traits, compared to morphological traits (Fig 6). The indices Blomberg's K and K � , and Pagel's Lambda have significant Brownian Motion for both data sets (p<0.05) indicating stochastically distribution of characters in 70%-80% of the phylogeny. The Abouheif's Cmean and Moran's I values showed significant mean values for the Brownian Motion model in 38% to 42% of the phylogeny for morphological characters and 40% to 55% for echolocation parameters.
Significant negative relationships between independent contrasts of forearm size to the call parameters of minimum frequency (p = 0.01, r 2 = -0.33), maximum frequency (p = 0.01, r 2 = -0.34), and peak frequency (p = 0.04, r 2 = -0.26) were found in regression analyses for all species (Fig 7). However, no significant relationship was found between any other morphological character and echolocation call parameters when the effect of phylogeny was removed (p>0.05). Among echolocation call variables, significant linear regression values were also found between call duration versus maximum frequency (p<0.001; R = 0.62), minimum frequency (p< = 0.02; R = 0.26), and peak frequency (p<0.01; R = 0.39).
Estimated ancestral reconstructions for each morphological and echolocation call character suggest that the ancestor of the Molossus lineage was probably of large body size (75%), had dichromatic dorsal hair with a wide pale band at the base (77%), long and thin upper incisors (94%), delicate and triangular occipital shape (70%), laterally directed infra orbital foramen (79%), and undeveloped sagittal crest in males (99%). The morphological ancestral reconstruction analysis suggests that the ancestor of Molossus was very similar to the extant species M. sinaloae and M. alvarezi, but that characteristics such as monochromatic fur, pincer-like incisors, and small body size are derived states that emerged more than once in the evolutionary history of the genus.
The echolocation call of the ancestral Molossus was likely short in call duration, less than 0.13 ms (82%), and the first harmonic had the highest energy (100%) ( Table 3). The other ancestral states for echolocation call parameters could not be recovered with high probability. However, based on the relationship found between size and echolocation call frequencies, we hypothesize that the ancestral Molossus also had a minimum frequency less than 26 kHz, a maximum frequency less than 35 kHz, and peak frequency less than 29 kHz. We could not predict the structure of the end slope in the ancestral node because this parameter does not seem to be correlated to size or to phylogeny.
Discussion
We tested the hypotheses that, in Molossus, morphological and behavioural states are the result of evolutionary stasis and that morphology and echolocation calls evolved in concert. Distribution of character states most likely evolved by more than one modality. Morphology appears to evolve as a mosaic of adaptation, random drift, and stasis. However, call structure is independent of phylogeny in Molossus, evolving stochastically, and through local adaptations. Frequency of echolocation calls are negatively correlated with body size, and both characters seem to be evolving in concert, but variation in other morphological and behavioural characters among species are not correlated. Therefore, slight variation in both morphology and behaviour among species of the genus might evolve stochastically or via character displacement to avoid competition for resources in similar environments.
Evolution of echolocation calls and morphology in Molossus
Our results show that morphology has a stronger evolutionary signal than behaviour, which is consistent with other studies. In a comparative study using a variety of organisms and traits [79], behaviour is less correlated with phylogeny than are morphology, life history, and physiological traits. Kamilar and Kooper [80] studied phylogenetic signals in primates and reported that although phylogenetic signal varies across traits and categories, behavioural characters had only a moderate to low correlation with the evolutionary branching pattern of the group. A correlation between some morphological and echolocation characters has also been reported in the literature [81,82], which agrees with the findings found herein. A positive, but low, correlation between three individual morphological characters and phylogenetic distances suggests stability of those character states in the phylogeny, supporting the hypothesis that morphological stasis is occurring in some clades within Molossus. These characters are distributed in different morphological suites, including hair pattern, forearm length, and dentition, which suggest that stabilizing selection might be generalized across the phenotype within some groups of species. This pattern has also been observed in cryptic groups of ants [8], fishes [83], and lizards [17]. However, stasis localized in individual clades of the phylogeny might explain why similar species do not always form monophyletic groups. For example, body size is one of the most common traits used to characterize species of this genus, but it only has a correlation of 34% with phylogeny. Some closely related taxa may vary considerably in size from one another, and similar sized groups of bats may not be monophyletic, but instead consist of relatively distantly related species (Fig 3) [24, 84]. Thus, suites of characters traditionally used to define species and species groups in Molossus have led to a confused taxonomy. Three other morphological characters that have been commonly used in species identification and systematic relationships in the genus (shape of the occipital bone, shape of the infraorbital foramen, and the relative development of the sagittal crest) are not strongly correlated with phylogeny. The apparent similarity among species in these character states seems to have arisen multiple times among phylogenetically divergent species, which explains the three nonmonophyletic cryptic complexes within the genus The lack of correlation between morphological and phylogenetic distances indicates that these traits are evolving stochastically or through convergence as adaptation to a particular environment or feeding guild [73,85,86], and may be more correlated with the use of different micro-ecological niches than with the phylogenetic history of a group [74][75][76][77][78][87][88][89][90][91].
In contrast to vocal signals that are phylogenetically informative in birds and other mammals [90,[92][93][94], echolocation calls in Molossus did not appear to reflect phylogenetic patterns. Sensory convergence is considered to be one of the most important factors shaping the echolocation calls in bats [45,46], and might be influenced by prey type and size [91], foraging strategy, and habitat selection [49, 95,96]. Although species of Molossus have similar foraging strategies, they occupy an array of different habitats, such as tropical forests, savannahs, and urban areas [97], which might influence call structure. The prey perception hypothesis is unlikely to explain variability in frequencies since most bats have echolocation frequencies three times higher than required to detect their prey [49,98] and larger bats can detect both small and large prey [99]. However, prey perception might act as a selective force in other echolocation parameters, such as call duration and shape of the terminal slope [100].
2-Minimum Frequency
Species that rely on non-visual signals for orientation and foraging are more likely to be morphologically similar because the changes in these signals are not necessarily related to external morphology [101,102]. However, in Molossus, correlations between body size and call frequencies suggest concerted evolution for these characters. Larger bats have lower call frequencies than smaller bats in agreement with the size-frequency hypothesis proposed by Jones [102]. According to Darwin [103] the length of the vocal cords is related to overall body size, and therefore larger animals usually emit lower fundamental frequencies, which could also explain the correlation. Studies have also suggested that cochlear size and shape is also related to body size in mammals [104], and can explain variation in echolocation in bats [105] and whales [106]. Jakobsen et al. [49] suggested that this relationship between body size and echolocation call frequencies might be explained instead by a constraint imposed by the need to achieve a high directionality of the call, which is not necessarily related to body size. According to these authors smaller bats have shorter jaws, which limit the maximum emitter size. Nevertheless, a recent study using 86 species of vespertilionid bats did not find support for the directionality hypothesis, and demonstrated that forearm size (a proxy for body size) is correlated with echolocation call peak frequency, which was consistent with our results [107].
No other echolocation call parameter measured in our study is correlated with morphological traits in Molossus. These results suggest that morphological and echolocation call characters, other than size and frequency, are evolving independently. However, the duration of the call appears to be correlated with frequency, whereby longer calls have lower frequencies. Species-specific adaptations are often connected with environmental factors, and the evolution of both morphological and behavioural traits can be influenced by micro-ecological selection pressures [108]. In bats, differences in call structure coupled with slight morphological variation might act to minimize competition [99], and thus not be correlated with phylogenetic histories of these species.
The low levels of phenotypic divergence found within the three polyphyletic cryptic species complexes show that unequal trait diversification has evolved mostly through local adaptation or random walk. Indeed, the Brownian Motion model suggests that a significant fraction of both character sets is evolving stochastically, but not all the evolution of these character can be explained by random walks. These results suggest that evolutionary processes other than stasis and Brownian Motion, such as recent adaptation, might affect the evolution of those traits. These patterns explain why so many species within the genus are morphologically and behaviorally similar, regardless of their level of phylogenetic divergence. | 6,087.2 | 2020-09-24T00:00:00.000 | [
"Biology"
] |
Theoretical Investigation of an Alcohol-Filled Tellurite Photonic Crystal Fiber Temperature Sensor Based on Four-Wave Mixing
For this study, a temperature sensor utilizing a novel tellurite photonic crystal fiber (PCF) is designed. In order to improve the sensor sensitivity, alcohol is filled in the air holes of the tellurite PCF. Based on the degenerate four-wave mixing theory, temperature sensing in the mid-infrared region (MIR) can be achieved by detecting the wavelength shift of signal waves and idler waves during variations in temperature. Simulation results show that at a pump wavelength of 3550 nm, the temperature sensitivity of this proposed sensor can be as high as 0.70 nm/°C. To the best of our knowledge, this is the first study to propose temperature sensing in the MIR by drawing on four-wave mixing (FWM) in a non-silica PCF.
Introduction
Temperature sensors based on photonic crystal fibers (PCFs) have been a research hotspot in recent decades due to their small size and high temperature sensitivity [1]. In order to further improve the sensitivity, various methods have been adopted, such as based on fiber loop mirrors (FLMs) [2] and the modulation instability (MI) technique [3]. Materials other than the traditional silica dioxide have also been used to fabricate the PCFs [4], and temperature-sensitive materials, like oil [5], alcohol [6], liquid crystal [7], and silver nanowires [8,9], have been proposed for filling into the PCFs' air holes.
Four-wave mixing (FWM), as an intermodulation phenomenon in nonlinear optics, is an alternative method to enhance the temperature-sensing sensitivity [10][11][12]. FWM originates from third-order nonlinear polarization of light and has been widely applied in fields, including wavelength division multiplexing [13,14], magnetic field sensing [15], strain sensing [16,17], and generation of a supercontinuum spectrum [18,19], to name a few. In optical fibers, at the occurrence of FWM, changes in temperature would induce the signal wave and idler wave to shift the wavelength, which can be utilized for temperature sensing.
Recently, tellurite glass has attracted extensive attention due to its unique features such as wide infrared transmission range, high nonlinear refractive index, high insulation constant, low melting temperature, low glass transition temperature (T g ), and excellent third-order nonlinear optical properties [20]. Possessing a refractive index of~2.0 [21], tellurite glass fibers could support light transmission in near-infrared (NIR) and mid-infrared (MIR) regions, which is impossible for traditional transmission in near-infrared (NIR) and mid-infrared (MIR) regions, which is impossible for traditional silicon material due to its large loss. The tellurite material provides a good platform for the generation of FWM and offers an opportunity for temperature sensing in the MIR.
In this paper, a temperature sensor utilizing a tellurite PCF is designed based on FWM. In order to achieve MIR temperature sensing, we optimize the fiber parameters and design the fiber structure to be solid with the exception of three adjacent holes filled with alcohol in Mode Solution software. According to the amount of degenerated FWM, temperature sensing can be realized by detecting the wavelength drift of signal wave and idler wave at the change of temperature. Through the use of MATLAB software programming, the sensitivity of this temperature sensor can reach 0.70 nm/°C at a pump wavelength of 3500 nm. Being simple in structure and high in sensitivity, this sensor could be used for light induction through human body MIR radiation detection.
Structure design of the PCF
The tellurite PCF used for the proposed sensor consists of two kinds of tellurite materials: TeO2-ZnO-Na2O-P2O5 (TZNP) for the cladding and TeO2-LiO2-WO3-MoO3-Nb2O5 (TLWMN) for the fiber core and rods [22]. The component proportion of TLWMN for the rods is slightly different from that for the core, leading to a refractive index difference of 0.025. The core diameter is indicated by dc, and d1, d2, and d3 are the diameters of the first layer rods, the second layer rods and the third layer rods, respectively. Λ1, Λ2, and Λ3 are the sizes of rod spacing in the first layer, the second layer, and the third layer in Figure 1. . dc is the core diameter, d1, d2, and d3 are the diameter of the rods in the first layer, the second layer, and the third layer, respectively. Λ1, Λ2, and Λ3 are respectively the size of rods spacing in the first layer, the second layer, and the third layer.
To realize temperature sensing in the MIR, we need to generate FWM in the MIR, which requires the tellurite PCF to possess a dispersion curve with zero dispersion wavelength (ZDW) and be as flat as possible in the MIR. For this purpose, the dispersion curve is simulated within the MIR range of 2500 to 4000 nm by respectively changing the core diameter (dc) and the rod diameters (d1, d2, and d3). Λ1 is fixed to be 2 μm, Λ2 2√3 μm, and Λ3 4 μm. Figure 2a shows the calculated dispersion curves with variation of dc (2, 2.2, and 2.4 μm). Other parameters are as follows: d1 = 1.4 μm, d2 = 1.8 μm, d3 = 1.4 μm. We can see that with the increase of dc, the ZDW appears in the wavelength range of 2500 to 4000 nm while the dispersion curves become less flat. As a result, dc is 2.4 μm, which is the most desirable. Similarly, the dispersion curves with the variation of d1 (0.6, 1, and 1.4 μm) and d2 (1, 1.4, and 1.8 μm) are respectively shown by Figure 2b,c. Parameters d1 = 1.4 μm, d2 = 1.8 μm are selected. Figure 2d illustrates the curve with the variation of d3 while dc = 2.4 μm, d1 = 1.4 μm, d2 = 1.8 μm. It can be seen that the smaller the rod diameter, the flatter the dispersion curve. However, when d3 is too small, the dispersion curve brings out a few changes, which means that the PCF's binding force to the light is extremely weak, leading to more light being transmitted in the cladding and loss value Figure 1. Structure of tellurite photonic crystal fiber (PCF). d c is the core diameter, d 1 , d 2 , and d 3 are the diameter of the rods in the first layer, the second layer, and the third layer, respectively. Λ 1 , Λ 2 , and Λ 3 are respectively the size of rods spacing in the first layer, the second layer, and the third layer.
To realize temperature sensing in the MIR, we need to generate FWM in the MIR, which requires the tellurite PCF to possess a dispersion curve with zero dispersion wavelength (ZDW) and be as flat as possible in the MIR. For this purpose, the dispersion curve is simulated within the MIR range of 2500 to 4000 nm by respectively changing the core diameter (d c ) and the rod diameters (d 1 , d 2 , and d 3 ). Λ 1 is fixed to be 2 µm, Λ 2 2 √ 3 µm, and Λ 3 4 µm. It can be seen that the smaller the rod diameter, the flatter the dispersion curve. However, when d 3 is too small, the dispersion curve brings out a few changes, which means that the PCF's binding force to the light is extremely weak, leading to more light being transmitted in the cladding and loss value increasing. Therefore, there is a need to control the diameter of the third layer rods and ensure that PCF has sufficient restraint on light and a dispersion curve as relatively flat as possible. In order to achieve these two points, and taking advantage of comparing the dispersion and loss values obtained by changing the diameter of the third layer rods, d 3 was chosen as 1.4 µm.
Sensors 2020, 20, 1007 3 of 10 increasing. Therefore, there is a need to control the diameter of the third layer rods and ensure that PCF has sufficient restraint on light and a dispersion curve as relatively flat as possible. In order to achieve these two points, and taking advantage of comparing the dispersion and loss values obtained by changing the diameter of the third layer rods, d3 was chosen as 1.4 μm.
Structure Design of the Temperature sEnsor
Based on the fiber parameters of dc = 2.4 μm, d1 = 1.4 μm, d2 = 1.8 μm. d3 = 1.4 μm, Λ1 = 2 μm, Λ2 = 2√3 μm, and Λ3 = 4 μm, the tellurite PCF was designed as a temperature sensor. However, the tellurite glass has a thermo-optic coefficient of minus six orders of magnitude, which is not beneficial for temperature sensing. To overcome this problem, it was proposed to replace certain rods with air holes filled with temperature-sensitive material: alcohol. The thermo-optic coefficient of alcohol is ξ = Δn/ΔT = −4 × 10 −4 /°C, which is two orders of magnitude higher than that of tellurite glass. By replacing the solid rod with alcohol-filled air holes, the quantitative change of the refractive index of tellurite glass with temperature variation can be ignored during the calculation of the effective refraction index. The refractive index of alcohol can be calculated as a function of temperature. The original refractive index of alcohol is shown by [23]. On the basis of formula Relevant fiber parameters such as effective refractive index, nonlinear coefficient, and dispersion curve can be calculated using the finite element method.
Structure Design of the Temperature sEnsor
Based on the fiber parameters of d c = 2.4 µm, 3 µm, and Λ 3 = 4 µm, the tellurite PCF was designed as a temperature sensor. However, the tellurite glass has a thermo-optic coefficient of minus six orders of magnitude, which is not beneficial for temperature sensing. To overcome this problem, it was proposed to replace certain rods with air holes filled with temperature-sensitive material: alcohol. The thermo-optic coefficient of alcohol is ξ = ∆n/∆T = −4 × 10 −4 / • C, which is two orders of magnitude higher than that of tellurite glass. By replacing the solid rod with alcohol-filled air holes, the quantitative change of the refractive index of tellurite glass with temperature variation can be ignored during the calculation of the effective refraction index. The refractive index of alcohol can be calculated as a function of temperature. The original refractive index of alcohol is shown by [23]. On the basis of formula Relevant fiber parameters such as effective refractive index, nonlinear coefficient, and dispersion curve can be calculated using the finite element method.
In the next step, one, two, and three glass rods in the first layer are respectively replaced by air holes filled with alcohol, and ∆T = 0 • C. Figure 3a,b, gives information about the effective refractive index and the loss curves of these three replacement cases at wavelengths ranging from 2500 to 4000 nm. It is clear that in spite of increasing the number of alcohol-filled air holes, the loss presents no significant disparity. Figure 3c describes the calculated dispersion curves of the three replacement cases. It is to be noted that when adopting three alcohol-filled air holes, the dispersion curve is much flatter, and it is not closer to zero than the other two filling methods at 2500 to 3000 nm but closer to zero at 3000 to 4000 nm. In order to achieve FWM more easily, we chose filling three holes and controlled the pump wavelength in the range from 3000 to 3600 nm, which is the flattest part of this dispersion curve. Additionally, there are normal dispersion region and abnormal dispersion region which can make contributions to the comparison of temperature sensing in the different dispersion regions. Figure 3d demonstrates the calculated nonlinear coefficients whose values have an overall increase with the growth of the replacement number. For the degenerate FWM, g = √ (γP 0 ) 2 − (κ/2) 2 . When the phase-matching (PM) condition is satisfied, the theoretical maximum gain is γP 0 . As a result, the larger the nonlinear coefficient γ, the bigger the gain coefficient g. When replacing three adjacent rods in the first layer, the nonlinear coefficient is the largest, so is the gain at the satisfaction of PM condition. Therefore, by replacing three solid rods in the first layer with alcohol-filled air holes temperature sensing is accomplished. In the next step, one, two, and three glass rods in the first layer are respectively replaced by air holes filled with alcohol, and ΔT = 0 °C. Figure 3a and 3b, gives information about the effective refractive index and the loss curves of these three replacement cases at wavelengths ranging from 2500 to 4000 nm. It is clear that in spite of increasing the number of alcohol-filled air holes, the loss presents no significant disparity. Figure 3c describes the calculated dispersion curves of the three replacement cases. It is to be noted that when adopting three alcohol-filled air holes, the dispersion curve is much flatter, and it is not closer to zero than the other two filling methods at 2500 to 3000 nm but closer to zero at 3000 to 4000 nm. In order to achieve FWM more easily, we chose filling three holes and controlled the pump wavelength in the range from 3000 to 3600 nm, which is the flattest part of this dispersion curve. Additionally, there are normal dispersion region and abnormal dispersion region which can make contributions to the comparison of temperature sensing in the different dispersion regions. Figure 3d demonstrates the calculated nonlinear coefficients whose values have an overall increase with the growth of the replacement number. For the degenerate FWM, g = √(γP0) 2 − (κ/2) 2 . When the phase-matching (PM) condition is satisfied, the theoretical maximum gain is γP0. As a result, the larger the nonlinear coefficient γ, the bigger the gain coefficient g. When replacing three adjacent rods in the first layer, the nonlinear coefficient is the largest, so is the gain at the satisfaction of PM condition. Therefore, by replacing three solid rods in the first layer with alcohol-filled air holes temperature sensing is accomplished. As can be seen from the above, when the structural parameters of the tellurite PCF are d c = 2.4 µm, 3 µm, and Λ 3 = 4 µm, a flattened dispersion curve with one ZDW in the MIR can be obtained within the wavelength range of 2500 to 4000 nm. Owing to the insensitivity of tellurite glass to temperature, we take out the rods and replace them with air holes filled with alcohol. Through comparing three different filling methods, it is clear that the curve obtained from three adjacent glass rods replaced with air holes filled with alcohol in the first layer is relatively flat. In the following work, the pump wavelength from 3000 to 3600 nm was selected. With three holes filled, dispersion in this wavelength range is closest to zero over the entire range. It is beneficial to satisfy the PM condition and realize FWM in the MIR.
Results
On the basis of FWM theory, when the PM condition is satisfied, the change of temperature (∆T) can induce a shift in the signal wavelength, which can be utilized as a means to realize temperature sensing. PM condition is given by where κ and ∆k are, respectively, the nonlinear and linear phase mismatch, γ is the nonlinear coefficient, and P 0 is the sum of two pump powers. Different pumping wavelengths were selected within the range of 3000 to 3600 nm to detect the sensor's temperature sensitivity in the MIR, which were 3000, 3100, and 3550 nm. The pump power (P 0 ) is 100 W and the fiber length is 8 cm. Firstly, ∆T = 0 • C, and the optical signal gain intensity and PM diagram obtained at these three pumping wavelengths are given in Figure 4.
In each group of figures, the intersections of the red line and the yellow line meet the PM condition. These intersection points correspond to the maximum peak value on the left and right side of the blue curve in each group of images, which are idler gain peak and signal gain peak, respectively. At 3550 nm, two PM conditions are satisfied, which induce two pairs of signal waves and idler waves. We call the signal and idler wave near the pumping wavelength as the first-order signal/idler wave while those far away as the second-order signal/idler wave. The generation of two pairs of signal and idler waves could produce more nonlinear effects.
Sensors 2020, 20, 1007 5 of 10 As can be seen from the above, when the structural parameters of the tellurite PCF are dc = 2.4 μm, d1 = 1.4 μm, d2 = 1.8 μm, d3 = 1.4 μm, Λ1 = 2 μm, Λ2 = 2√3 μm, and Λ3 = 4 μm, a flattened dispersion curve with one ZDW in the MIR can be obtained within the wavelength range of 2500 to 4000 nm. Owing to the insensitivity of tellurite glass to temperature, we take out the rods and replace them with air holes filled with alcohol. Through comparing three different filling methods, it is clear that the curve obtained from three adjacent glass rods replaced with air holes filled with alcohol in the first layer is relatively flat. In the following work, the pump wavelength from 3000 to 3600 nm was selected. With three holes filled, dispersion in this wavelength range is closest to zero over the entire range. It is beneficial to satisfy the PM condition and realize FWM in the MIR.
Results
On the basis of FWM theory, when the PM condition is satisfied, the change of temperature (ΔT) can induce a shift in the signal wavelength, which can be utilized as a means to realize temperature sensing. PM condition is given by where κ and ∆k are, respectively, the nonlinear and linear phase mismatch, γ is the nonlinear coefficient, and P0 is the sum of two pump powers. Different pumping wavelengths were selected within the range of 3000 to 3600 nm to detect the sensor's temperature sensitivity in the MIR, which were 3000, 3100, and 3550 nm. The pump power (P0) is 100 W and the fiber length is 8 cm. Firstly, ΔT = 0 °C, and the optical signal gain intensity and PM diagram obtained at these three pumping wavelengths are given in Figure 4.
In each group of figures, the intersections of the red line and the yellow line meet the PM condition. These intersection points correspond to the maximum peak value on the left and right side of the blue curve in each group of images, which are idler gain peak and signal gain peak, respectively. At 3550 nm, two PM conditions are satisfied, which induce two pairs of signal waves and idler waves. We call the signal and idler wave near the pumping wavelength as the first-order signal/idler wave while those far away as the second-order signal/idler wave. The generation of two pairs of signal and idler waves could produce more nonlinear effects.
(a) Furthermore, the four graphs in Figure 5 are the curves of signal waves and idler waves moving as temperature changes. Figure 5a illustrates the wavelengths shift when the pump wavelength is 3000 nm. At this time, the dispersion value is in the abnormal dispersion region. The calculated sensitivities of the signal wave and the idler wave are 0.46 and 0.23 nm/°C, respectively. Similarly, Figure 5b shows that the pump wavelength is 3100 nm, and the dispersion value is in the normal dispersion region. As can be seen from the figure, the temperature sensitivity of the signal wave is 0.50 nm/°C, while the sensitivity of the idler wave is 0.30 nm/°C. As shown in Figure 5c and 5d, the dispersion value is in the anomalous dispersion region. After calculation, the temperature sensitivity of the first-order signal wave is 0.70 nm/°C, and that of the idler wave is 0.29 nm/°C. The temperature sensitivities of the second-order signal wave and idler wave are 0.41 and 0.17 nm/°C, respectively. Furthermore, the four graphs in Figure 5 are the curves of signal waves and idler waves moving as temperature changes. Figure 5a illustrates the wavelengths shift when the pump wavelength is 3000 nm. At this time, the dispersion value is in the abnormal dispersion region. The calculated sensitivities of the signal wave and the idler wave are 0.46 and 0.23 nm/ • C, respectively. Similarly, Figure 5b shows that the pump wavelength is 3100 nm, and the dispersion value is in the normal dispersion region. As can be seen from the figure, the temperature sensitivity of the signal wave is 0.50 nm/ • C, while the sensitivity of the idler wave is 0.30 nm/ • C. As shown in Figure 5c,d, the dispersion value is in the anomalous dispersion region. After calculation, the temperature sensitivity of the first-order signal wave is 0.70 nm/ • C, and that of the idler wave is 0.29 nm/ • C. The temperature sensitivities of the second-order signal wave and idler wave are 0.41 and 0.17 nm/ • C, respectively. Figure 6 describes the function of signal wavelength with the temperature at the pump wavelengths of 3000, 3100, and 3550 nm. The obtained signal crests have a good linear relationship with the temperature (∆T), so the signal wavelengths which shift with the changes in temperature wavelengths can be utilized in temperature sensing. From the above theoretical analysis, it can be concluded that when pump wavelength is 3550 nm, the temperature sensitivity of the proposed sensor is the highest, which can reach 0.70 nm/ • C at ∆T changing from −40 to 60 • C.
Sensors 2020, 20, 1007 8 of 10 Figure 6 describes the function of signal wavelength with the temperature at the pump wavelengths of 3000, 3100, and 3550 nm. The obtained signal crests have a good linear relationship with the temperature (ΔT), so the signal wavelengths which shift with the changes in temperature wavelengths can be utilized in temperature sensing. From the above theoretical analysis, it can be concluded that when pump wavelength is 3550 nm, the temperature sensitivity of the proposed sensor is the highest, which can reach 0.70 nm/°C at ΔT changing from −40 to 60 °C. Table 1 compares the performance of our proposed temperature sensor with those reported previously. It is clear that the FWM-based temperature sensor has a sensitivity much higher than that of [4], [14], [24,25], and is only lower than that of [26]. However, the detection range of our work is −40~60 °C, which is five times that of [26] (20~40 °C). Additionally, the fiber proposed in [26] is a gold-coated PCF, and the thickness of its gold film cannot be controlled accurately in real practice, which may greatly influence its temperature sensitivity. This study clearly shows the efficiency of FWM in a tellurite PCF for temperature sensing, which not only obtains higher sensitivity but also realizes the temperature sensing in the MIR. Table 1 compares the performance of our proposed temperature sensor with those reported previously. It is clear that the FWM-based temperature sensor has a sensitivity much higher than that of [4,14,24,25], and is only lower than that of [26]. However, the detection range of our work is −40~60 • C, which is five times that of [26] (20~40 • C). Additionally, the fiber proposed in [26] is a gold-coated PCF, and the thickness of its gold film cannot be controlled accurately in real practice, which may greatly influence its temperature sensitivity. This study clearly shows the efficiency of FWM in a tellurite PCF for temperature sensing, which not only obtains higher sensitivity but also realizes the temperature sensing in the MIR.
Conclusions
In this paper, by carefully designing the fiber parameters of a tellurite PCF and further improving the fiber structure, a temperature sensor with high sensitivity in the MIR has been designed. It has a solid structure except for three adjacent holes filled with alcohol to avoid the difficulty of selective filling in the experiment. Unlike traditional fiber-optic temperature sensors, we draw on the FWM to realize temperature sensing. When pumped at 3550 nm in the MIR, this sensor achieves a sensing sensitivity as high as 0.70 nm/ • C. It can be applied in fields such as fingerprint unlocking and the body radiation photosensitive system. Due to the limitation of experimental conditions, this work only provides the theoretical simulation and analysis, which hopefully lays a good foundation for the development of MIR optical sensing devices in the future. | 5,688.2 | 2020-02-01T00:00:00.000 | [
"Physics"
] |
Simulation and design of ECT differential bobbin probes for the inspection of cracks in bolts
All Various defects could be generated in bolts for a use of oil filters for the manufacturing process and then may affect to the safety and quality in bolts. Also, fine defects may be imbedded in oil filter system during multiple forging manufacturing processes. So it is very important that such defects be investigated and screened during the multiple manufacturing processes. Therefore, in order effectively to evaluate the fine defects, the design parameters for bobbin-types were selected under a finite element method (FEM) simulations and Eddy current testing (ECT). Especially the FEM simulations were performed to make characterization in the crack detection of the bolts and the parameters such as number of turns of the coil, the coil size and applied frequency were calculated based on the simulation results.
Introduction
Oil filters for a use for vehicular parts are being used under the high temperature and cooling of the engine and defects could be generated under repetition in the operation environment as well as shape changes such as very high internal loss for the bolts [1]. Also, it is impossible to check the defect caused inside the bolt as shown in figure 1 visually. These defects could make engine efficiency dropped when operating the engine, and may lead to accelerated wear and damage to the engine parts by many abrasive particles contained in the lubricating oil, if not prevented in advance upon finding in its initial stage. Such things may affect the life and efficiency of auto engine leading to serious economic problems. Therefore, as an applicable method for detecting such fine surface defects of a few hundred μm inside the bolt, ECT techniques are known as the best among the non-destructive evaluation methods [2][3][4][5][6][7][8][9]. Oil filter bolts have high possibility of causing wears in the lubrication phase. Types of wear are of fusion, wear and burning respectively. By high temperature, cooling and high speed operation of engine, scant oil flow, foreign material from outside and particles in the oil may cause wear [2]. Defects developed on the surface were classified into circumferential crack, axial crack and angular crack respectively, and the work in this study has focused on the development of differential bobbin eddy current sensor that is applicable to detect the circumferential crack.
In this study, standard specimens with a rod type were prepared and differential bobbin eddy current sensor was designed, which can detect fully circumferential cracks on the surface of the specimen. Using the designed sensor, experiment was conducted for the standard specimen, and the results of the experiments were compared against each other. It was found that the differential bobbin eddy current sensor thus developed was appropriate for detecting cracks on the bolt.
Eddy current testing
When AC current flowing coil is brought near to the conductor of test specimen, the primary magnetic field generated by the current flowing in the coil induces secondary magnetic field on the conductor. An eddy current coil developed from AC current can be obtained roughly by AC Circuit that includes resistance and inductance. Impedance, Z, the ratio between the voltage and current, is obtained by use of Ohm's Law. Impedance, Z, is shown as follows.
=
(1) When an AC current flow through coil inductance, L, at frequency f, impedance of coil is identical to the induced reactance of the circuit, X_L. Likewise, impedance of AC current flowing in the coil of resistance R and inductance L at operating frequency f is shown as the following formula.
Magnitude of impedance in the coil and phase angle can be expressed as follows.
Standard penetration depth
When eddy current probe is placed on the test specimen, the eddy current induced into the test
Wheatstone bridge
Two general systems are used; i.e. electrical bridge circuit and filter circuit. These two systems make impedance, Z, the value related with basic signal, electrically equilibrium. As shown in figure 2, Wheatstone Bridge was employed in the experiment. Formula of equilibrium is as follows. Once the condition of Eq. (6) is reached, the system becomes equilibrium and the volt meter reads zero (0). Therefore, phase value and amplitude are measured using Wheatstone Bridge [5].
Eddy current design and manufacture
In this study, the ultimate objective can be realized by use of interior eddy current sensor when detecting the surface defect developed in the oil filter bolts. Interior eddy current sensor is structurally identical to the exterior eddy current sensor for detecting exterior surface defect in the simulation test specimen as shown in figure 3. Design parameters required for designing eddy current sensors can be thought of coil wire diameter, coil gap, coil width, height, coil turns, lift-off, frequency and others as shown in figure 4(a). In this work, coil gap, coil width and frequency were selected as design parameters. For coil wire, coil of 0.1 mm in diameter was used, with 100 coil turns and 0.5 mm lift-off. With consideration of other parameters, a most optimum eddy current sensor was developed. Lift-off is one of important design parameters in designing eddy current sensor. Smaller lift-off makes the magnetic field induced in the test specimen stronger. So, lift-off was set to 0.5 mm. Using the bobbin eddy current sensor thus manufactured, defect signal of fully circumferential cracks were detected by moving the test specimen in axial direction. Experiment was conducted for change of test frequency and for change of coil width and gap.
Making standard test specimen
Simulating oil bolt of 25 mm inner diameter as shown in figure 2, rod shape standard test specimen of 25 mm outer diameter and 231 mm length was made for inspecting exterior surface defects. Among the defects that may be generated outside, circumferential cracks of 0.5 mm width and 0.1, 0.2, 0.4, 0.6, 0.8 and 1 mm depths respectively that may be possible to inspect by the bobbin were artificially notched by EDM on the test specimen. Standard Test Specimen was of AISI 1045 Steel.
Eddy current test systems
In order to evaluate the signal property of the differential bobbin eddy current sensor applicable to detect defects in the test specimen, eddy current system was developed by application of Wheatstone Bridge as shown in figure 5(a).
In order to make Bobbin Eddy Current Sensor possible to evaluate defect detecting property of test specimen having fully circumferential cracks, frequency generating system has employed Function Generator of Tektronix having frequency band of 100 MHz. Through these Lock-in amplifier and Oscilloscope, change of amplitude value, i.e. amplitude change of impedance and change of phase value could be obtained. Experiment instruments used for sensor design and performance evaluation were as shown in figure 5
Evaluation of ECT property by change of frequency
For frequencies used in this work, amplitude and phase value were obtained as test results by lock-in amplifier and oscilloscope as shown in figure 6. The experiment was carried out with frequencies of 10, 20, 30, 50 and 100 kHz, and values of amplitude and phase were shown in figures 6(a). It could be observed that deeper the depth of defect in the specimen made bigger the change of amplitude and phase value. Frequencies used in the experiment were overlapped as shown in figures 6(a) and 6(b). In figure 6(a), it was found that signals from 20 kHz and 30 kHz were of similar strength each other and stronger than from other frequencies. This indicates that these frequencies are capable to detect defects of 0.2 mm depth since the values of 2δ for frequency of 20 and 30 kHz are 0.29 mm and 0.23 mm respectively in table 1.
Therefore, in this study, frequencies of 20 kHz and 30 kHz could be selected as finally applicable for depth of defect minimum 0.2 mm [10]. In this study, frequencies of 10, 20, 30, 50 and 100 kHz were used in the experiment, which is the frequency range decided by Eq. (5) and table 1. With changing of frequencies, surface defect signals from the test specimen were obtained. As shown in figure 7, differences of amplitude and phase value by change of defect depth could be observed. From the experiment, it could be found that frequency of 50 and 100 kHz had lower sensitivity of detecting defect signals than the other 3 frequencies of 10, 20 and 30 kHz.
In figure 7, 20 kHz and 30 kHz showed similar level of change, and phase value became smaller by the order of 10, 20, 30, 50 and 100 kHz. Even though 10 kHz made the biggest change of phase value, it was excluded in further experiment since this is not adequate for detecting defect of 0.2 mm depth, the final objective of this study. So, it was found that frequency of 20 kHz could be a best case for detecting cracks due to higher sensitivity.
Evaluation of ECT Characteristics by Change of Coil Gap
Experiment was conducted with change of Coil Gaps for test frequency of 20 kHz and for coils of 1 mm and 2 mm width respectively. With change of Coil gaps in 0.5 mm levels by 3 mm, 3.5 mm, 4 mm and 4.5 mm respectively, change of signal characteristics for 4 coil gaps could be obtained as shown in figure 8, and made analysis.
When compared at test frequency of 20 kHz as shown in figure 9, ∆Phase value of coil gap for coil width of 1 mm and 2 mm respectively, it was found that the ∆Phase value was the biggest at coil gap of 3.5 mm, and the value became bigger when coil width was 1 mm [10].
Evaluation of ECT Characteristics by changes of frequency for coil gap
Since good performance was shown with 1mm coil width and 3.5 mm coil gap in Charter 3.5, values analysed there were made base for obtaining the graph of figure 10. In order to find which frequency shows bigger phase value change among frequencies of 20 kHz and 30 kHz, graph of figure 10 was obtained. From figure 10, bigger change of phase value was observed for frequency of 20 kHz than that of 30 kHz [10].
ECT Sensor Design Parameters based on FEM Simulation
ECT sensor design parameters were set up in order to simulate the signals of the eddy current based on FEM-based eddy current simulation as shown in figure 11 and simulation was carried out for the defect signal of eddy current probe. Exciter and receiver were utilized as shown in figure 11(a) and the variation of signals in the eddy current was simulated to arrow direction in bolts. Figure 11(b) shows a case of mesh generation in ECT simulation. Figure 12 shows the simulation results of the distribution and signals of the eddy currents for the bolt internal defects with 0.15 mm, 0.2 mm, 0.5 mm and 1 mm in defect depth under the difference coil gaps (3.5 mm, 4.0 mm and 4.5 mm). It was found that higher impedance variations of Lissajous plane (relation between resistance and reactance) were generated in figure 12(a). This result in the gap of 3.5 mm is the most reasonable signal and it could be applied to design parameter in differential probe. Therefore, the design parameters for bobbin-types were optimized under Eddy current FEM simulations.
Performance evaluation of bobbin eddy current sensor
From the results of above experiment, optimum coil width, coil gap and operating frequency were selected. Based on these selected parameters, differential bobbin probe was designed and manufactured finally as shown in figure 13. Performance of the eddy current sensor designed under the same experimental condition done as above using the designed probe was verified. Result of performance evaluation for the bobbin eddy current sensor is shown in figure 14. Figure 14(a) shows resistance of differential bobbin eddy current sensor at frequency of 20 kHz, and (b) is reactance. From (a) and (b) and Eqs. (2) and (3), graph plotted with resistance in X-axis and reactance with Yaxis gives impedance plane of figure 14(c). As can be seen in figure 14(c), magnitude of impedance becomes bigger by bigger size of defect. Therefore, it was found that the differential bobbin eddy current sensor designed through the process of selecting design parameters as above is suitable for detecting the target surface defect of 0.2 mm depth at frequency of 20 kHz [10].
Conclusions
In this study, a differential bobbin eddy current sensor for detecting surface defects on the test specimen was developed through experiment, and the base for manufacturing interior bobbin eddy current sensor was worked out. Introduced conclusions are as follows; It was found that the phase value and amplitude becomes gradually bigger with depth of the test specimen deeper at the respective test frequencies as based on the experiment for standard test specimen of 25 mm outside diameter AISI 1045 Steel and differential bobbin probe having 0.1 mm diameter and 1 mm coil width in 100 turns of winding conducted with 3.5 mm coil gap at test frequency of 10, 20, 30, 50 and 100 kHz respectively. In an effort to make the frequency band to be applied on the differential bobbin eddy current sensor narrower, pertinent amplitude and phase value were obtained by use of lock-in amplifier. Frequency of 20 kHz was found most adequate followed by 30 kHz next, Frequency of 100 kHz was found making defect signal saturated for defect larger than 0.2 mm. Among coil width of 1 mm and 2 mm, change of phase value was bigger in case of 1 mm, and 3.5 mm coil gap was the best for change of phase value among the 4 coil gaps. With the defect becoming bigger, 20 kHz frequency was found showing better sensitivity for signal characteristics in both cases of coil width. Therefore, it was decided to select the coil of 1 mm width and 3.5 mm gap as the coil parameter for final design. The coil thus fabricated is judged applicable to experiment at frequency of 20 kHz. Parameter finally selected in this study was 1 mm coil width, 3.5 mm coil gap and 20 kHz operating frequency. Under this condition, it was possible to design differential bobbin eddy current sensor suitable for detecting 0.2 mm defect successfully, the final objective defect. FEM simulations show the impedance becomes bigger in the impedance plane with depth of defect going deeper as ECT experimental results. It is possible to design interior eddy current probe for inspecting oil filter bolt based on the results and ECT simulation of this work. | 3,393.6 | 2015-12-09T00:00:00.000 | [
"Materials Science"
] |
Analysis of the motivation of college students' participation in governance: endogenous demands and exogenous promotions.
. The participation of university students in school governance is an inevitable requirement for building a modern higher education community. As participants in this community, students undoubtedly play an important role in governance, whether they are "producers and consumers" in the economic sense, "rights holders and obligation implementers" in the legal sense, or "responsible persons" in organizational behavior. To study the mechanism of student participation in university governance, it is necessary to first explore the motives behind student participation in university governance, delve into its logical starting point, and thus construct a mechanism system. The participation of college students in university governance has both internal and external driving factors. Constructing a motivation mechanism for student participation in university governance, consisting of subjective attitude, participation needs, self-efficacy, participation system, democratic communication, and information feedback, helps to comprehensively understand the internal correlation of the motivation factors for student participation in university governance, and provides useful theoretical support or reference basis for strengthening university governance. The motivation of students’ participation in university governance should be explored from the endogenous demands of students and the exogenous driving force of whole external environment.
Human Subjective Motivation and Its Characteristics
The main driving force of a person is a collective force, which is a combination of subsystems that interact and influence each other within the body.From a psychological perspective, human subjectivity can be distinguished from aspects such as cognition, emotions, will, and ideas.Marxist humanistic thought points out that human subjectivity usually refers to the basic attributes that a subject should possess as a subject, which is the qualitative regulation of why a subject becomes a subject, and the agency, autonomy, and autonomy that the subject *Correspondence author<EMAIL_ADDRESS>as a person in thought and action [1] [2].Consciousness is a process in which subjects continuously improve their cognition in situations.Firstly, self-awareness is manifested as the subject's self-perception and awareness, and on this basis, forms identification and grasp of self-stability; Secondly, consciousness is manifested as the subject's cognitive consciousness of the world and other things; Selectivity is the subject's response to 'what to do', 'how to do' How to do it Choice.In practice, the selectivity of the subject depends on a comprehensive judgment that may work together with human goals and tendencies in reality.For students, selectivity is reflected in the various functions that universities possess, and students make value judgments and choices based on their own needs among multiple options.Students' selectivity not only has directional significance for higher education reform, but also reflects their understanding of university governance Participation in decision-making on major university affairs.Creativity has a higher level of transcendence than self-awareness and selectivity, and is also the highest manifestation of subjective initiative.Creativity is a conscious correction of reality and a modification of conventional choices.
The psychological process of students in university governance can be roughly divided into four stages: first, the cognitive stage.Students have a fundamental understanding of the correlation between the governance process of universities and their own needs, and believe that they have the willingness, ability, and necessity to participate in their cognitive state.Secondly, the stage of psychological consciousness.Students should establish a sense of "ownership" based on their cognition, combine the development of the school with their own, and realize their own and others' judgments.Once again, the goal selection stage.Students' psychology has an overall goal orientation and selective grasp of participating in university governance, such as the purpose, channels, and expectations of participation.Students' choices generally follow the "economic" principle of seeking advantages and avoiding disadvantages, maximizing the benefits, and minimizing the disadvantages.Finally, the stage of subject creation.Another undeniable reason for students' participation in university governance is their enthusiasm (including irrational factors such as interests and hobbies), which spontaneously generates continuous improvement and motivation for the content, methods, and effectiveness of participation.
Internal Motivation Factors of Student Subjects
Taking the characteristics of human subjectivity as the logical starting point, combined with consciousness, selectivity, and creativity, and the individual subjectivity in the field dynamics theory, the corresponding development forms three generative elements of students' intrinsic motivation: subjective attitude, participation needs, and self-efficacy.The generative mechanism of the motivation is shown in the figure.
Subjective attitude
Attitude, in the final analysis, is a motivational mechanism that refers to the psychological tendency formed by the subject's cognition, evaluation, and value judgment of the object.It is also a potential behavioral tendency, manifested as a state of readiness and continuity of behavior.The attitude of students towards participating in university governance refers to their understanding and evaluation of university governance, as well as their attitude choices based on value judgments.From the perspective of educational organizational behavior, attitude is composed of cognitive component, affective component, and behavioral component.The cognitive component is a narrative description of a certain belief, serving as the basis for the emotional component.Emotional components are a combination of stable emotions and persistent feelings in attitudes, and emotions can affect behavioral outcomes.The behavioral tendency component is the individual's intention to take action towards people, things, and things.Among the student body, the degree of recognition, acceptance, and recognition of university governance by students determines their participation in university governance.
Participation needs
The theory of field dynamics holds that an individual's needs are the driving force for stimulating behavior, and through action, the needs are met.Sun Zhenxiang, Zhang Danqing.Research on the Generative Motivation of Teacher Informatization Leadership -Analysis Using Field Dynamics Theory [3].From the perspective of educational organizational behavior, the occurrence of motivation is based on needs, which is a supportive relationship between the object and the subject, and a state of psychological will or demand satisfaction.Need reflects the dependence of individual survival and development on external conditions [4].The need for student participation in university governance refers to the motivation for students to make decisions related to their expectations and willingness towards the university, and is the degree of their expected participation in university governance.The driving factors for participation are based on the Achievement Motivation Theory.This theory was proposed by American scholar David McClelland in the 1950s, CDavid C. McClelland, and focuses on the needs of achievement, power, and affiliation of group members.The need for achievement refers to a person's pursuit of outstanding standards in their career or the internal drive to achieve success.Achievement requires a strong sense of motivation, manifested in the pursuit of success, perfection, and excellence towards specific goals.McClellan found that those in need of achievement may have a strong interest in their job performance.Power refers to the ability to control others' behavior in one way rather than another, and power needs to be divided into personal power and social power.Personal power and social power have different fields of action, with the former only applied to personal life and the latter applied to public life.Due to the social nature of universities, the power of students to participate in university governance can be seen as a form of social power.People who have a strong need for power tend to focus more on prestige and its impact on other stakeholders.The need for belonging lies in the recognition of a community, emotional dependence, and the need to establish good and harmonious interpersonal relationships with others.This is an important condition for maintaining social interaction and interpersonal harmony.People who need belonging tend to pursue the construction of a community, a harmonious environment, and an atmosphere.
Self efficacy
The 'identity' of students in university governance is an important factor in promoting their participation.Bert Bandura stated in his book "Self efficacy: Training in Control" that "in the operational mechanisms of organizations, the belief in individual efficacy is the most important and universal.People's expectation of the effectiveness of their actions is the prerequisite for generating motivation."The driving factors of students' participation ability are constructed through self-efficacy, also known as Social Cognitive Theory Based on it, it refers to an individual's belief in their ability to complete tasks.Self efficacy is limited to specific fields and is an expectation of successfully completing a task and behavior, which affects our motivation to complete tasks or participate in activities.Self efficacy focuses on perceived competence, which can be divided into outcome expectations and efficacy expectations.Outcome expectation refers to a belief that a specific behavior leads to a specific outcome, that is, the expectation of success.Efficiency expectation refers to the belief that we possess the knowledge and skills necessary to complete a task.Students with high expectations for efficiency and process are more confident in completing learning tasks, persevering in facing difficulties, and fully motivated to learn; Students with low expectations for efficiency and low expectations for process are more likely to be discouraged when faced with failure, leading to a lack of motivation and unwillingness to learn.Starting from the Marxist theory of human subjectivity and Lewin's theory of field dynamics, an endogenous driving force mechanism for students' participation in university governance is formed.
The "subjective attitude", "participation needs", and "self-efficacy" of students are the endogenous driving factors for students' participation in university governance, and the driving force system formed is shown in the figure.Abraham H. Maslow pointed out that human motivation must be achieved under the conditions of having relationships with the environment and others.Any theory of motivation should not only include the body itself, but also the decisive role of environment and culture [5].School is the main field for students' learning and life, and the school field dynamics in university governance refer to the sum of forces that can provide students with the ability to participate in university governance through the interaction of various components in the governance process.The motivation for students' participation in university governance must rely on the "field" of the school.Understanding and grasping the "dynamic field" in which it is located, that is, the "living space" in the field dynamics theory, is the key to analyzing the factors that generate students' external motivation to participate in universities.
Lewin believes that a person's actions in a certain time and space are in a certain external environment.The psychological life space formed around self-awareness (LSP).It is determined by the psychological dynamic field, which integrates the behavioral subject with the environmental object to form a dynamic field, which has a practical impact on human behavior and jointly affects it.The living space mode is shown in the figure .In the school field where students participate in university governance, the intrinsic motivation of the student subject constitutes the goal (E) of their participation in university governance.Through the influence and interaction of the environment, the psychological field and movement path between the student subject and the school environment can be constructed, as shown in the figure.In Figure 4, the living space pattern is composed of individuals (P) and environment (CE), and the environment is a psychological environment composed of quasi physical, quasi social, and quasi conceptual facts; 1-P represents the personal domain of a person composed of needs, desires, and consciousness; P-M represents the perceptual motor area; P represents an individual composed of 1-P and P-M.In Figure 4, individuals (P) and environmental CE interact and interact with each other, forming a collective force that drives the subject to take action, thereby breaking through boundaries and entering the action behavior or spatial field (S) to achieve established goals (CG).From this, it can be concluded that the participation of students in university governance requires the interaction between the internal motivation of the student subject and the external motivation of the school field.The combined force of internal and external motivation is the key factor determining whether the student subject can participate in university governance actions.
The Dynamic Generative Factors of School Field
Extracting relevant variable indicators from the C.L.E.A.R model of citizen participation in Stoke, this study divides the exogenous driving factors of student participation in university governance in the school field into three aspects: participation system, democratic communication, and information feedback.The driving system is shown in the figure, and each driving subsystem is closely related, interdependent, and collaborative.Based on the previous definition of the concept of student participation in university governance, this study limits the governance of universities within the school field to the strategic level, management affairs level, personnel level, curriculum teaching level, and student affairs level.
Participation system (1) Democratic system
As a criterion for designing the relationships between various subjects within an organization and a constraint for regulating their relationships, institutions can be seen as the inherent essence of modern university governance [6].A sound democratic system is an important external factor in ensuring the realization of students' right to participate in universities.Anna Planas found in her survey of obstacles to student participation in university governance that 48% of students believe that the main reason for the lack of motivation to participate in governance is the lack of systems and mechanisms.The institutional regulations of university governance can be formulated from two aspects: relevant national laws and regulations and internal rules and regulations of universities.In the real governance of universities, students' right to participate is not guaranteed as it should be in the internal and external systems.On the one hand, there is a lack of student rights in laws and regulations.On the other hand, there is a lack of expression of students' participation rights, operational rules, and procedural provisions in the university charter, which is guaranteed by the internal system of the university.Therefore, a sound democratic system is an important factor in promoting students' participation in university governance.Whether students are given governance power and the intensity of power will have a significant impact on the effectiveness of university governance.
(2) Incentive policies The most crucial factor in the motivation mechanism is motivation, which reflects the interaction between the incentive subject and the incentive object through a rational system, creates various conditions that meet the needs of the incentive object, stimulates their work motivation, and generates a specific behavioral process to achieve goals.Motivation needs to run through the entire process of the subject's motivation generation, behavioral process, and the achievement of behavioral goals.Motivation includes three aspects: firstly, the system of initiating behavior.Behavioral initiation refers to setting inducing factors based on the needs of the motivating object -reward resources that mobilize the object's enthusiasm.Based on the investigation, analysis, and prediction of the object, a series of reward forms owned by organizations such as spiritual rewards and material rewards are established.Secondly, a behavior oriented system.It is the direction of effort, behavior, and values that an organization expects from its members.The purpose is to enable individual behavior to achieve both personal and organizational goals.Thirdly, the system of behavioral constraints.A restrictive system is a system design that restricts the behavior norms, work effort level, and time and space scope of incentive objects.This includes educating members on values, work attitudes, and behavioral styles to match their actions with organizational style.
Democratic communication
Communication is expression, and the establishment of communication mechanisms among diverse subjects is an important guarantee for achieving university governance.Gong Yizu.University Governance Structure: The Cornerstone of Modern University Systems [7].Good democratic communication can ensure the smooth expression of students' interests, and effective expression of students' rights and interests will increase their motivation to participate in university governance.
Therefore, establishing a smooth communication mechanism in universities is a necessary means to promote students' participation in university governance.Students express their interests and demands with other organizational departments through multiple channels and methods, which requires them to participate in the entire process of communication before, during, and after relevant decisions.Therefore, smooth democratic communication includes two aspects: first, effective and smooth participation channels.
At the operational level of university governance mechanisms and participation channels, the absence of democratic forms of communication will inevitably restrict students' normal voice.Secondly, there are diverse ways to participate.Exploring multiple forms of democratic communication channels is an important part of promoting students' participation in university governance.The carrier of student participation must optimize its own functions, create a dynamic mode of student participation, build an information based participation channel, and provide an organizational platform with clear goals and sound mechanisms for student interest expression and power exercise.
Thirdly, the organizational atmosphere is harmonious and the environment is equal.Campus organizational atmosphere refers to the relatively stable and sustained environmental characteristics in a school, which are based on student experience and collective behavior perception [8].At present, some universities select students as principals and student contacts, reflecting students' opinions and demands through their contacts.However, in practical operation, student assistants believe that they cannot escape the dilemma of hierarchical authority and always lack confidence in the communication process.Therefore, democratic communication is a key link for students to truly "open their hearts".In the democratic negotiation process of university governance, the student body and other school authorities should pay attention to the equality of negotiation status, adopt a frank communication attitude, and ensure the transparency and openness of negotiation information and results.
Information feedback
Human interaction requires interactivity, which means that one's voice needs to be valued.This emphasis is reflected in listening and giving back, which provides motivation for participation.Looking at the current situation, the feedback from universities on student information presents a state of "silent" or "low voice", which affects students' enthusiasm for participating in university governance.For example, in the daily management of universities where students participate, democratic symposiums, president reception days, president mailboxes, and other methods are important ways for students to participate in university governance, aiming to widely solicit students' opinions and suggestions.
However, many management practices result in students' participation and non-participation being the same, as well as their participation and lack being the same.Students' consultation and suggestions on plans, as well as the planning of school development goals After participation in reform and development, there was no information response or result feedback.The theory of citizen participation in public policy holds that if managers ignore the need for public influence and only value the process of citizen participation, while neglecting the sharing of citizen participation influence, they will face the risk of participation failure.
At this time, the public will feel a lack of motivation and unwilling to participate, and find that their own influence is limited, so they will no longer have fantasies about participation.Researchers believe that the impact of information feedback on participation in university governance can take two forms: firstly, opinion adoption.The proposals and opinions of stakeholders are adopted by managers, bringing change to the school and contributing their own value to the school.Secondly, timely information feedback.Although the opinions of stakeholders may not have been adopted, managers provide feedback and explanations.In the process of student participation in university governance, information exchange should also form a closed loop of "return and return", and the construction and improvement of information feedback mechanisms are important support.
Combining Stoke's citizen participation C.L.E.A.R model, starting from the blue dimensions of the school field: participation system, democratic communication, and information feedback, this paper analyzes the external driving factors of student participation in university governance, and explores the external driving mechanism of student participation in university governance, as shown in Figure 6.The democratic system in the participation system grants students the right to participate in university governance through the formulation of relevant policies, defining the scope and boundaries of participation; The incentive system is specifically divided into "behavior initiation system", "behavior guidance system", and "behavior restriction system", which runs through the entire process of student subjects from generating participation motivation, participating in behavior, to completing participation results; Democratic communication and information feedback serve as guarantees for students' participation in university governance, forming an external driving mechanism for students' participation in university governance in the school field.
Fig. 1 .
Fig.1.The formation mechanism of students' internal motivation in university governance
Fig. 5 .
Fig. 5.The university dynamic system in university governance
Fig. 6 .
Fig. 6.The formation mechanism of students' external motivation in university governance | 4,589.2 | 2024-01-01T00:00:00.000 | [
"Education",
"Economics",
"Political Science"
] |
Possible ground states and parallel magnetic-field-driven phase transitions of collinear antiferromagnets
Understanding the nature of all possible ground states and especially magnetic-field-driven phase transitions of antiferromagnets represents a major step towards unravelling the real nature of interesting phenomena such as superconductivity, multiferroicity or magnetoresistance in condensed-matter science. Here a consistent mean-field calculation endowed with antiferromagnetic (AFM) exchange interaction (J), easy axis anisotropy (γ), uniaxial single-ion anisotropy (D) and Zeeman coupling to a magnetic field parallel to the AFM easy axis consistently unifies the AFM state, spin-flop (SFO) and spin-flip transitions. We reveal some mathematically allowed exotic spin states and fluctuations depending on the relative coupling strength of (J, γ and D). We build the three-dimensional (J, γ and D) and two-dimensional (γ and D) phase diagrams clearly displaying the equilibrium phase conditions and discuss the origins of various magnetic states as well as their transitions in different couplings. Besides the traditional first-order type one, we unambiguously confirm an existence of a second-order type SFO transition. This study provides an integrated theoretical model for the magnetic states of collinear antiferromagnets with two interpenetrating sublattices and offers a practical approach as an alternative to the estimation of magnetic exchange parameters (J, γ and D), and the results may shed light on nontrivial magnetism-related properties of bulks, thin films and nanostructures of correlated electron systems. A mathematical method for better understanding the exotic properties of magnetic materials is demonstrated by researchers in China. Hai-Feng Li from the University of Macao has developed calculations that predict the way so-called correlated matter can change from one state to another. Correlated materials are so called because the electrons within them all interact with each other to give the substance extraordinary properties. These include superconductivity, multiferroicity and large magneto-resistance effects. Applying a magnetic field to such materials can make it switch from one of these states, or phases, to another. Li’s theoretical framework combines both cooperative and competitive electron interactions to predict these phase changes. With this, a map of all equilibrium phases can be derived, and this provides insight into the origins of the various magnetic states.
INTRODUCTION
Nontrivial magnetism-related properties such as superconductivity, multiferroicity or magnetoresistance of correlated electron systems [1][2][3][4][5] continue to be exciting fields of research in both theoretical and experimental condensed-matter science. Such experimental observations pose their specific challenges to a complete theoretical framework. [6][7][8][9] These macroscopic functionalities may intricately connect with quantum phase transitions, strictly speaking, occurring at zero temperature and corresponding fluctuations on the border of distinct phases of a quantum phase transition. [10][11][12][13][14][15][16][17][18] Such quantum phase transitions and fluctuations can be realised and finely tuned by a nontemperature control parameter such as pressure, chemical substitution or magnetic field. A complete understanding of such experimental observations necessaries a full reveal of all possible ground states and especially magnetic-field-driven phase transitions and fluctuations of magnets, which is the central topic of our present study focusing on a theoretical calculation accommodating competitive and cooperative interactions [19][20][21][22][23] for collinear antiferromagnets.
For a collinear antiferromagnet below the Néel temperature, when a magnetic field (B) applied along the antiferromagnetic (AFM) easy axis reaches a critical value (B SFO ), the AFM sublattice spins suddenly rotate 90°so that they are perpendicular to the original AFM easy axis. This is the traditional spin-flop (SFO) transition, typically a first-order (FO) type in character. After this, the flopped spins gradually tilt toward the field direction with increasing field strength (B4B SFO ) until they are completely aligned at a sufficiently high magnetic field (B SFI ), which is the so-called spin-flip (SFI) transition. These magnetic-field-driven magnetic phase transitions of collinear antiferromagnets are schematically sketched in Figure 1.
Experimentally, identifying the nature of a SFO transition, FO or second order (SO) remains a major challenge in condensed-matter science mainly due to the technically unavoidable effect of misalignment between the relevant AFM easy axis and an appliedfield direction. Néel for the first time proposed theoretically the possibility for a SFO transition in 1936. 24 Subsequently, it was observed experimentally in a CuCl 2 ·2H 2 O single crystal. 25 Since then, the SFO phase transition has been extensively investigated, and the corresponding phenomenological theory has been comprehensively developed, generally confirming that it is of FO in nature. [26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41] However, most of the reported 'sharp' SFO transitions 34,42,43 display no magnetic hysteresis effect characteristic of a FO phase transition. This was attributed either to a low magnetic anisotropy 34,43 or to a softening of surface magnons. 44 In addition, some FO SFO transitions are obviously continuous occurring in a broad magnetic-field range, which was attributed either to a domain effect resulting from an inhomogeneous character of the diluted systems or to a misalignment of applied magnetic field with regard to the AFM easy axis. 32,45,46 On the other hand, such kind of continuous magnetic phase transitions, 47 32,45,46 is usually beyond the present experimental accuracy, and any larger misalignment may change a FO-type into a SO-type broadening SFO transition. Although early theoretical calculations predicted an intermediate regime bordering with the AFM and spin-flopped states, 51-53 these either have not yet been confirmed based on the principle of minimum total potential free energy, or the used theoretical model overlooked the singe-ion anisotropy that is of very important for lanthanides and actinides. [54][55][56][57][58] In addition, a rotating ferromagnetic (FM) phase was also predicted for a SFO antiferromagnet while increasing magnetic field along the AFM easy axis. 51,59 However, such kind of unusual magnetic phase has never been observed experimentally, which renders the validity of the phase undecided.
Herein, the magnetic-field-driven SFO and SFI phase transitions of localised collinear antiferromagnets with two sublattices are explored with a mean-field theoretical calculation. Our model unifies all possible magnetic ground states and reveals some interesting magnetic phase transitions and coexistences of some of the magnetic states. This study unambiguously reveals a SO-type SFO transition via comparing numerically the relative sublattice-moment-related free energy. We conclusively rule out the possibility for a rotating FM-like magnetic state. 51 This model calculation consistently covers all possible magnetic-field-driven magnetic states of collinear antiferromagnets. We further deduce an alternative to the estimation of magnetic exchange parameters (J, γ and D).
Derived equilibrium magnetic states
Possible equilibrium magnetic states can be derived from different combinations of the FO partial differential equations, i.e., ð1Þ 2DM 0 sin ϕ cos ð2βÞ þ γM 0 sin ϕ -B sin β ¼ 0; In the following, the four combinations (1-4) will tentatively be solved, and the resulting solutions will be connected with physical meanings accordingly.
(i) First, the combination (1) involves the most formidable challenge, and one can obtain ultimately two solutions: The former case (A) is associated with an AFM ground state as shown in Figure 1a, whereas the latter case (B) signifies a correlated change of ϕ with β. As shown in Figure 1, 0°⩽ ϕ ⩽ 90°. Consequently, there are two boundary magnetic fields corresponding to the second solution of the combination (1) (i.e., a SFO transition). When ϕ = 0, sin ϕ = δ sin β = 0. One can deduce that the initial magnetic field for the beginning of the SFO transition When ϕ ¼ π 2 , δ sin β = 1, therefore, the final magnetic field for the ending of the SFO transition When B SFOB ⩾ B SFOF , one can derive the precondition of a FO SFO transition: D ⩾ 0 and 2D+γ40. On the other hand, when B SFOB o B SFOF , i.e., -1 2 γ <D<0, a surprising SO SFO transition occurs spontaneously, which originates from a negative singleion anisotropy (relative to the magnetic interaction) that is additionally restricted to a certain range by the anisotropic exchange interaction (γ).
(ii) The combination (2) implies that which corresponds to the process of a SFI transition (Figure 1c). When β ¼ π 2 , implying a spin-flipped (SFID) state ( Figure 1d). Therefore, the SFI transition field B SFI depends not only on the moment size M 0 but also on the values of J, γ and D. (iii) From the combination (3), one can deduce that When β ¼ π 2 , both sublattice moments M + and M − are perpendicular to the AFM axis M 0 -M 0 þ , forming a rotating (with magnetic field B) FM-like state. The value of ϕ can intrinsically be modified by a change in magnetic field B.
Free energy calculations
To calculate free energy scales of the deduced magnetic states from the four combinations (1)(2)(3)(4), one can substitute their respective equilibrium phase conditions as discussed above back into Equation (28) and then obtain: the one corresponding to the deduced SO SFO transition ( Figure 1b) is presented individually as below due to its complexity: To quantitatively compare the free energies, Equations (12,13,14,15,16,17) in the following the comparison will be divided into three parts based on the value of D.
Nature of the SFO and SFI transitions As shown in Figure 2a, an AFM state persists up to B SFOB , then a SO SFO transition occurs in the range of magnetic fields B SFOB ⩽ B ⩽ B SFOF , followed by a SFI transition at B4B SFOF . Finally, all sublattice spins are aligned along the magnetic field direction at B SFI . By contrast, as shown in Figure 2b,c, an antiferromagnet experiences a FO SFO transition at B FO-SFO and then enters directly into the process of a SFI transition. It is pointed out that an occurrence of the SFO transition is attributed to the existence of magnetic anisotropy, γ and/or D. In the SFOD state, Therefore, the angle β can never be zero, which is a sharp contrast to the traditional FO-type SFO transition, where β = 90°in the SFOD state.
We calculate the angles ϕ and β, and further confirm the FO and SO SFO transitions. The nature of a SFO transition can also be recognised by the character, continuous or discontinuous, of the first derivative of the free energy ( Figure 2) with regard to magnetic field based on the Ehrenfest's criterion 61 for the FO and the SO phase transitions. A continuous slope change is clearly illustrated in Figure 2d, where one can easily deduce that the second derivative ∂ 2 E/∂ 2 B is indeed discontinuous. By contrast, an abrupt change in the slope is obviously displayed at B FO À SFO in Figure 2e,f. To better understand the magnetic phase transitions with field, the values of the angles ϕ and β ( Figure 1) for all deduced magnetic states are calculated in the whole magnetic field range as shown in Figure 3a
DISCUSSION
Equilibrium phase conditions of the magnetic states and nature of the magnetic phase transitions We first rule out the rotating FM-like state. It is clear that in the magnetic-field range B ⩽ B FM , the relative sublattice-momentrelated free energy E FM − like is always higher than those of other allowed magnetic states (Figure 2), indicating that the rotating FM-like state does not exist at all in view of its relatively higher free energy.
To clearly present the deduced magnetic ground states and associated magnetic phase transitions with magnetic field, we calculate the three-dimensional (J, γ and D) and the two-dimensional (γ and D) phase diagrams as shown in the up Figure 5. In this study, for an antiferromagnet J40. When Jo 0, on the other hand, the magnet houses a FM state (Figures 4a,f and 5f). In addition, for the existences of the SFO (FO or SO) and SFI transitions, B SFOF 40 (Figure 1b; Equation (20)), B SFO 40 (Figure 1c; Equation (22)) and B SFI 40 (Figure 1d; Equations (20) and (22)). One thus deduce that J4 1 2 ð2D -γÞ for the validation of these magnetic states. Furthermore, by comparing Equation (12) with Equation (15), one can finally conclude that there exists the possibility for a FM state even when J40, as shown in Figures 4f and 5f, where 0<J < 1 2 ð2D -γÞ. From foregoing remarks, we know that for a FO SFO transition, D ⩾ 0 and D4 -1 2 γ. By including the condition of J4 1 2 ð2D -γÞ for the validated existence of an antiferromagnet, one can divide the FO SFO transition into two regimes: (i) FO SFO transition 1: D4 7 1 2 γ and J4 1 2 ð2D -γÞ (Figures 1c and 4a); (ii) FO SFO transition 2: 0 D 7 1 2 γ (Figures 1c and 4b). In addition, for a SO SFO transition, -1 2 γ <D<0 (Figures 1b and 4c). It is pointed out that when -1 2 γ <D< 1 2 γ, it is always true that J4 1 2 ð2D -γÞ. The difference between the two types of FO SFO transitions (1 and 2) in the context of J is that for the FO SFO transition 1, J40 and J4 1 2 ð2D -γÞ; by contrast, for the FO SFO transition 2, J can be any values larger than zero. As shown in Figure 4d, when D ¼ -1 2 γ, B SFOB = B SFOF = 0 (Equation (20)). Therefore, the antiferromagnet directly enters a SFI transition (Figure 5b). To further demonstrate this interesting magnetic phase transition, we calculate the relative free energies and the variations of the angles ϕ and β (with the parameters M 0 = 4 μ B , J = 2 T/μ B , D = − 0.2 T/μ B and γ = 0.4 T/μ B ) as shown in Figure 3c,d. It is clear that this magnetic phase transition is theoretically favourable. It is more interesting that if J = − γ, Equation (12) = Equation (15), which implies that the AFM state can coexist with the SFID state (Figure 5c,g). Based on the above discussion, it is reasonable to deduce that when J = 0 (a paramagnetic state) and D = γ = 0 (without any magnetic anisotropy), all paramagnetic spins will direct and be bounded to an appliedfield direction when B40 (Figure 5d). This is the so-called superparamagnetic state.
When D< -1 2 γ, E AFM (Equation (12)) is always larger than E x-axis (Equation (13)), which indicates that the AFM easy axis will change from the z to the x direction (Figures 4e and 5e). Therefore, the AFM easy direction is determined by the competition between magnetic anisotropies, γ and D.
An alternative method of estimating the magnetic exchange parameters (J, γ and D) As foregoing remarks, when -1 2 γ <D<0 (Figure 1b), a SO SFO transition occurs in the antiferromagnet. With the known exchange parameters (J, γ and D), one can calculate the SFO (B SFOB and B SFOF ) and SFI (B SFI ) fields, i.e., On the other hand, if the values of B SFOB , B SFOF and B SFI are known, one can calculate the corresponding values of J, γ and D according to the following deduced equations from the above Equation 20, i.e., When D ⩾ 0, D4 -1 2 γ and J4 1 2 ð2D -γÞ (Figure 1c), a FO SFO transition occurs, and Although it is impossible to solve the above Equation 22 to extract the detailed values of J, γ and D, one can deduce that Hence, one can calculate two special cases, i.e., and if γ ¼ 0 then D ¼ applied, all spins will directly go to the SFID state and point to the applied-field direction. This is the so-called superparamagnetism. (e) When J40 and D4 -1 2 γ, the AFM easy axis is along the z direction, whereas when D< -1 2 γ, the x axis becomes an AFM easy direction. (f) When J40 and J4 1 2 ð2D -γÞ, the magnet houses an AFM state, whereas when J < 1 2 ð2D -γÞ, the spins are ferromagnetically arranged. (g) When J40 and J ¼ 1 2 ð2D -γÞ, it is reasonable to deduce that the AFM state coexists with the FM state.
Traditionally, through fitting the relevant Q (momentum)-E (energy) spectra recorded usually by inelastic neutron scattering, one can extract the magnetic exchange parameters (J, γ and D).
Here based on our model, one can first obtain the values of B SFO (B SFOB and B SFOF ) and B SFI for a suitable SFO and SFI compound, e.g., via magnetisation measurements using a commercial physical property measurement system or a Quantum Design MPMS-7 SC quantum interference device magnetometer (San Diego, CA, USA). Then, the values of J, γ and D can be estimated according to Equations (21), (24) or (25).
CONCLUSIONS
In summary, a consistent mean-field calculation of the SFO and SFI phase transitions has been performed for localised collinear antiferromagnets with two sublattices. In this study, we can unify all possible magnetic ground states as well as related magnetic phase transitions within one model. Some special magnetic states are derived with a change in the strength of magnetic field: (i) A rotating FM-like state (that is finally ruled out); (ii) A SO SFO transition; (iii) A direct SFI transition from the AFM state without experiencing a SFO transition as usual; (iv) An existence of the FM state; (v) A coexistence of the AFM and FM states even when the magnetic exchange is of AFM.
Based on the quantitative changes of the ground-state free energies, the case (i) has been clearly ruled out, and the others indeed exist theoretically. This model calculation unifies the AFM state, FO and SO SFO transitions, SFOD state, SFI transition as well as the SFID state. Their respective phase boundary conditions are extracted and clearly listed. We find an alternative to the estimation of magnetic exchange parameters (J, γ and D). Inelastic neutron scattering studies of suitable real SFO and SFI compounds to extract the relevant parameters for an experimental verification of the phase boundary conditions and especially the studies in the intermediate coupling regimes to explore possible quantum fluctuations will be of great interest and challenge, and Equation (28) merits a tentative expansion with more agents such as temperature and an angle denoting the misalignment between AFM axis and applied magnetic field direction.
MATERIALS AND METHODS
The calculation presented here is limited to purely localised collinear AFM systems, ignoring the effect of valence electrons on magnetic couplings. For a two-sublattice AFM spin configuration (Figure 1), the corresponding Hamiltonian terms consist principally of magnetic exchange, spin-exchange anisotropy, single-ion anisotropy and Zeeman coupling to an external magnetic field. Assuming that an AFM easy direction consistent with the localised sublattice moments M + and M − is along the z axis ( Figure 1a) and that the subsequent completely flopped spins are parallel to the x axis (Figure 1b and c), the sublattice-moment vectors within the xz plane (Figure 1b) can thus be written as: respectively, wherex andẑ are the unit vectors along the x and z axes, respectively, and the angles ϕ, β 1 and β 2 are defined as marked in Figure 1. Therefore, the resultant sublattice-moment-related free energy (E) within a mean-field approximation can be calculated by: where the four terms in turn denote the four Hamiltonian components as the foregoing remarks, and J (40), γ and D are the AFM coupling, anisotropic exchange and single-ion anisotropic energies, respectively. In an unsaturation magnetic state, with increasing magnetic field B (|| z axis) as shown in Figure 1a,b, the sublattice moment M + (M − ) increases (decreases) as a consequence, which leads to β 1 oβ 2 . At the lowest temperature T = 0 K, i.e., in a real saturation magnetic state, M + ≡ M − = M 0 and thus β 1 ≡ β 2 = β. Hence, Equation (27) can be simplified as: | 4,601.8 | 2016-10-14T00:00:00.000 | [
"Physics"
] |
PALYNOFACIES STUDIES OF THE LATE CRETACEOUS (TURONIAN-MAASTRICHTIAN) STRATA FROM
Geographically located within Gombe, Bauchi, and Yobe states, the Gongola Sub-basin has drawn attention from several academics looking to increase Nigeria's oil reserves relative to the inland frontier basins. This paper's goal is to determine the thermal maturity of strata from drill samples of the Fika shales in Nigeria's Northern Benue Trough, Gongola Sub-basin. To predict the well section's maturity and kerogen type, this study uses optical and organic facies studies. Twenty-seven samples of ditch cuttings were prepared using the universally accepted acid palynological procedure. The dispersed mounted slides revealed a variety of pollen, spores, and palynomacerals upon microscopic inspection. The well section under study exhibits a range of thermal maturation from mature to late mature, indicating the possibility of producing oil and gas. This corresponds to a range of thermal alteration values of 4 – 6 and equivalent vitrinite reflectance (%Ro) values of 0.6% – 1.35%. The total recovered Sedimentary Organic Matter (SOM) in this study was classified into Palynomorphs, Amorphous Organic Matter (AOM) and Phytoclasts and plotted on a ternary graph. The Percentage frequencies of AOM, Phytoclast and Palynomorphs were compared with the zones of the Tyson Ternary diagram. Most of the distribution frequencies lie within zones II, IX, VI and Iva suggesting Kerogen types III (gas-prone) and II (oil-prone)
INTRODUCTION
Palynomorphs can be used successfully for a wide range of geological studies apart from biostratigraphy, including sediment provenance Studies (Vecoli and Samuelsson, 2001), Structural geology (Delcaillau et al., 1998;Dorning1986), geo-thermometry (Pross et al., 2007) and source rock potential (Jiang et al., 2016), because organic matter (OM) is known for its high sensitivity to thermal evolution.Palynomorphs, such as sporomorphs and acritarch, are composed of impervious organic polymers, the precise nature of which is still unknown.One noteworthy feature of these polymers is the internal reorganization of their molecular structure, which is brought about by processes that occur during burial (such as depth and duration, geothermal flux, and fluid geochemistry).These processes result in colour alteration that is directly related to the maximum temperature attained.Furthermore, weathering-related post-depositional oxidation can brighten the colour of palynomorphs in addition to corroding or even destroying them (Traverse, 2008 and references therein).Visual kerogen typing, a form of organic petrography, is the microscopic technique used to examine kerogen.It is based on the idea that optically categorized kerogen particles can be connected to the hydrocarbongenerating potential (Staplin, 1969) of a source rock.These microscopic observations are Mostly done using concentrated kerogen with refractory minerals prepared on a slide.At the same time, the Thermal Alteration Index (TAI) is a maturity indicator based on observations of the progressive change in the colour of spore and pollen particles in kerogen with increasing maturity (e.g., Gutjahr, 1966;Correia, 1969;Staplin, 1969).Staplin (1969) created the first formal scale, which was a 1-5 scale with + and − notations to indicate intermediate steps.In contrast to other geochemical methods for studying the thermal alteration of sediments, these are sophisticated but rather costly procedures.A few numbers of researchers, including Obaje (2004) and Abubakar (2008), conducted investigations on the petroleum potential of the Gongola Sub-basin following Chevron Nigeria Ltd.'s drilling attempt in 1992.They use the organic geochemical approach to study the source rock maturity of the basin.Abubakar, (2014) also reviewed the petroleum potentials of the Benue Trough and Anambra Basin making comparisons with other West and Central African basins.Recent works include the work of Raji, (2015) who used borehole samples from the Gongola Sub-basin to study the Rock-Eval pyrolysis, vitrinite reflectance, and infrared spectroscopy to evaluate their organic richness, thermal maturity, and petroleum-generating potential.The purpose of this paper is to study the source rock potential of the strata penetrated at Borehole Jauro Jatau community at the outskirt of Gombe Town (Figure. 1) by using Spore/Pollen colour changes and Organic Facies to determine the maturity of the sediments.This study determines thermal alteration by visually evaluating palynomorph colour (i.e., Spore Color, Thermal Alteration, and Palynomorph Darkness).This method is less expensive and reasonably easy to use when compared to the costly methods used by previous authors.The Benue Trough, a rift basin that stretches 800 km in length and 150 km in width and contains up to 6,000 meters of Cretaceous-Tertiary sedimentary strata, the Gongola subbasin is situated northeast of the Benue Trough.According to (Akande and Erdtmann, 1998), both post-deformational strata of Campanian-Maastrichtian to Eocene ages are found in the Gongola Sub-basin.Additionally, a lot of anticlines and synclines have been created by folding, faulting, and local uplift of strata that predate the mid-Santonian compressional phase (Benkhelil, 1989).In the aftermath of mid-Santonian magmatism and tectonism, the Benue Trough's depositional axis was moved.The Northern Benue Trough is composed of the Gongola Sub-basin and the Yola Sub-basin.The Lau-Gombe Sub-basin is the third basin; while it is not wellknown, some have acknowledged it (Akande et al;1998Whiteman, 1982)).The stratigraphic succession in the Gongola and Yola Basins is illustrated in (Figure .2).The lacustrine and fluvial Bima were unconformably deposited in the Precambrian basement during Albanian time.The formation contains carbonaceous clay, shales, and mudstones.The Bima was divided by Carter et al. (1963) into lower, middle, and upper units.The Middle Bima is said to be Shales with some limestones and is thought to have been deposited under more aqueous anoxic conditions (lacustrine, briefly marine).The Yolde Formation is Cenomanian and rests conformably on the Bima Sandstone.This formation represents the beginning of a marine incursion into this part of the Benue Trough and was deposited in a transitional/coastal marine environment made up of clays, claystone, shales, limestones, and sandstones.In the Gongola Sub-basin, the Pindiga Formation rest conformably on the Yolde Formation.This formation, which was deposited in a transitional/coastal marine environment, marks the start of a marine transgression into the Benue Trough.It is made up of clays, claystones, shales, limestones, and sandstones.From the Turonian to the Late Maastrichtian period, a total marine invasion of the Northern Benue Trough is represented by this Formation.Lithologically, the Kanawa Member is represented by Shales, pale limestones, and minor sandstones that are intercalated with dark or black carbonaceous limestones.The Middle Marine Sandstones Members of Dumbulwa, Daben Fulani, and Gulani are then deposited on top of this formation in some areas.The Fika Shale consist of extremely fissile, bluish-green carbonaceous, occasionally pale-coloured gypsiferous shales associated with rare limestones.The Late Cretaceous strata are represented by the Gombe Sandstone and followed by the almost fully continental Kerri-Kerri Formation.The Gombe sandstone is dominantly composed of sandstones, clay, coal, lignite and coaly shale intercalations.The Gombe sandstone is mostly made up of intercalations of clay, lignite, coal, and sandstones with coaly shale.The Tertiary Kerri-Kerri Formation consists of sandstones, claystones, and siltstones.
MATERIALS AND METHODS
The lithologic description was done by visual assessment of each sample during drilling using a colour chart to establish the colour, a grain size chart and magnifying hand lens were used for the grain-size description.The borehole site is located within the outcrop of Fika Shale at 10° 14' 54.35"N and 11° 9' 34.87"E.The borehole was drilled to a total depth of twenty-seven meters and sampled at 1m intervals.The samples were subjected to the palynological preparation technique of (Batten and Stead, 2005) to recover the palynomorphs from the sediments.The analyses were carried out at the National Centre for Petroleum Research and Development (NCPRD), located at the Abubakar Tafawa Balewa University Bauchi Palynological Laboratory.Twenty grams of each sample was treated with 35% HCL acid under a fume cupboard for the removal of carbonates.The residues were completely washed with distilled water.Then 48% HF acid was added to the sample and kept for 24 hours to dissolve the silicates present in the samples.Thereafter, the residue was diluted with distilled water and carefully decanted, then followed by complete washing with distilled water to remove fluoro-silicate compounds which are usually formed from the reaction with HF.The residue is now sieved and separated.The sieved residue was not oxidized using concentrated nitric acid (HNO3), because this treatment will selectively remove amorphous organic matters that often co-exist with the palynomorphs, but can lighten the dark-hued palynomorphs.The residues for palynological slides were then stained and prepared with glycerin jelly.The Olympus CX41 Binocular Light Transmitted Microscope was used to examine the slides.The identification of palynomorphs and palynomacerals was accomplished with the help of palynological albums and the published works of earlier researchers, including Chukwuma-Orji (2018), Oboh-Ikuenobe (1998), Abubakar (2011), andNjoh (2017).The visual pollen and spore colouration of the identified palynomorphs for each sample was compared with the thermal alteration scale developed by Batten (1980).This was also correlated with the thermal alteration index, vitrinite reflectance and degree of maturation as suggested by Batten (1980).The total recovered Palynomacerals in this study were classified into Palynomorphs, Amorphous Organic Matter (AOM) and Phytoclast groups and plotted on a ternary diagram of Tyson 1995.The Percentage frequencies of sedimentary organic matter (AOM), Phytoclast and palynomorphs were compared with the zones of the ternary diagram.
RESULTS AND DISCUSSION Lithology organic matter Composition and colour variation
The lithologic composition and organic matter variation of the studied well (Fig. 3), consists of shales, fine-grained sandstone, and sandy mudstones.Visual observations of the ditch cutting samples with the Munsell colour chart indicate that the shale has fissility, and has a colour range from light grey to greenish grey, occurring at a depth interval of 0 -20 m of the well section.The palynomorphs and palynodebris constituents recovered at various depth intervals are generally, abundant, and diverse (Fig. 4).A total count of 526 palynomorphs were recovered.The pollen and spores' constituents occur in abundance with few counts of dinoflagellate cysts and others.Their distribution and influence are stratigraphically presented in Figure 3.
Thermal Maturation
Pollen and Spore colour and their corresponding thermal alteration index (TAI) and vitrinite reflectance (%Ro) values have been developed by several academics, including (Staplin, 1969;Collins, 1990;Batten, 1980), for the goal of determining thermal maturation.However, the results presented here (Table 1; Fig. 5) are based on the Batten (1980) thermal alteration scale for spore colouring, which is scored from 1 to 7 and indicates the colour change of polymorphs from yellow, orange, brown, to black.The palynomorphs/palynodebris colouration ranges from light brown, light medium brown to dark medium brown (4-5) for the first 20m of the borehole which signifies a matured oil generation stage and a dark brown through very dark brown to black which is represented by numbers 5-6 ranking on the Batten classification table indicating Mature to late matured stages respectively.Special consideration was given to longranging species like Longapertites sp., Tricolporopollenites sp., and Cingulatisporites Ornatus, and they all show this colour changes with increasing burial.The degree of thermal maturation of the studied samples can be interpreted as matured to Late Matured stage and, hence prone to oil and gas production.This corresponds to a range of thermal alteration values of 4 -6 and an equivalent vitrinite reflectance (%Ro) value of 0.6% -1.35% (Fig 5 ).
Figure 5: Correlation of pollen/spore exine wall colour from Jauro Jatau sediments with palynomorphs colour chart and there corresponding thermal alteration index and vitrinite reflectance of Batten (1980).
Discussion
The use of thermal maturation as a tool for accessing the Petroleum Potentials of sedimentary basins has been underrated by different scholars and oil exploration companies, this method has been one of the most reliable methods as shown by this research when we crosscheck with those researchers that employed the conventional methods which include Rock-Eval pyrolysis and Vitrinite reflectance, the two major studies carried out by earlier researches in this basin indicates an immature kerogen type for the Pindiga formation why also suggesting a more deeper formation for oil exploration, that was why this study was carried out on only the Fika shale within the Pindiga formation in order to increase the resolution of the results and the results shows a different characteristics for the Fika shale.Considering the cost of using those methods, palynology is a cheaper and more reliable substitute.This research was done to support and add to the existing knowledge with regards to the Late Cretaceous Petroleum System in the Gongola Sub-basin, after the successful exploration of oil from the early Cretaceous system all efforts are to develop a reliable system for the late Cretaceous units of the basin, from the thermal maturation and organic facies results from these studies, the Fika shale has shown to be oil and gas prone which we believe will attract more government interest in the basin.
CONCLUSION
The need to understand the hydrocarbon prospectivity and contribute to the pre-requisite knowledge required to attract investors to the inland Gongola Sub-basin necessitated this palynofacies study and thermal maturation analysis.Twentyseven (27) cuttings samples were subjected to the standard acid palynological preparation method.Microscopic analysis of the strewn mounted slides yielded diverse pollen, spores and palynomacerals.The palynomorphs/palynodebris colouration ranges from light brown, light medium brown to dark medium brown (4-5) for the first 20m of the borehole which signifies a matured oil generation stage.Also, there was a 7m of Dark brown through Very dark brown to Black colouration which is represented by numbers 5-6 ranking on the classification table and represents a late Matured Stage which indicates a more gas-prone stage.The total recovered sedimentary organic matter in this study was classified into Palynomorph, AOM and Phytoclast groups and plotted on a ternary graph.The Percentage frequencies of sedimentary organic matter (AOM, Phytoclast and palynomorphs) were compared with the zones of the ternary diagram.Most of the distribution frequencies are coming under the zone II, IX, VI and IVa areas, which indicated Kerogen type III (gas prone), and II (Oil Prone), respectively.The results obtained from this study have indicated a good prospectivity of the Jauro Jatau borehole.However, prediction of the hydrocarbon potential of the Gongola Sub-basin using spore/pollen colour and organic facies is not sufficient to conclude for successful accumulation of oil and gas, other geological factors must be considered.Hence, the result of this research should serve as a preliminary view of the hydrocarbon potential of the Fika shales Gongola Sub-basin and should be supplemented by detailed geochemical analytical techniques (such as Rock-Eval pyrolysis, Vitrinite reflectance (%Ro), Total organic carbon (TOC) and numerical thermal alteration index) for more reliable determination of the bulk source rock potential.
Figure 1 :
Figure 1: Map showing the geology of the Northern Benue Trough with borehole location.(Modified after Goro et al., 2021)
Table 1 :
Microscopic observation of spore/pollen and palynodebris colour changes and their corresponding thermal alteration index (TAI) and degree of maturation in Jauro Jatau borehole Depth proposed a ternary kerogen plot consisting of kerogen categories, AOM (Amorphous organic Matter), Phytoclast and Palynomorphs.Based on the ternary plot, Late Jurassic sediments and other Mesozoic-Cenozoic rocks have been studied and found that palynological kerogen having similar composition and paleoenvironmental settings (from different geologic times) tends to occupy the same position in the ternary plot.The total recovered Sedimentary Organic Matter in this study was classified into Palynomorph, AOM and Phytoclast groups and plotted on a ternary diagram of Tyson 1995.The Percentage frequencies of sedimentary organic matter (AOM), Phytoclast and palynomorphs were compared with the zones of the ternary diagram (cf.Tyson 1995) (fig 6).Most of the distribution frequencies fall under zones II, IX, VI and IVa areas which indicate Kerogen type II (Oil Prone), and III (gas prone), respectively (Table 2.). | 3,516.8 | 2024-03-11T00:00:00.000 | [
"Geology"
] |
The red fluorescent protein eqFP611: application in subcellular localization studies in higher plants
Background Intrinsically fluorescent proteins have revolutionized studies in molecular cell biology. The parallel application of these proteins in dual- or multilabeling experiments such as subcellular localization studies requires non-overlapping emission spectra for unambiguous detection of each label. In the red spectral range, almost exclusively DsRed and derivatives thereof are used today. To test the suitability of the red fluorescent protein eqFP611 as an alternative in higher plants, the behavior of this protein was analyzed in terms of expression, subcellular targeting and compatibility with GFP in tobacco. Results When expressed transiently in tobacco protoplasts, eqFP611 accumulated over night to levels easily detectable by fluorescence microscopy. The native protein was found in the nucleus and in the cytosol and no detrimental effects on cell viability were observed. When fused to N-terminal mitochondrial and peroxisomal targeting sequences, the red fluorescence was located exclusively in the corresponding organelles in transfected protoplasts. Upon co-expression with GFP in the same cells, fluorescence of both eqFP611 and GFP could be easily distinguished, demonstrating the potential of eqFP611 in dual-labeling experiments with GFP. A series of plasmids was constructed for expression of eqFP611 in plants and for simultaneous expression of this fluorescent protein together with GFP. Transgenic tobacco plants constitutively expressing mitochondrially targeted eqFP611 were generated. The red fluorescence was stably transmitted to the following generations, making these plants a convenient source for protoplasts containing an internal marker for mitochondria. Conclusion In plants, eqFP611 is a suitable fluorescent reporter protein. The unmodified protein can be expressed to levels easily detectable by epifluorescence microscopy without adverse affect on the viability of plant cells. Its subcellular localization can be manipulated by N-terminal signal sequences. eqFP611 and GFP are fully compatible in dual-labeling experiments.
Background
Since the cloning of the green fluorescent protein (GFP) cDNA and its first heterologous expression in the early 1990s [1,2], the use of intrinsically fluorescent proteins (IFPs) has become one of the most powerful tools in molecular and cell biology. These proteins are applied as reporters in gene expression studies, as indicators of intracellular physiological changes, for monitoring dynamics of organelles and proteins, for investigation of protein-protein interactions in vivo and as fusion partners in studies of the subcellular localization of proteins [3,4].
From the very beginning, many efforts have been made to optimize various features of the native GFP with the aim to improve its application in biological research. These modifications include for instance improved folding efficiency, higher expression level or increased solubility [3]. Cyan and yellow fluorescent derivatives of GFP have been created for investigations requiring the simultaneous distinguishable tagging of more than one protein at a time [5,4]. These are used to compare the spatial distribution or the expression pattern of two or more proteins and for the analysis of protein-protein interactions by FRET. So far no red fluorescent variant of GFP has been reported. Recently, investigation of several non-bioluminescent anthozoan species has led to the isolation of various true red fluorescent proteins (RFPs) [6]. Among these, DsRed and its derivatives are the most commonly used in molecular and cell biological research [7].
Since plants contain a large number of multi-gene families, comparisons of the subcelluar localizations of the individual members are necessary as part of the comprehensive analysis of these proteins. The possibility to label several proteins with different fluorescent proteins is a great advantage when analyzing their respective subcellular localization. As a crucial prerequisite for such studies, the compartments to which the fusion proteins are targeted have to be unequivocally identified. This is often done by staining with compartment-specific dyes. Mitochondria for instance can be visualized by staining with the red fluorescent dye MitoTracker ® Red CM-H2Xros (Molecular Probes, Eugene, OR) which specifically interacts with the respiratory chain. The staining procedure, however, is time-consuming, invasive and short-lived and can be replaced simply by co-expression of a spectrally different second fusion protein with a defined subcellular localization. Additionally, the fused target sequence of the fluorescent marker protein can be readily exchanged, which allows selective labeling of nearly every subcellular structure under investigation without the need to have a specific dye for the different compartments.
Despite the discovery of a multiplicity of fluorescent proteins in the red spectral range in recent years [6], so far almost exclusively different forms of DsRed have been used for studies in molecular cell biology in plants [8][9][10][11][12]. These proteins are applied in dual-labeling experiments together with GFP or alone to report on promoter activity or as a marker in transgenic plants. To introduce an alternative RFP for the application in plant cells and to expand the palette of red fluorescent reporters for plant research, we tested the suitability of the red fluorescent protein eqFP611 from the sea anemone Entacmaea quadricolor as a marker in subcellular localization experiments in plants. eqFP611 shows far-red fluorescence with excitation and emission maxima at 559 nm and 611 nm, respectively, and therefore exhibits an extraordinarily large Stokes shift of 52 nm [13]. In contrast, the respective values for DsRed are 558 nm, 583 nm and 25 nm, respectively [13]. Both eqFP611 and DsRed have comparable molecular masses of 25.93 kDa and 26.05 kDa, respectively, for the monomers. The extinction coefficient of eqFP611 (78,000 M -1 * cm -1 ) is slightly higher than that of DsRed (75,000 M -1 * cm -1 ). Fluorescence quantum yields for eqFP611 and DsRed are 0.45 and 0.7 and the photobleaching quantum yields are 3.5 * 10 -6 and 0.8-9.5 * 10 -6 , respectively. Similar to DsRed, the emission of eqFP611 is constant between pH 4 and 10. Though both form tetramers at physiological concentrations, eqFP611 has a reduced tendency to oligomerize and aggregate as compared to DsRed. With a maturation half-time t 0.5 of 4.5 h at 24.5°C [14], fluorophore maturation of eqFP611 is much faster than that of DsRed (t 0.5 > 24 h at 24.5 °C) [13].
We demonstrate that native eqFP611 can be expressed in plant cells. Fusions of this protein with respective N-terminal signal sequences can be efficiently targeted to mitochondria and peroxisomes. We performed co-expression experiments with eqFP611 and GFP and created vectors for the straightforward application of the eqFP611 gene in plants.
eqFP611 can be functionally expressed in plant cells
Recently, eqFP611, the gene for a red fluorescent protein from the sea anemone Entacmaea quadricolor, has been cloned and characterized [13,14]. This protein has been succesfully expressed in bacteria and animal cells [13], but has not yet been tested in plants.
To test its use as a marker in plants, the native eqFP611 cDNA was cloned into a pUC19-based vector. In the resulting plasmid peqFP611, expression of this gene is governed by the strong constitutive cauliflower mosaic virus 35S promoter (CaMV 35S) and the nopaline synthase terminator (NOS T) sequences. Upon inspection of Nicotiana tabacum mesophyll cells transfected with this plasmid in the epifluorescence microscope, the red fluorescence was clearly detectable with a filter set (HQ545/ 30/HQ 610/75) usually used for visualization of MitoTracker Red and here later referred to as MitoTracker filter set (Fig. 1). The protein accumulates in the nucleus and in the cytosol, where it is evenly distributed and does not form any visible aggregates, but is clearly absent from the chloroplasts. No such fluorescence was detectable in untransfected control cells, confirming that the red fluo-rescence indeed originates from the expression of the introduced eqFP611. Protoplasts were analysed 16 hours after transfection. Incubation for an additional 24 hours did not markedly increase the intensity of the red fluorescence, suggesting the maximal level of mature protein to be essentially reached within 16 hours after transfection. Protoplasts expressing eqFP611 looked perfectly normal and did not show any detrimental effects of this fluorescent protein.
These results show that eqFP611 can be readily used in plants, since the functional protein accumulates to detectable levels without any obvious adverse effects. In contrast to GFP, whose original jellyfish-derived cDNA was misspliced specifically in plants at a cryptic splice site [15], no modification of the eqFP611 coding sequence is necessary for efficient expression in plants.
As expected from its spectral characteristics, the fluorescence is easily detectable with a filter set (see above) that excludes the red autofluorescence of chlorophyll, a crucial advantage for an RFP applied in mesophyll cells. Similar to GFP [16], the native eqFP611 accumulates in the nucleus and in the cytosol in plant cells. Thus, it should be suited to investigate protein targeting into e.g. mitochondria, peroxisomes and plastids within plants. In HeLa cells, native, unmodified eqFP611 was also found in the nucleus and the cytosol [13].
Targeting eqFP611 to mitochondria
To investigate whether eqFP611 can indeed be used as reporter protein for the analysis of subcellular protein sorting, import into plant mitochondria was exemplarily tested. To this end, the presequence of the mitochondrial isovaleryl-CoA-dehydrogenase (IVD) was added to the Nterminus of eqFP611 (plasmid pIVD145-eqFP611). The IVD presequence was chosen because it has previously been found to efficiently target a GFP fusion protein exclusively to mitochondria [17]. In addition, the protein has been repeatedly detected in proteomic analyses of this organelle, demonstrating its unambiguous localization in mitochondria [18][19][20]. Inspection of the protoplasts transfected with pIVD145-eqFP611 using the MitoTracker filter set revealed the red fluorescence to be restricted exclusively to rod-shaped structures of 1 -2 μm in length distributed throughout the cell ( Fig. 2A). This pattern is characteristic for a mitochondrial localization of the fusion protein. No red fluorescence was detectable in other parts of the protoplasts. Thus, eqFP611 can be efficiently targeted to plant mitochondria, its subcellular localization being exclusively determined by the targeting information of the signal peptide fused to its N-terminus. Furthermore, this result confirms that eqFP611 is efficiently transported through two membranes while retaining its ability to fold properly for effective fluorescence. Similar to the native eqFP611, prolonged incubation of the protoplasts did not increase the intensity of the fluorescence.
The picture of the transfected protoplast displayed in Fig. 2A demonstrates nicely that the use of the MitoTracker filter set is appropriate to easily detect the red fluorescence of eqFP611 while effectively blocking chlorophyll autofluorescence. The latter is clearly visible through the FITC (fluorescein isothiocyanate) filter set (HQ 470/40/ HQ 500 LP), which in turn blocks the fluorescence of eqFP611 (Fig. 2B). This autofluorescence in the chloroplasts exactly fits to the areas without fluorescence in Fig. 2A. Furthermore, the untransfected cells surrounding the eqFP611-expressing protoplast in Fig. 2A clearly show that no other autogenous fluorescence is visible through the MitoTracker filter set.
To assess the relative stability of the eqFP611 fluorescence in plants, we qualitatively compared the time elapsed until bleaching of the red fluorescence in protoplasts transiently expressing IVD145-eqFP611 and of MitoTracker ® Red CM-H2Xros (Molecular Probes, Eugene, OR) used for staining of untransfected protoplasts. This latter mitochondria-specific fluorescent dye has excitation/emission maxima of 579 nm and 599 nm, respectively. When individual cells of both approaches were inspected under identical light conditions in the fluorescence microscope, the fluorescence of IVD145-eqFP611 was at least as stable eqFP611 without presequence
Co-expression of eqFP611 and smGFP4 in tobacco protoplasts
Experiments like subcellular localization studies in which one of the fluorescent proteins is used to mark a distinct cellular compartment, require the simultaneous expression of two different fluorescent proteins. If eqFP611 is to be used routinely in such applications, its expression must be fully compatible with other IFPs, e.g. GFP. To test whether co-expression of both fluorescent proteins is indeed useful, tobacco protoplasts were simultaneously transfected with the constructs pIVD145-eqFP611 and pIVD145-smGFP4. Both plasmids contain identical mitochondrial targeting sequences fused to the N-termini of eqFP611 or smGFP4, respectively. Most of the succesfully transfected protoplasts incorporated both plasmids and expressed both eqFP611 and smGFP4. Identical patterns of the red and the green fluorescence in these protoplasts confirmed the co-expression of both proteins in the same cell (Fig. 3). In addition to the GFP-derived green fluores-cence in the mitochondria, the red chlorophyll autofluorescence in the chloroplasts is seen with the FITC filter set (Fig. 3B).
To examine whether the transport into mitochondria of both fusion proteins occurs independently of each other and to exclude a possible chance "piggy back" effect during subcelluar transport of the two chimeric proteins, tobacco protoplasts were transfected with a different combination of plasmids. This time, pIVD145-smGFP4 was used for co-transfection with plasmid pKAT2-eqFP611, which latter encodes a recombinant protein of the peroxisomal targeting signal 2 (PTS2) [21] of 3-keto-acyl-CoA thiolase 2 (KAT2) [22] N-terminally fused to the eqFP611 reading frame. Red and green fluorescences were again found exclusively in the expected organelles (Fig. 4). The green fluorescence is observed in mitochondria, while the red fluorescence is visible in approximately 1 -2 μm large roundish structures, a shape expected for leaf peroxisomes. No green fluorescence is seen in these organelles and conversely no red fluorescence is detected in mitochondria. This strongly suggests that if there is any interference, it does not disturb the correct targeting of the Mitochondrially targeted eqFP611 To verify that the KAT2-eqFP611 fusion protein was indeed targeted to peroxisomes, pKAT2-eqFP611 was used for co-transfection together with p35S-N-TAP2(G)pex. The latter plasmid encodes a GFP fusion protein targeted to peroxisomal membranes by the C-terminal 36 amino acids of cotton ascorbate peroxidase (APX). As shown in Fig. 5, the patterns of the green and the red fluorescence overlap, indicating the correct peroxisomal localization of KAT2-eqFP611. Green fluorescence seems to be more intensive at the boundaries of the peroxisomes, while the red fluorescence is equally distributed within the organelles. This is consistent with the predicted intra-peroxisomal localization of the APX and KAT2 proteins, respectively. No green or red fluorescence is visible outside the peroxisomes. These experiments demonstrate that the N-terminal peroxisomal targeting signal 2 efficiently directs eqFP611 to the corresponding organelle and that this RFP can thus be exployed to study protein sorting into peroxisomes in plants.
Thus, as demonstrated by the expression in both mitochondria and peroxisomes, eqFP611 is a suitable partner for GFP in double-labeling experiments. When the two IFPs are co-expressed in the same cell, no mutual interference regarding development of fluorescence or intracellular sorting is observed. Additionally, both eqFP611 and GFP fluorescences can be easily distinguished by their emission spectra. The previously reported minor green Co-expression of eqFP611 and smGFP4 fusion proteins targeted to mitochondria S m S a E P fluorescence of eqFP611 was undetectable under the conditions used (Fig. 2B and 4B) [13].
Furthermore, despite the tendency of eqFP611 to form tetramers [13], its fusion proteins can be efficiently and reliably targeted to organelles. The transport across single (peroxisomes) or double (mitochondria) membranes does not interfere with the formation of the higher order structure necessary for emitting fluorescence. In addition, the fusion of a signal sequence to its N-terminus has no negative influence on the red fluorescence of eqFP611.
Expression of both eqFP611 and smGFP4 from a single plasmid
Transformation of Nicotiana benthamiana leaves by injection of Agrobacterium tumefaciens [23] containing IFP fusion genes is another fast and simple method for the analysis of the subcellular localization of a protein. This procedure is presumably closer to the in vivo conditions than protoplast transfection, since the transformed cells remain in the original tissue context. In addition, this approach does not require the relatively laborious preparation of protoplasts. In this case, expression of the two fusion proteins from the same plasmid is advantageous, since a single transformation event is sufficient to ensure that every transformed cell contains both IFP genes. Apart from that, expressing both fluorescent proteins from the Co-expression of peroxisomally targeted eqFP611 and mitochondrially targeted smGFP4 Figure 4 Co-expression of peroxisomally targeted eqFP611 and mitochondrially targeted smGFP4. Co-transfection of N. tabacum wild-type protoplasts with two separate plasmids encoding eqFP611 with a peroxisomal targeting signal 2 (pKAT2-eqFP611) and smGFP4 with a mitochondrial presequence (pIVD145-smGFP4). Images of a cell transfected with both constructs through MitoTracker (A) and FITC (B) filter sets, respectively. Scale bars: 10 μm. same plasmid under identical promoters should generate equal amounts of RFP and GFP within a cell. The entire procedure should be easier since only a single construct has to be handled. To investigate the feasibilty of this procedure, plasmid pIVD144-eqFP611-IVD145-smGFP4 containing both the eqFP611 and the smGFP4 genes with mitochondrial presequences each under control of a CaMV 35S promoter was constructed and first tested by transfection of tobacco protoplasts. Again, both red and green fluorescence could easily be detected in the same cell (Fig. 6). The fluorescence is found exclusively in mitochondria, the patterns of both red and green fluorescence being identical. This result is indistinguishable from the experiment with the same eqFP611 and smGFP4 expression cassettes encoded on two different plasmids (Fig. 3), but this time every transfected protoplast expressed both eqFP611 and smGFP4.
ible in mitochondria of epidermal cell layers (Fig. 7), demonstrating the convenient use of the corresponding vector in this system.
Tobacco plants stably expressing mitochondrially targeted eqFP611
A third way to use eqFP611 as a mitochondrial marker in plant cells is the generation of transgenic plants constitutively expressing mitochondrially targeted eqFP611. To create such plants, the RFP-expression cassette of pIVD145-eqFP611 was cloned into pBI121. The resulting plasmid pIVD145-eqFP611-pBI121 was stably transformed into tobacco by leaf disc transformation. Several independent plant lines were regenerated from transgenic calli and screened for bright red fluorescence in mitochondria. Red fluorescent mitochondria were observed in all T 0 transformants, but expression levels varied between individual plants. In addition, segregation was observed in the next generation. Thus, only the offspring of the most strongly fluorescent T 1 plant was used for propagation (Fig. 8). The transgenic plants completed their life cycle like wild-type plants and the red fluorescence in mitochondria was stably transmitted up to the T 3 generation, the last generation analyzed. No phenotypic differences were observed between the transgenic and wild-type plants. Thus, eqFP611 obviously causes no cytotoxic or other detrimental effects even upon constitutive expression over several generations.
Conclusion
Our results consistently demonstrate that eqFP611 meets all requirements for a potential fluorescent reporter protein for application in plants. It can be expressed in plant cells from the unmodified E. quadricolor cDNA sequence to levels easily detectable by epifluorescence microscopy without any adverse affect on viability. eqFP611 fluores-Mitochondrially targeted eqFP611 and smGFP4 expressed from the same plasmid cence can readily be separated from the red chlorophyll autofluorescence by using appropriate filter sets. Its subcellular localization can be efficiently controlled by N-terminal signal sequences. eqFP611 and GFP are fully compatible in dual-labeling experiments since there is no cross-interference with regard to expression and intra-cellular sorting and their fluorescence spectra can be clearly distinguished.
In addition, the plasmids created in the course of this work are convenient tools for the investigation of the subcellular localization of proteins in plant cells. The constructs encoding IFP fusions proteins with mitochondrial and peroxisomal targeting sequences can be used to express markers for the visualization of the corresponding organelles. The targeting sequences can also be easily exchanged to create new IFP fusions with any protein. Furthermore, all IFP expression cassettes can be transferred by HindIII/EcoRI digestion into the plant transformation vector pBI121 and derivatives thereof. Finally, the tobacco line stably expressing eqFP611 targeted to mitochondria is a useful source for protoplasts with an endogenous mitochondrial marker.
In summary, eqFP611 represents a true alternative to other RFPs and can be added into the tool box of red fluorescent proteins for use in plants.
Plasmid construction/cloning strategy
The eqFP611 wild-type coding sequence (696 bp) was PCR amplified from a respective cDNA clone [13] with primers eqFP611-H 5'-cacccgggatgaactcactgatcaagg-3' (in which the EcoRI site at nucleotide position 4 relative to the start codon was eliminated) and eqFP611-R 5'tcgagctctcaaagacgtcccagtttg-3'. The PCR product was digested with XmaI and SacI and cloned into the respective site in the vector pIVD145-smGFP4 [17], in which eqFP611 replaced the smGFP4 gene. The resulting plas- The plasmid peqFP611 for the expression of eqFP611 without presequence was obtained by excision of the IVD presequence from pIVD145-eqFP611 by BamHI digestion followed by religation.
To study subcellular targeting of two fusion proteins simultaneously, a plasmid carrying two genes for different fluorescent proteins fused to identical mitochondrial targeting sequences (pIVD144-eqFP611-IVD145-smGFP4) was constructed. Briefly, IVD-eqFP611 and IVD-smGFP4 fusions both under control of a CaMV 35S promoter were introduced into the same plasmid in head-to-head orientation separated by a spacer sequence. Both presequences can be exchanged separately by XhoI (eqFP611) and BamHI (smGFP4) restriction digestion, respectively. Cloning details are available on request.
For constitutive expression of eqFP611 and GFP fusion proteins in plants, plasmids suitable for agrobacteriamediated transformation were constructed. To generate pIVD145-eqFP611-pBI121, the HindIII-EcoRI fragment containing the eqFP611 expression cassette was removed from plasmid pIVD145-eqFP611 by cutting with EcoRI and partial digestion with HindIII. This DNA fragment was ligated into pBI121 digested with the same enzymes, which replaces the GUS cassette in this vector.
The vector backbone of psmGFP4 (sometimes also designated psmGFP) has been reported to be based on pUC118 and to contain the sequence ggatccaaggagatataacaatgagt Constitutive expression of mitochondrially targeted eqFP611 Figure 8 Constitutive expression of mitochondrially targeted eqFP611. Protoplasts derived from stably transformed N. tabacum plants constitutively expressing eqFP611 targeted to mitochondria (pIVD145-eqFP611-pBI121). (A) Image taken through MitoTracker filter set. Scale bar: 10 μm. (B) Plasmid used for transformation. The IFP expression cassette is identical with that in Figure 2, but has been inserted into pBI121. [24]. Our plasmid psmGFP4 and all its derivatives deviate from the published configuration in some aspects. Sequencing of pIVD145-smGFP4 shows the sequence downstream of the CaMV 35S promoter to be tctagaggatcctatg...(IVD)... ggatcccgcccgggatg...(smGFP4)... (start codons in bold). PCRs with one primer binding in the vector backbone and the other one in the CaMV 35S promoter or smGFP4 coding sequence in our psmGFP4 clearly show that the multiple cloning site is not orientated like in pUC118 and pUC18 but like in pUC119 and pUC19 (data not shown).
The absence of a 473 bp fragment in a digestion of the plasmid pIVD144-eqFP611 with RsaI (data not shown) rather indicates a pUC19-like instead of a pUC119-like configuration of the psmGFP4-derived vector-backbone.
Polymerase chain reactions
All PCRs were performed with BD Advantage™ 2 Polymerase Mix (Becton Dickinson GmbH, Heidelberg, Germany), Phusion™ High-Fidelity DNA Polymerase (BioCat GmbH, Heidelberg, Germany) or self-produced Taq polymerase, respectively. Amplifications were done in 22 to 35 cycles under conditions recommended by the manufacturer (BD Advantage 2, Phusion). Reactions with selfproduced Taq polymerase were done following standard protocols [25].
All PCR-derived DNA fragments were sequenced after cloning, except the RFP-expression cassette in pIVD144-eqFP611-IVD145-smGFP4. In this case, only the IVD144 mitochondrial presequence was analyzed by sequencing.
Transformation procedures
PEG-mediated transient transfection of protoplasts was essentially carried out as described previously [26]. For transfection with single constructs, 60 μg DNA were used. In case of simultaneous transfection with two separate plasmids, 30 μg to 60 μg of each plasmid DNA were used.
Transgenic Nicotiana tabacum L., cv Petit Havana plants were generated essentially as described elsewhere [27]. Expression of IVD145-eqFP611 in the T 0 , T 1 , T 2 and T 3 plants was followed by fluorescence microscopic analysis of parts of the lower epidermis of leaves.
Agrobacteria-mediated transformation of N. benthamiana by leaf infiltration was performed as described before [23].
Strain GV2260 of A. tumefaciens was used for experiments requiring T-DNA transfer. | 5,409.6 | 2007-06-06T00:00:00.000 | [
"Biology"
] |
Mathematics education , democracy and development : Exploring connections
knowledge is to be maintained and owned by all, then the relations between academic, Western or conventional mathematics and the different mathematical knowledges and practices of different groups and individuals have to be brought into dialogue with each other, to be connected and contextualised. By valuing different kinds of mathematics and ways of knowing (and doing) mathematics, different peoples are valued and respected. Notwithstanding that the playing field of the different mathematics are not level, for mathematics to have a restorative power in situations of conflict, there has to be at the very least, recognition that there are different ways of knowing the world mathematically, which may be relevant, useful and appropriate in different contexts. The enormous power of academic mathematics to cast its gaze on almost any human activity today and re-present or appropriate it through its discourse gives healing and restorative ‘mathematical truths’ a particularly important place in mathematics classrooms. The legacy of colonialism and apartheid, which damaged the growth of indigenous knowledge systems, must be addressed both for its own sake to reclaim lost and hidden ‘mathematical truths’ and also because it provides possibilities for new knowledge, even if defined in terms of academic or Western knowledge systems. The role of ‘acknowledgement’ in restoring dignity lies in the recognition that different cultures on every continent, in different periods of its history, have contributed mathematical knowledge. Acknowledging multiple histories is part of healing. The hegemony of Western or academic mathematics has been challenged for the ways in which conventional histories of mathematics have ignored, marginalised, devalued or distorted the contributions of peoples and cultures outside Europe – of China, India, North Africa and the Arab world – to that mathematics that is referred to as academic or Western mathematics. Joseph (1991) points out that [s]cientific knowledge which originated in India, China and the Hellenic world was sought out by Arab scholars and then translated, refined synthesised and augmented at different centres of learning... from where this knowledge spread to Western Europe. (p. 10) However, Eurocentric historiographies of mathematics have also been criticised from another perspective: for failing to acknowledge the independent histories of mathematics of peoples who have developed their own mathematics, particularly the indigenous peoples of different regions of Africa, America and Australia (Ascher, 1991). A healing and restorative mathematics would therefore be one that recognises the rich mathematical histories of peoples not only in terms of conventional mathematics but on its own terms and its own forms, which may or may not be easily distinguishable as mathematics, and would be dignified by being given a proper space and engagement in mathematical curricula. Recognising multiple ‘mathematical truths’, as well as the processes by which these truths come to be constructed, allows for improved possibilities for the critique of truths in mathematics to be found within mathematics. In particular, these varied forms of ‘mathematical truths’ have the potential
Mathematics education and its links to democracy and development are explored in this article, with specific reference to the case of South Africa.This is done by engaging four key questions.Firstly, the question of whether mathematics education can be a preparation for democracy and include a concern for development, is discussed by drawing on conceptual tools of critical mathematics education and allied areas in a development context.Secondly, the question of how mathematics education is distributed in society and participates in shaping educational possibilities in addressing its development needs and goals is used to examine the issues emerging from mathematics performance in international studies and the national Grade 12 examination; the latter is explored specifically in respect of the South African mathematics curriculum reforms and teacher education challenges.Thirdly, the question of whether a mathematics classroom can be a space for democratic living and learning that equally recognises the importance of issues of development in contexts like South Africa, as a post-conflict society still healing from its apartheid wounds, continuing inequality and poverty, is explored through pedagogies of conflict, dialogue and forgiveness.Finally the question of whether democracy and development can have anything to do with mathematics content matters, is discussed by appropriating, as a metaphor, South Africa's Truth and Reconciliation Commission's framework of multiple 'truths', to seek links within and across the various forms and movements in mathematics and mathematics education that have emerged in the past few decades.
Introduction
There is no doubt that notions of democracy and development are highly contested in themselves and in education; hence, so too would be any exploration of their links to mathematics education.Whilst recent research, theory and practice, has emerged in the literature about connections between mathematics education and democracy and the related issues of equity and social justice, arguably much less has been written about mathematics education and development and related aspects such as poverty.
The four questions addressed in this article were inspired and have evolved from those first posed by Skovsmose (1994) with respect to general education and democracy.They were later reformulated to focus more directly on mathematics education (Vithal, 2003).Both have been framed within mathematics education from critical perspectives.This article brings in a further dimension, that of development, and attempts to connect the triad of mathematics education, democracy and development, through the four key questions, using the case of mathematics education in South Africa.This expansion enables a more explicit engagement with issues that are found across countries like South Africa, with developmental features such as high levels of poverty and inequality, when considering connections between mathematics education and democracy.
The article explores the following four questions: 1. Can mathematics education be a preparation for democracy that includes a concern for development?2. How is mathematics education being distributed in society and thereby shaping educational possibilities? 3. Can mathematics education pedagogy be a space for democratic living and engaging development issues?4. Can considerations about democracy and development in mathematics education have something to do with mathematics content matters?
This article begins, at most, a conversation about notions of development in deepening and broadening understandings of the many facets of mathematics education in a young democratic, post-conflict society.From the many gains and challenges emerging from post-apartheid South Africa, a selection of issues are engaged in each of the questions being posed to reflect on the many aspects of mathematics education in a transforming society.
Mathematics education as a concern for democracy and development
The first question that is posed is: Can mathematics education, by itself and as part of general education, provide an introduction and preparation for democracy, and teach about democracy in ways that contribute to a society's development agenda?This question is discussed by drawing on the conceptual tools of critical mathematics education, in particular the formatting power of mathematics within a 'developmental state'.
Mathematics grows as it addresses questions and problems arising from within its own self-referential system(s), but increasingly it also advances as a discipline as it is applied to a diversity of problems in society, from everyday life to warfare and to poverty.D'Ambrosio (1994), points out the paradox in which mathematics is centrally implicated: In the last 100 years, we have seen enormous advances in our knowledge of nature and in the development of new technologies.
… And yet this same century has shown us despicable human behaviour.Unprecedented means of mass destruction, of insecurity, new terrible diseases, unjustified famine, drug abuse, and moral decay are matched only by an irreversible destruction of the environment.Much of this paradox has to do with an absence of reflections and considerations of values in academics, particularly in the scientific disciplines, both in research and in education.Most of the means to achieve these wonders and also these horrors of science have to do with advances in mathematics.(p. 443) Much of the concern in the developed contexts of the north is with how rapid advances in science and technology are fundamentally changing those societies and how these changes might pose a threat to democracy because of how they might limit the capacity of the electorate to participate meaningfully in understanding and influencing decisions that affect their lives.Another aspect of the concern is with the complexity of the science and technology and having to rely on experts (an 'expertocracy'), in particular the capacity of politicians and decision-makers to fully grasp the implications.In developing countries like South Africa, with 'emerging' or 'young' democracies, a much less literate electorate, lower levels of (science and technology) education amongst politicians and smaller pools of experts, this threat increases manyfold, especially with the global transfer and trade in science and technologies, which usually take place from developed to developing nations in view of their significant development challenges.
The question of how mathematics participates or is recruited in this is explained by Skovsmose (1994), through what he calls the 'formatting power of mathematics': [M]athematics produces new inventions in reality, not only in the sense that new insights may change interpretation, but also in the sense that mathematics colonises part of reality and reorders it.(p.42) The issue here is not that mathematics itself does anything, but rather it is about how mathematics is used by people, institutions or 'agencies' through all types of applications that come to produce and result in a formatting of society.Society today is increasingly mathematised.Keitel (1993;Keitel, Kotzmann & Skovsmose, 1993) has demonstrated the complexity of this through the relationship between social abstractions and thinking abstractions.As mathematics is applied to all sorts of processes, structures, problems and organisations, these in turn change and require further mathematisation.Whilst an increasing amount of implicit mathematics is found in all areas of life today, it requires a more sophisticated mathematically literate person to question the applications within a democracy.Paradoxically, at the same time less procedural mathematics is needed as technology takes over, for example the widespread availability of calculators.
This has serious implications for the mathematics education provided across the education system to strengthen democracy and fuel development.For countries like South Africa, if the notion of the formatting power of mathematics is accepted then it is imperative to educate those who will come to participate in that formatting power and address problems of development as well as those who will need to be able to react to it to ensure that fair, equitable and just solutions are found.It may be argued that a mathematics curriculum has an obligation to produce both 'insiders' and 'outsiders' to the formatting power of mathematics.The insiders are those who come to participate in the formatting (as high performers), be they 'constructors or producers' of mathematics or 'operators or users' of mathematics.The outsiders are those who must read and react to that formatting power as (mathematically literate or numerate) consumers of mathematics or as the marginalised of society (Skovsmose, 2003;Vithal, 2004a).
The recently introduced new Mathematics and Mathematics Literacy curricula for Grades 10−12 in South Africa could be said to roughly match these two requirements (discussed later).What is evident in this line of argument is that access to and competence in mathematics serve very different purposes.Both the presence and absence of mathematics education has real consequences; it is neither neutral nor value free.Mathematics is used in a multitude of ways in society: to predict, control, interpret, describe and explain within a particular cultural, economic and sociopolitical context.
Mathematics for those who will come to participate as the 'formatters' or 'high performers' in positions of power (as government or experts), especially in developing countries with scarce high-level human resources, needs to include critical engagement about development and poverty and integrate an ethics and social responsibility as part of the multiple goals of mathematics education.This is important since a democratic competence cannot be assumed and nor does 'high-level' or abstract mathematics necessarily produce an integrated critical mathematical literacy competence.This is because the thinking tools and language of mathematics do not by themselves provide the full means for criticising its applications in society.Similarly, a mathematical literacy for the majority has to be more than a functional or practical literacy.It is expected to integrate a critical and democratic competence with a mathematical competence if a citizenry is to participate meaningfully in a young democracy and growing economy and be able to grasp the mathematical basis implicit in the decisions taken for or against them.Mathematics needs to be responsive to a diversity of contexts, avoiding a ghettoisation of the mathematics curriculum and yet providing a mathematics that is inclusive of the consumers of mathematics and those who do not enter further and higher education or the world of work.It is not only necessary to know when a problem is dealt with using mathematics, whether the correct mathematics has been chosen, whether the mathematics has been correctly executed and whether the result can be relied on; it is also important to include reflections about how the use of mathematics to solve the problem relates to the broader (social, political, economic) context and its possible consequences, whether the problem could have been solved without mathematics and whether the evaluation or reflections could have been done differently (Keitel et al., 1993;Skovsmose, 1994).
The developmental challenge for mathematics education is not confined to particular parts of the world.It is a global challenge.At the Centennial Symposium of the International Commission of Mathematics Instruction (ICMI), at which mathematics educators reflected on mathematics education of the past hundred years and attempted to map out directions for the future, Setati (2008) proposed a significant new role in contributing to the acceleration and attainment of the United Nations Millennium Development Goals of eradicating poverty, promoting gender equality and universal primary education.She argued that mathematicians and mathematics educators need to work together -from different levels of the education system, in different aspects of research and practice, from different perspectives, and from different parts of the world -to address poverty, injustices, inequity, illiteracy and access to education.In proposing a shift to the developing world, she recommended ICMI studies of mathematics education within contexts of poverty, multilingualism and multiculturalism and a focus on mathematical literacy.Supporting development goals through general education in order to deepen and strengthen democracy is well understood, well known and accepted.However the key role mathematics education has in this, and how it can provide an introduction to democratic life and values and thereby to a better life for all, is arguably less so.
Within developing and poorer countries, mathematics education has an explicit and critical role.Lubisi (2008) spells out the direct connection between mathematics education and notions of the 'developmental state' as it is being deliberated in South Africa.For him, mathematics education is the key to empowering people with knowledge and skills that are necessary to reach the targeted economic growth rates, to create employment and to fight poverty.He argues that analyses of most of the skills areas of the economic sectors that are being targeted to ensure that this growth is achieved require mathematics.In particular, the shortage of artisans and technicians in South Africa, that is, those who are involved in various kinds of applications of mathematics as the 'users or operators', has become evident and needs to be taken seriously.Therefore, strengthening mathematics (and science) teaching in schools is important in order to reach development goals and the needs of economic growth.
For developing countries, improving the basic conditions of peoples' lives, including schooling and the quality of all aspects of mathematics education is crucial to sustaining democracy.
Mathematics education provides not only access to mathematical knowledge and skills, which is important for living in the 21st century, but in many countries, performance in mathematics determines access to jobs and further or higher education studies in a range of areas, from the natural and physical sciences to economics and technology.It is for this reason that mathematics is on the one hand regarded as a gateway subject to a large number of these high-status, highpaying professions, but on the other hand also functions as a gatekeeper for the many who fail to learn and perform at the requisite levels or are failed by the education system.In this respect, mathematics education functions implicitly to stratify society.How it does this is important to analyse if it is to be addressed so as to open more and better life opportunities for all students, whatever role they come to fill as producers, users or consumers of mathematics.As a high-stakes subject, it is not surprising that that there is much concern about who gets access to and performs well in mathematics and who gets excluded.On the day of their birth, Nthabiseng and Pieter could hardly be held responsible for their family circumstances: their race, their parents' income and education, their urban or rural location, or indeed their sex.Yet statistics suggest that those predetermined background variables will make a major difference for the lives they lead.Nthabiseng has a 7.2 percent chance of dying in the first year of her life, more than twice Pieter's 3 percent.Pieter can look forward to 68 years of life, Nthabiseng to 50.Pieter can expect to complete 12 years of formal schooling, Nthabiseng less than 1 year.… Nthabiseng is likely to be considerably poorer than Pieter throughout her life.… Growing up, she is less likely to have access to clean water and sanitation, or to good schools.
So the opportunities these two children face to reach their full human potential are vastly different from the outset, through no fault of their own.
Such disparities in opportunity translate into different abilities to contribute to South Africa's development.… As striking as the differences in life chances are between Pieter and Nthabiseng in South Africa, they are dwarfed by the disparities between average South Africans and citizens of more developed countries.Consider the cards dealt to Sven − born on that same day to an average Swedish household.His chances of dying in the first year of life are very small (0.3 percent) and he can expect to live to the age of 80, 12 years longer than Pieter, and 30 years more than Nthabiseng.He is likely to complete 11.4 years of schooling − 5 years more than the average South African.These differences in the quantity of schooling are compounded by differences in quality: in the eighth grade, Sven can expect to obtain a score of 500 on an internationally comparable math test, while the average South African student will get a score of only 264 − more than two standard deviations below the Organisation for Economic Cooperation and Development (OECD) median.
Nthabiseng most likely will never reach that grade and so will not take the test.(pp.1-2) The above narrative is reproduced in detail to illustrate how mathematics education operates as a system to open or limit educational possibilities for students.It also points to the necessity for locating mathematics education research, practice, policy and theory within a broader landscape in relation to other aspects that impact and shape (mathematics) educational opportunities.It shows the need to link mathematics education to multidisciplinary development studies, addressing issues of concern to developing countries, and especially to social and economic development.
South Africa is a useful case as a young democracy of almost two decades within which four waves of curriculum reforms have occurred, and in which mathematics educators, mathematicians, and a range of other 'stakeholders' have had the possibility to participate in shaping the mathematics curriculum.It has been observed that there are two main imperatives driving and shaping curriculum debates in South Africa: one is the post-apartheid challenge for greater equity and social justice, to redress decades of deliberate inequalities and to entrench and deepen democratic life; and the other is the global competitive and development challenge to provide opportunities to learn and access knowledge and skills to participate effectively in the internationalised and globalised economy of the 21st century (Vithal & Volmink, 2005).
The development and democracy challenges for mathematics education are captured in the lives of Nthabiseng and Pieter.Despite each successive Minister of Education effecting some or other official curriculum change since the advent of democracy, with incremental improvements in school infrastructure and resourcing, Nthabiseng and Pieter continue to experience very different and substantially unequal implemented and attained mathematics curricula.However, in recognising the formatting power of mathematics, this discussion points to the potential of mathematics education to transform both Nthabiseng's and Pieter's lives in contributing to strengthening both democracy and development imperatives of South Africa.
Mathematics education, its distribution and educational possibilities
The second question is: How is mathematics education, in terms of mathematical knowledge, skills, values and attitudes, distributed in society and thereby shaping educational possibilities?This question is engaged through South African learners' mathematics performance in the much-publicised international studies and the national Grade 12 examination results.Possible explanations for these are then explored in terms of the mathematics curriculum, its recent reforms and related teacher education challenges.
One of the ways in which the distribution of mathematics education is made visible and public is through international studies of student mathematics performance and national tests and assessments.In the public imagination, shaped by the media, mathematics education is reduced to league tables of student mathematics performance scores.South Africa's repeated ranking at the very bottom of international studies of student mathematics performance and equally poor outcomes in the annual high-stakes national Grade 12 matric examination results, when each are released, follow with endless speculation about the reasons and causes of South Africa's continued poor mathematics performance.
The mathematics performance of Nthabiseng and Pieter and their consequent educational possibilities and life journeys allude to the deeply unequal conditions of schooling and opportunity to learn which have endured almost two decades since the advent of democracy.Inequities in the quality of South African schooling and living conditions are reflected in the test and assessment outcomes.However, an aspect that has not received much public attention is whether these studies, tests and assessments do indeed offer an accurate account of the mathematics knowledge and skills of learners.If all learners are deemed to have some mathematical knowledge by virtue of having lived in particular communities or cultures, as ethnomathematics for example argues, what do the tests in fact reveal, if anything, about what mathematics learners know and understand?
These large-scale assessments, which are costly to mount, are often driven as much by political imperatives as they are by educational ones and conducted within funding and other constraints.Methodological issues about the language in which the tests are conducted, familiarity with the format of the test items and the reliance on only paper and pencil assessments are seldom discussed publically to qualify the outcomes and findings.Even though these international studies have long come under criticism by mathematics educators and the use and misuse of their results cautioned against (e.g.Kaiser, Luna & Huntley, 1999), they have been latched onto by governments, including South Africa's, and used to introduce wide-scale national testing regimes.In the UK, which has a history of national testing, studies based on these national tests to explain leaner performance demonstrate the caution with which these results need to be interpreted.Cooper and Dunne (2000) showed, by comparing test and interview data, that many children fail to demonstrate in tests mathematical knowledge and understanding that they actually possess.They showed learners' confusion over the requirements of 'realistic' test items as compared to 'esoteric' items, and how this was particularly the case for children from working-class backgrounds.This is an important finding, relevant to the new South African school curriculum, which foregrounds applications and context in mathematics, as the newly introduced annual national assessments are being implemented at all levels of the education system.It raises serious questions about what the assessments really reveal and about whom or what.
The point here is not to suggest that testing should not take place, but it is necessary to understand how such test outcomes have become a public window to the mathematics classroom and have come to generate a particular discourse about the distribution of mathematical knowledge to which politicians and policymakers are particularly responsive.The outcomes of such studies need to be qualified with reference to issues of methodology and contextualised historically, taking into account the sociopolitical, economic, urbanrural and cultural dynamics.In the Trends in International Mathematics and Science Study, which yet again confirmed South Africa's poor performance, Reddy (2006) categorised learner performance scores according to the previously racially segregated schools and showed, not surprisingly, that despite their levels of resourcing, the former White schools (Model C) performed only at the international average, whilst former African schools performed at half the average of the White schools.African schooling has come under intense scrutiny, has had a myriad of interventions and is being researched for its lack of effectiveness.White schooling, however, has remained outside the research gaze and has not been interrogated, for example, for its failure to exceed international averages given their disproportionate share of considerable resources and other advantages.Much less is known about why Pieter is not performing as well as Sven, given comparable educational contexts and advantages.
Another very public lens on the distribution of mathematical knowledge and skills is the high-stakes annual Grade 12 mathematics matric examination, written by some half a million students each year, which plays one of the most important direct roles in apportioning further educational opportunities.These results are released at the end of each year amidst much fanfare and commentary.
We could well ask: What are the chances Nthabiseng and Peter would be amongst those to have studied and passed mathematics?Issues of race and gender have been foregrounded in post-apartheid South Africa in considerations of who gets access to higher education.Analyses by Kahn (2005) demonstrate a steady increase in the numbers of African students studying and passing mathematics at the Higher Grade (HG), which provides eligibility to access university.The 1991 figures have increased tenfold by 2005, with just under 10 000 African students successfully passing HG matric mathematics.This is, however, a miniscule proportion relative to both the numbers of African learners that enter schooling in Grade 1 and those who make it to Grade 12 each year.According to Kahn (2005) 'the white community generates science based skills at something close to saturation level' (p.142).Pieter is assured of a pass in mathematics to enter university but what of Nthabiseng?
Although overall gender differences in participation and performance in matric mathematics are not significant, when race is intersected with gender, major differences are found between African females and other female candidates.Restricting analyses of performance to only gender has been found to mask large disparities in matric HG mathematics pass rates for African females, which in 2002 were found to be only a quarter of the pass rates of White females (Centre for Development and Enterprise, 2004).It is possible to speculate that even amongst her White, Indian and Coloured sisters, Nthabiseng has had the system odds stacked against her passing mathematics to secure entry into higher education.The issue of race, however, is a vexed one and the Department of Basic Education, in releasing matric performance statistics, has in recent years reported on gender but not race data.Notwithstanding the dangers and arguments against entrenching apartheid constructed racial categories, this analysis demonstrates that such information is crucial to developing appropriate and targeted interventions that can impact the most marginalised in society.Being able to identify the Nthabisengs of South Africa for redress to ensure she does not continue to carry a disproportionate burden of apartheid's damage.
The South African national mathematics curriculum underwent a major reform for the Grade 10−12 band in which all Grade 12 students from 2008 wrote either a Mathematics or a Mathematical Literacy examination.These changes were made to address two main problems that directly relate to the mathematical knowledge distribution and education opportunity challenges.The first problem was that although the numbers taking mathematics (whether HG or SG [Standard Grade]) in the period from 1995 to 2007 had risen to above 60%, much of this increase was due to an increase in SG mathematics enrolments but it was a pass in HG mathematics that typically gave entry to higher education opportunities.The numbers studying and passing the sought-after and much-needed HG remained low and appeared to have reached a ceiling that remained below 30 000.The mathematics taught at SG, which was largely procedural and excluded key topics that were found in the HG curriculum, did not provide the kinds of mathematical competencies needed for further higher education studies to fuel the high-level science, economic and technology needs of the country and sustain the supply of constructors or producers of mathematics or indeed even the users or operators of mathematics (Skovsmose, 2003;Vithal, 2004a).A second problem that contributed directly to inequities in the distribution of mathematical knowledge and skills was that at least 40% of matriculants each year did not take any mathematics at all and hence were not taught any mathematics in schools between Grades 10 and 12.This meant they were not provided with even the competence to be consumers of mathematics that a citizenry in a democratic South Africa of the 20th century should at least have acquired through schooling.Mathematical Literacy was intended for this group.
One of the difficulties that the new Mathematics and Mathematical Literacy curricula have faced is a lack of consensus and clarity about what each of these are, their relation to the previous HG and SG mathematics curricula and their relation to each other.As a new subject, Mathematical Literacy faces a particular difficulty in escaping the image of a practical or functional lower order mathematics (for those who were deemed incapable of or uninterested in doing mathematics) rather than being conceptualised as a different integrated contextualised competence.Deriving from a broader 'mathematics for all' movement, it has been variously labelled within policy, theory, research and practice as numeracy, quantitative literacy (Steen, 2001) and mathemacy (Skovsmose, 1994), amongst others.This points to different ideological orientations, intentions and goals of such a mathematics, which extends from a concern with acquiring basic mathematics to a sophisticated critical integrated competence (Vithal, 2004a).The most recent South African Mathematical Literacy curriculum policy describes some of the key elements as involving 'the use of elementary mathematics', 'authentic real-life contexts', decision-making and communication and 'the use of integrated content and/or skills in solving problems' so that they become 'participating citizens in a developing democracy' and 'astute consumers of mathematics in the media' (Department of Basic Education, 2011b, pp. 8-10, [emphasis in original]).
On the introduction of the new curricula, fears that large numbers of students would choose Mathematical Literacy rather than Mathematics were not fully founded.The numbers who followed the new Mathematics curriculum in 2008 was just over half of all matriculants.However, the more troubling observation since then is that the number choosing Mathematics has decreased year on year.From a high of 300 000 taking Mathematics in 2008 the number had decreased by 25% in 2011(Department of Basic Education, 2011a).Although many more students are deemed to be succeeding in Mathematics, the number passing at 40% relative to those studying Mathematics have stabilised at a low of about 30% whilst the number passing at 30% has hovered between 45% to 47% (Department of Basic Education, 2011a).The new Mathematics curriculum (Department of Basic Education, 2011c), it would appear, is being differentiated by assessment rather than by content and levels of difficulty.These changes have, however, increased the numbers of students eligible for entry into university and opened another debate about their readiness to pursue and succeed in higher education programmes.
Which learners are allowed to do mathematics and the quality of mathematics education learners receive in school are shaped by many factors.Although all secondary schools must now offer Mathematical Literacy and system-wide interventions have taken place for its implementation, the same has not been obtained for the delivery of the new Mathematics curriculum, which is much more demanding than the previous SG curriculum and is much more application oriented, with several new areas and topics, compared to the former HG Mathematics curriculum.The new Mathematics curriculum has been implemented in a context in which only half of all secondary schools who previously offered mathematics offered it at the HG (Centre for Development and Enterprise, 2004).This means that the opportunity to learn mathematics is limited in real terms for those learners who find themselves in schools that do not offer the new curriculum or, where it is offered, do not have appropriately educated and trained teachers to deliver it.We could ask: In which schools are Pieter and Nthabiseng likely to find themselves and what teachers are they likely to encounter?By all accounts, teachers are critical to the delivery of any curriculum.From this perspective the success or failure of the new Mathematics curriculum hinges on the question of what further education and training provisions are being made at a system level for the substantial cohort of teachers that were teaching the 300 000 SG Mathematics students in 2007 and were then required from 2008 to deliver a new, different and more demanding Mathematics curriculum.The new official intended Mathematics curriculum has been found to compare well internationally, regarded to be as difficult as or more so than the previous HG Mathematics and as embodying best practices and knowledge about pedagogy and content that should go some way toward preparing students for the scientifically, technologically and mathematically advancing society of the 21st century.It is, however, in the implemented curriculum, at the school and at the classroom level, that the challenges are to be found.No doubt many different kinds and levels of resources and infrastructure are needed for the successful implementation of the new curricula; however, the main lever is the quantity and quality of competent and confident teachers who can deliver the new Mathematics curriculum and thereby shape South Africa's democratic ideals and contribute to its development goals.
To understand the extent of this challenge, of mathematics knowledge and skills of teachers, it is necessary to appreciate the historical legacy of mathematics teacher education from a system point of view.In South Africa, through the deliberate underdevelopment of apartheid, the education system has inherited a substantial core of teachers with diplomas as opposed to degrees, and an uneven preparation in the core content knowledge of mathematics.This legacy remains intact and must be addressed for any radical break with the past and for substantial improvements in providing learners with adequate and appropriately qualified mathematics teachers to acquire the kinds the mathematical knowledge and skills the official curriculum promises.
The magnitude of the task relates to both the issue of supply of new teachers and the continuous education of existing teachers.In the past decade, the number of students seeking to become senior secondary teachers of mathematics has not kept pace with demand as teaching is unable to compete with the status, remuneration and prestige of other expanding career options in science and technology, given the small pool of successful candidates in matric mathematics.This problem may acerbated the policies introduced in the late nineties to redistribute teachers, which resulted in a number of qualified mathematics teachers exiting the system.Furthermore, there has been no systemic state intervention for upgrading mathematical content knowledge at these higher levels, for example, systematically targeting all former mathematics teachers who were only able to teach SG mathematics.More than a decade since the first mathematics teacher audit (Arnott, Kubeka, Rice & Hall, 1997), approximately 20% of Grade 10-12 mathematics teachers are professionally unqualified and of those that are qualified, still only 21% have some university level courses (Parker, 2010).There is also evidence to suggest that qualified mathematics teachers in the system are either not teaching mathematics or not teaching it at the level at which they are qualified (Parker, 2010;Peltzer et al., 2005), for a number of different reasons, showing that a limited and scarce resource is being poorly utilised.
In South Africa, every new minister of education since 1994 has introduced curriculum reforms, resulting in several waves of curriculum changes.In this context, the fragile and weaker parts of the system are more likely to become dysfunctional.Teachers require time to be inducted into new or changing content and pedagogy.For teachers who may be struggling with mathematical content knowledge, forms of assessments and their associated pedagogical reforms, this may acerbate the problem especially in poorer and underresourced schools and classrooms, the kind of classroom in which Nthabiseng is likely to find herself.A vicious cycle persists if a curriculum reform is evaluated too soon, when it is more likely to show a dip in performance areas as the new curriculum is still bedding down; and if further changes are introduced before they are thoroughly understood and institutionalised.It is in this respect that stability in the official curriculum is crucial so that teachers are given a chance to interpret and give effect to the curriculum.In this context, it is assumed that other foundational infrastructure is in place, such as adequate and timely provision of core mathematical teaching and learning resources, for example, appropriate quality textbooks relevant to and necessary for each curriculum reform.
Attempting to increase and better distribute educational opportunities for many more learners to effectively break the glass ceiling of mathematics performance, particularly for those at the margins, in the most impoverished parts of the schooling system, requires a targeted, systemic and systematic long-term mathematics teacher development intervention, a stable curriculum policy environment, and, at the very least, a critical level of resourcing and schooling infrastructure for the mathematics education system to function.
Mathematics education pedagogy as democratic living and engagement with development
The third question is: Can democracy in mathematics education refer to the very life of a mathematics classroom, learning democratic values, democratic attitude and democratic competence in a context that recognises and seeks to address issues of development?This question is explored through a discussion on pedagogies of conflict and dialogue, and of forgiveness.
The new mathematics national curriculum policy reforms in all their different waves in post-apartheid South Africa, take as their point of departure quite explicitly the new constitution and provide the imperative for teachers to explicitly make connections between mathematics and the real world.The question of whose world gets selected, by whom and for what purpose in a mathematics classroom then becomes important.Teachers make selections of content, context and pedagogy and realise different kinds of actual and hidden curricula, for instance, in choosing to teach about mathematical modelling through the context of HIV and AIDS or some other development challenge or inequity in society.
In his study on teaching and learning mathematics for social justice in an urban Latino school, Gutstein (2003) showed how mathematics can be taught in a way that develops learners' sociopolitical consciousness and sense of agency, develops positive social and cultural identities through a classroom pedagogy that assists them to 'read the world (understand complex issues involving justice and equity) using mathematics' (p.37), develops mathematical power in the ways in which they do and think mathematically, and thereby changes their disposition and orientation toward mathematics.Much of the foundation for this kind of pedagogy was laid in the eighties and early nineties and has spawned a diverse literature in mathematics education describing and analysing activities and theoretical ideas that explored a political mathematics education (e.g.Mellin-Olsen, 1987) or critical mathematical literacy (e.g.Frankenstein, 1987).It has continued in different forms and endured to the present in debates particularly about equity, quality and social justice in mathematics education (e.g.Atweh, Graven, Secada & Valero, 2011) An overtly political approach to mathematics education also has early roots in South Africa.'People's mathematics for people's power' was a part of the broader phenomenon of the People's Education movement that arose during the apartheid era, which viewed schools and classrooms, including the mathematics classroom, as important sites for the struggle against apartheid.A number of mathematics educators engaged these early ideas in their teacher education programmes at the time (Adler, 1991;Breen, 1986;Julie, 1991).Suffice it to say here, not surprisingly there was deep contestation and resistance as any historical account of the people's mathematics movement demonstrates (Vithal, 2003).However, as a society still struggling with deep inequalities and continuing injustices, the question of whether mathematics education can participate in moving us toward more humanitarian goals -democracy, equity, social justice, non-racism, non-sexism -is as relevant today as, or perhaps more so than, it was all those years ago.The new South African mathematics curriculum provides a policy space for such engagement but the question of teachers' implementation of such a pedagogy remains open.
It was with this explicit concern and ideological orientation that I undertook my doctoral study in the mid-nineties (Vithal, 2003) and from which this section of the article draws in pointing to a pedagogy that emerged from empirical work, and which I reflect on and extend.I explored the question of what happens in mathematics classrooms when student teachers attempt to realise what could be called a social, cultural, political approach to the mathematics curriculum that integrates a critical perspective in practice.Although the student teachers were introduced to diverse practices related to this broad approach, the dominant curriculum practice engaged by them was that of project work (Vithal, 2004b(Vithal, , 2006)).The particular conception of project work employed was one that is well developed within the Scandinavian context (Olesen & Jensen, 1999) and widely implemented and researched in mathematics teaching and learning from primary to university education (Christiansen, 1996;Nielsen, Patronis, & Skovsmose, 1999;Niss, 2001;Vithal, Christiansen & Skovsmose, 1995).It specifically seeks to develop a critical perspective through an approach that is problem orientated, interdisciplinary and participant directed.By choosing exemplary problems of societal relevance to investigate, learners develop both knowledge and skills and the means for critiquing that very knowledge and skills.
An in-depth study and detailed description of how one student teacher, Sumaiya, taught mathematics in a Grade 6 mathematics classroom of African and Indian learners, in a school located in a predominantly Indian suburb, enabled an interrogation of the theory and practices advocated in the literature that seek to construe mathematics classrooms and schools as spaces for enacting democratic life.The student teacher, class teacher and learners engaged in groups a range of projects that required enacting democratic life as they dealt with issues of development -from creating a mathematics newsletter and questioning the school's use of their school funds and provision of facilities to the inherent inequalities and gendering of time for mathematics homework.Sumaiya brought a deeply reflective perspective as she grappled with introducing what were considered radical ideas about teaching mathematics in which democratic approaches were engaged in dealing with real micro local problems of development selected by learners.
The study generated a thesis of a pedagogy of conflict and dialogue embedding five dual-concept themes: freedom and structure, democracy and authority, mathematics and context, equity and differentiation, potentiality and actuality.These concept pairs capture the multifaceted and multidimensional nature of mathematics classrooms that choose to engage matters of democracy and development in this direct way.Conflict and dialogue and the dualconcepts are themselves explained as being in a relationship of complementarity, a complex relationship of cooperation and opposition.Drawing specifically on the interpretation offered by Otte (1990), complementarity allows one to see concepts (such as object/content and tool) firstly as woven together, each presupposing the other, where the one cannot be defined or described without the other, and secondly as contradictory to each other, opposing each other, where the one does not directly show itself in the other.The mathematics classroom in this framing is seen as a functional whole, not only in the school but also linked to the broader societal setting in which it is located.It is a space fraught with conflict and contradictions, but also containing all the possibilities and hope for their engagement and potential for resolution.
In a pedagogy of conflict and dialogue, in particular, not only are conflict as content and dialogue as tool in a relation of complementarity, but there are also complementarities within each.If mathematics classrooms are to be spaces not only for learning about democracy but also for enacting democratic life, then conflicts and crises of society will become part of classroom life and dialogue is needed, even as it is resisted.Dialogue as an epistemic or didactic tool (Mellin-Olsen, 1993) to deal with different kinds of (knowing/knowledge) conflicts functions to provide a better understanding not only of each other but also with each other.For Mellin-Olsen, dialogue is not a search for consensus or compromises as much as it is a search for deeper insight with the partners of the dialogue.
The disagreement or conflict has to be engaged in a way that does not destroy the dialogue, which creates a paradox: 'confrontation and disagreement … have to be developed in a context of agreement and co-operation' (Mellin-Olsen, 1993, p. 256).Hence, such a mathematics education legitimates not only engaging with different kinds of knowledge conflicts but also learning to dialogue and about dialogue as a method of confrontation and cooperation.
Both conflict and dialogue are needed in a pedagogy that attends to issues of democracy and development from a critical perspective.In South Africa, Nthabiseng and Pieter may well find themselves in the same mathematics classroom.However, they are arguably more likely to find themselves in a school where one or the other dominates: We could imagine a pedagogy of conflict without dialogue degenerating into anarchy and chaos or dictatorship.In current actual situations in South African schooling [more likely Nthabiseng's school] we have seen pupils' expression of dissatisfaction with the school, its authority, structures and dimensions of differentiation, expressed through violent means and then curbed through enacting stronger forms of autocracy rather than democracy.… Here, we can see how a pedagogy of dialogue is essential to a pedagogy of conflict especially if democracy, freedom, context and equity are to be valued in schools.
We could also imagine a pedagogy of dialogue without a pedagogy of conflict reduced to benign endless rounds of entertaining, interesting safe talk and action.Teachers in more advantaged schools [more likely Pieter's school] could talk about the inequalities and injustices brought about through apartheid in a pedagogy of dialogue without conflict.The pupils in this pedagogy could never really come to make connections between the apartheid past and the present, or question or act on the conflicts immediately around them.… A pedagogy of conflict and dialogue means therefore that each, conflict and dialogue, presuppose the other in a mathematics curriculum approach that seeks to focus on social, cultural, economic and political aspects of society.They are separate and each must be developed independently, conflict as content and dialogue as tool.But they are also connected, and therefore must be realised in relation to each other in a classroom.(Vithal, 2003, p. 356) In a mathematics education that embeds a critical perspective, there is no doubt a level of risk and uncertainty that attempting a pedagogy of conflict and dialogue invokes, especially when the inequalities and injustices do not reside in some distant place or time but are embodied in the very students and teachers in the classroom and each is somehow seen as implicated as 'victim' or 'perpetrator'.Even though not all mathematics can be taught with reference to context, given its abstract nature, creating some spaces in the curriculum for critical societal issues of development, diversity, equity and social justice must be argued for.Yet it is likely to be resisted in much the same way as People's Mathematics was, though for different reasons, by different parts of the schooling system.Schools in general and mathematics classrooms in particular need to become spaces for learning about and through democracy or we risk repeating past failures.That so many members of the White community in South Africa continue to claim today not to have known about the huge suffering of Black people perpetrated in their interest can be analysed as a most serious failure of White schooling and mathematics education.That young White learners who come through White schooling today fail to appreciate their positionality and privileged inheritance from the injustices of the past continues to be a failure of White schooling.The 'whiteness' of White schooling is to be understood in terms of not only its demographics, but also its rituals, rules and traditions that enculturate its members into a culture of 'whiteness' and has remained largely outside the mathematics educational research gaze.Much of the attention has been focused on the deficits of Black education and much less on the pathologies of 'whiteness' in how it is being reproduced or transformed in post-apartheid South Africa.Not surprisingly a pedagogy of conflict and dialogue is less likely to be enacted in diverse settings, where it can be a dangerous if not painful path to tread, where White teachers and learners cannot escape being seen as 'perpetrators' and Black teachers and learners cannot escape the feelings of 'victimhood' and suffering.
It is in this respect that a pedagogy of conflict and dialogue must also integrate a pedagogy of forgiveness (Waghid, 2005).A mathematics that reveals inequities and injustices of the past or present is likely to produce feelings of resentment and hate.In such contexts, as Waghid (2005, p. 226) notes, 'learning about forgiveness can become useful in enhancing pedagogical relation' and when teachers and learners 'cultivate forgiveness', it becomes a way to 'engender possibilities whereby people are attentive to one another' and can engage 'imaginative action' to move forward.A pedagogy of conflict and dialogue for a mathematics education for equity and social justice invariably opens wounds so that the 'truth' can be known, even relived, and understood.Each learns by being in the place and experience of the 'Other'.But if such a pedagogy is not to run the risk of deepening divides and difference then it must provide a means to heal.A pedagogy of forgiveness integrates into conflict and dialogue, a point of hope and creative action.The principle of hope, Skovsmose (1994) argues, needs to be preserved in a critical mathematics education.It is not surprising that 'forgiveness pedagogies' have emerged and are being engaged and studied within educational settings in societies that have had histories of political conflict and trauma (Zembylas & Michaelidou, 2011) as part of processes of reconciliation.Such pedagogies are needed in any mathematics education enacted with a consciousness for issues of development and deepening democracy.
Both a pedagogy of conflict and dialogue and a pedagogy of forgiveness take their bearing from South Africa's own post-apartheid processes of the Truth and Reconciliation Commission (TRC) (1998).In doing so, these pedagogies require the creation of spaces for 'truth' to be told so that reconciliation can occur, and mathematics by its power and status in society opens a unique and special way for such truths to be told.Only then can dignity be reclaimed, compassion shown and respect and friendship built.Critical, feminist and social justice mathematics pedagogies have sought to mobilise the power of mathematics knowledge and skills to overt political and social agendas.But in order for restoration, healing and peace to emerge, such pedagogies will have to attend not only to mathematics education pedagogies but also to mathematical knowledge itself.
Democracy, development and mathematics content issues
The fourth question is: Can mathematics education, democracy and development have something to do with mathematics content matters?This question is discussed by appropriating, as a metaphor, South Africa's TRC's multiple notions of truth in seeking a similar framing for linking the various forms and movements in mathematics and mathematics education that have emerged in the last few decades.
The myth that mathematics and mathematics education are neutral and value free has long been exploded.D'Ambrosio (1994) implicates mathematics in both the beauty and the devastation brought by advances in science and technology and raises serious questions for mathematics education for the 21st century.Ethnomathematics, critical mathematics education, mathematics for equity and social justice are areas of study and practice that have grown rapidly in the last few decades, forcing a re-examination of what constitutes and counts as mathematical knowledge, questioning how it has been and continues to be produced and legitimated, raising issues about who is recognised for its production and problematising mathematics curricula for their purpose, relevance and appropriateness for different groups in society they are intended to serve.
These different orientations to mathematical content, which appear discrete and disconnected, may be brought into relation with each other by appropriating, as a metaphor, a framework from South Africa's TRC (1998).The TRC enabled South Africans in post-apartheid society to confront the truth about apartheid by acknowledging, legitimating and validating multiple forms of 'truth'.It may be similarly proposed that there are different forms of 'truth' constituting mathematics and that engaging more than one 'mathematical truth' in a mathematics classroom, especially in a pedagogy of conflict and dialogue and of forgiveness, is necessary for both learning and reconciliation to occur.
The TRC dealt with the complexity about what constituted truth and whose truth by developing a conceptual framework comprising four notions of truth: factual or forensic truth, dialogue or social truth, personal or narrative truth, and healing or restorative truth.Each of these truths in turn provides a means for presenting recent challenges to mathematical content questions as multiple forms of 'mathematical truths'.This framework is useful to engage questions of what is taken to mean and count as 'mathematical truth' and whose 'mathematical truth' is privileged in mathematics education.It allows for a way of bringing divergent notions of mathematics content that have emerged into a single framing that enables these to coexist.
The first kind of truth recognised in the TRC (1998, p. 111) was forensic or factual truth based on 'objective information and evidence' that could be corroborated.This was truths that could be validated through impartial objective procedures.They were considered 'scientific truths' which utilised 'empirical processes' and were also regarded as 'legal truths'.
In mathematics education, for many there is only one objective mathematics variously described as academic mathematics or school mathematics, a canonical mathematics knowledge, free of context or social, political or cultural bias and unambiguously identifiable and articulated in the official curriculum at all levels of formal mathematics education.'Mathematical truths' are in the main those truths that can be proved and there are universally agreed ways for establishing these truths.It is a powerful mathematics underpinning and manifest in much of the material, technological, scientific and social world today.This conventional mathematics is discernible through its own signs and symbols and its own discourse even when written in different natural languages.
However, this mathematics has increasingly been challenged, paradoxically referred to as 'Western' mathematics and seen as a product largely of Western culture.It is regarded as a paradox since many nationalities and cultures have and continue to contribute to its development.For Bishop (1990, p. 51), mathematics, 'as one of the most powerful weapons in the imposition of Western culture', has participated in 'the process of cultural invasion in colonized countries' through at least three agents: trade and commerce, for example, units, numbers, currency; mechanisms of administration and government, for example, computation systems; and imported systems of education, for example, mathematical curricula for the elite few.Despite being seen as most outside the influence and realm of the social or cultural, this mathematics is deeply implicated in the distribution and enactment of political power.This is also one of the major criticisms launched by ethnomathematics.Powell and Frankenstein (1997) outline the main goal of ethnomathematics as challenging the particular ways in which Eurocentrism permeates mathematics education [in] that the academic mathematics taught in schools worldwide was created solely by European males and diffused to the Periphery; that mathematics knowledge exists outside of and unaffected by culture; and that only a narrow part of human activity is mathematical.(p. 2) Bishop (1988) concludes that mathematics must now be understood as a kind of cultural knowledge.… Just as all human cultures generate language, religious beliefs, rituals, food producing techniques, etc., so it seems do all human cultures generate mathematics.(p.180) Although mathematics can be thought of as a cultural product generated by different cultures in different social, political, economic environments, this does not mean that the forms such mathematical knowledge take are completely indistinguishable from each other.Bishop identified five fundamental activities that he argues are universal across all cultures that have been studied: counting, locating, measuring, designing, playing and explaining.All cultures have, for example, developed systems for counting, but how these are organised, the number words and symbols used differ and are tied to the contextual needs and conditions of different peoples.Numeration systems in Africa range from a few number words of some San people who live in desert areas to complex systems developed by those who have a long history of commerce, such as the Yoruba of Nigeria who have been urbanised farmers and traders for many centuries before colonialism and use a vegesimal system that requires both addition and subtraction to express a number, for example, 525 = (200 × 3) -(20 × 40) + 5.The new South African mathematics curricula explicitly recognise mathematics as a cultural product and this has relevance for how access is provided to academic mathematics and also for valuing the different mathematics that learners bring into the classroom by virtue of the knowledge and skills they acquire from their community and life experiences.D'Ambrosio (1985), as a founder of ethnomathematics, has argued that there are many mathematics, of which academics mathematics is but one, and these mathematics are developed by different sociocultural groups -from engineering mathematics to the mathematics of basket weaving.For him, 'Mathematics … are epistemological systems in their sociocultural and historical perspectives' (D'Ambrosio, 1991, p. 374): This is a very broad range of human activities which, throughout history, have been appropriated by the scholarly establishment, formalised and codified and incorporated into what we call academic mathematics.But which remain alive in culturally identifiable groups and constitute routines in their practices.(D'Ambrosio, 1985, p. 45) A mathematics education TRC, if it were to be held, would from the evidence led by ethnomathematicians point to a second kind of truth conceptualised by the TRC, that is, social or dialogue truth.Whilst 'the first (truth) is factual, verifiable and can be documented and proved', social or dialogue truth according to the TRC, is 'the truth of experience established through interaction, discussion and debate ' (p. 113).This kind of truth acknowledges the importance of participation, of listening carefully, and in which 'all possible views could be considered and weighed one against the other' (p.113).The TRC argues that social truths established through dialogue promote transparency and democracy as a basis for affirming human dignity and integrity.The process of establishing the truth is as important as the truth itself.
If mathematics is understood as a cultural activity and product then it follows that different groups in society come to develop different kinds of mathematics to deal with problems and needs they face, whether or not they refer to this as conventional mathematical knowledge.Moreover, the process by which a 'mathematical truth' comes to be established as a truth is as important as the truth itself, if we follow this notion of the TRC truth.In this regard, it is possible to refer to two broad areas: the mathematics of traditional societies, of indigenous peoples in both the developed and developing worlds, and the mathematics of different social and cultural groups in societies of today, of adults and children.These could be deemed 'dialogue or social truths' constituting a form of 'mathematical truths' that have come to be established over time by particular peoples or communities.
Although mathematics as a category is often not found in traditional or indigenous cultures, those who study mathematics in such contexts draw on a range of methodologies and disciplines, such as anthropology, archaeology, history, linguistics, economics, art, literature and oral traditions.Ongoing ethnomathematics research in particular has demonstrated that a wide variety of mathematical ideas are found in traditional cultures.These ideas have been elaborated through games, patterns, art, architecture, systems of time and money, logic, kinship relations, and practices and artefacts used in everyday and traditional life.In each culture or community, certain groups or individuals share a mathematical disposition and are in a sense custodians of mathematical ideas that evolve over time (Ascher, 1991), making comparisons across cultures difficult.It is however by linking mathematical developments to broader social, cultural, historical and political changes that descriptions of the mathematics of different groups (Joseph, 1991) and the 'mathematical truths' of different peoples may be valued and can be understood in an authentic and unprejudiced way.
A large and growing body of research has also shown that mathematical knowledge is generated by different groups of adults and children in a wide variety of contexts outside formal schooling.Studies involving dairy workers, carpenters, bookies, builders, fisherman, farmers, street vendors, shoppers, market sellers, dressmakers and many others have all been shown to develop efficient strategies for solving mathematical problems in their everyday life and work situations.Informal mathematical concepts and skills have been observed in children across nationalities, social classes and cultures.However, the mathematical understanding that children acquire has been explained to be rooted in their social and cultural experiences and may not resemble those expected or required in mathematics classrooms.This 'distance' between school culture and different groups in society has been analysed not only with respect to learners from traditional or indigenous cultures, but also with respect to other marginalised groups within Western society, such as women and the working class.Differences and similarities between school mathematics and out-of-school mathematics have been documented.For example, school mathematics is predominantly written, whilst oral forms have been found to characterise out-of-school mathematics (Nunes, Schliemann & Carraher, 1993).
The third kind of truth set out in the TRC framework, personal or narrative truth, is one in which each person is 'given a chance to say his or her truth as he or she sees it' (p.112).It is a truth based on the lived experiences of the individual who is reporting, a form of truth that reflects the 'constructed nature of meaning-making' (Dhunpath & Samuel, 2009, p. x).This form of truth was recognised as recovering national memory that had been officially ignored, a 'validation of the individual subjective experience of people who had previously been silenced or voiceless' (TRC, 2008, p. 112).
In mathematics education, it has long been recognised that each person develops mathematics ideas, knowledge and skills by virtue of their individual thinking processes and schemas.More recently, it has been recognised that this knowledge is also acquired by an individual by virtue of the community in which that person lives, works and functions.Whilst the former has derived largely from psychological perspectives, the latter has arisen from more sociological ones.
A substantial and well-established body of research in the area of (socio) constructivism demonstrates how individuals make sense and meaning of new mathematical ideas in terms of the frameworks each person has and how each develops strategies for dealing with mathematics that they are confronted by.Many of the studies of groups identified above have come out of in-depth research into how individual learners (adults and children), in varying contexts, reason and think mathematically and do mathematics.A key challenge for teachers of mathematics is to be able to discern the kinds of mathematical ideas, knowledge, skills and even attitudes each of their learners brings to school to provide access into school mathematics.Each learner develops their own 'mathematical truths', by virtue of their personal life trajectory, conditions and opportunities for learning, which have to cope with differences and conflicts between mathematical practices in school and in out-ofschool contexts, for example, when they migrate from rural to urban areas.
The official South African mathematics curricula give due recognition to the unique experiences and knowledge of the individual but the question that remains is accessing the 'mathematical truths' of learners.The challenge is to avoid stereotyping students, for instance choosing a mathematics problem involving traditional Zulu home building if there are African learners in a diverse classroom, even though none of the learners may have lived in or experienced a rural context.These personal 'mathematical truths' carried by each learner are also rendered invisible and not accessed or accommodated in national and large-scale mathematical assessments.
A fourth kind of truth in the TRC is healing and restorative truth.It is 'the kind of truth that places facts and what they mean in the context of human relationships -both amongst citizens and between the state and its citizens' (p.114).In the TRC it was not enough to establish what was the truth, as objective and factual, but it was equally important to see it as connected to how it was acquired and the purpose it was to serve.The role of 'acknowledgement' was highlighted as a form of affirmation of dignity by placing information on record and publically recognising it.
If what counts as mathematical knowledge and truth in mathematics is broadened then it should be possible to admit and accept that there are different mathematics within and across societies.However if the power of mathematics as an abstract knowledge is to be maintained and owned by all, then the relations between academic, Western or conventional mathematics and the different mathematical knowledges and practices of different groups and individuals have to be brought into dialogue with each other, to be connected and contextualised.By valuing different kinds of mathematics and ways of knowing (and doing) mathematics, different peoples are valued and respected.Notwithstanding that the playing field of the different mathematics are not level, for mathematics to have a restorative power in situations of conflict, there has to be at the very least, recognition that there are different ways of knowing the world mathematically, which may be relevant, useful and appropriate in different contexts.The enormous power of academic mathematics to cast its gaze on almost any human activity today and re-present or appropriate it through its discourse gives healing and restorative 'mathematical truths' a particularly important place in mathematics classrooms.The legacy of colonialism and apartheid, which damaged the growth of indigenous knowledge systems, must be addressed both for its own sake to reclaim lost and hidden 'mathematical truths' and also because it provides possibilities for new knowledge, even if defined in terms of academic or Western knowledge systems.
The role of 'acknowledgement' in restoring dignity lies in the recognition that different cultures on every continent, in different periods of its history, have contributed mathematical knowledge.Acknowledging multiple histories is part of healing.The hegemony of Western or academic mathematics has been challenged for the ways in which conventional histories of mathematics have ignored, marginalised, devalued or distorted the contributions of peoples and cultures outside Europe -of China, India, North Africa and the Arab world -to that mathematics that is referred to as academic or Western mathematics.Joseph (1991) points out that [s]cientific knowledge which originated in India, China and the Hellenic world was sought out by Arab scholars and then translated, refined synthesised and augmented at different centres of learning… from where this knowledge spread to Western Europe.(p.10) However, Eurocentric historiographies of mathematics have also been criticised from another perspective: for failing to acknowledge the independent histories of mathematics of peoples who have developed their own mathematics, particularly the indigenous peoples of different regions of Africa, America and Australia (Ascher, 1991).A healing and restorative mathematics would therefore be one that recognises the rich mathematical histories of peoples not only in terms of conventional mathematics but on its own terms and its own forms, which may or may not be easily distinguishable as mathematics, and would be dignified by being given a proper space and engagement in mathematical curricula.
Recognising multiple 'mathematical truths', as well as the processes by which these truths come to be constructed, allows for improved possibilities for the critique of truths in mathematics to be found within mathematics.In particular, these varied forms of 'mathematical truths' have the potential to make visible and more explicit the formatting power of mathematics, which acknowledges each 'formatter', from the constructors or producers of mathematics to the consumers and those marginalised, because each kind of personal, social or academic 'mathematical truth' is part of a network of truths in mathematics and each is seen to have value.
Within this framework of 'truths', 'mathematical truth' as factual, objective, invariant and decontexualised may be deemed but one kind of truth within a framework of 'truths' that need to get expression in a mathematics classroom.It alludes to how conflicts and dialogues that take place in such classrooms need to be handled if mathematics education is not only about increasing knowledge and awareness of inequities and injustices, but also a means for forgiveness and healing.Often mathematics is presented as a one and only truth, the most objective or neutral and this one truth is to be most valued whilst social or personal mathematical knowledge, skills and practices are subordinated or silenced.It is a healing and restorative 'mathematic truth' that gives meaning to a pedagogy of forgiveness.If these 'mathematical truths' are seen to be in a relation of complementarity with each other, then it is possible to acknowledge: firstly, that all kinds of truth in mathematics coexist, even though not all forms of mathematical truths find expression at any one moment in a classroom; secondly, that they need not be in harmony with each other because they exist in relations of cooperation and opposition; and thirdly, that this may be necessary for the growth and development of each.
Conclusion
The triad of mathematics education, democracy and development was explored with reference to the mathematisation of society through the first question.This societal focus drew attention to the formatting power of mathematics and the developmental challenges faced in a country like South Africa, pointing to the potential powerful role of mathematics education in addressing these for both Nthabiseng and Pieter.The second question threw a spotlight on the mathematics education system.The distribution of mathematics education and its associated educational possibilities was brought into sharp relief through a discussion on international studies and the national Grade 12 mathematics assessments and performance, and demonstrated how mathematics education becomes complicit in the inequities that are reproduced in society through mathematics curricula reforms and teacher education provisions.The third question moved the discussion into the school and classroom.It exemplified how mathematics classrooms can be places where democracy is learnt and development issues are engaged through a mathematics education pedagogy of conflict and dialogue that embodies forgiveness.The TRC truth framework metaphor drawn on in the fourth and final question to elaborate mathematical content matter, shows how the very mathematisation of society can recognise different forms of 'mathematical truths' that can coexist and come to constitute a mathematical knowledge and a mathematics education that can be healing and restorative of the dignity of people.Just as human beings are connected in complex relations of cooperation and contradiction, so too are our knowledge forms, including mathematics. | 16,317 | 2012-12-18T00:00:00.000 | [
"Education",
"Mathematics",
"Political Science"
] |
Comparison of the Vitreous Fluid Bacterial Microbiomes between Individuals with Post Fever Retinitis and Healthy Controls
Ocular microbiome research has gained momentum in the recent past and has provided new insights into health and disease conditions. However, studies on sight threatening intraocular inflammatory diseases have remained untouched. In the present study, we attempted to identify the bacterial microbiome associated with post fever retinitis using a metagenomic sequencing approach. For this purpose, bacterial ocular microbiomes were generated from vitreous samples collected from control individuals (VC, n = 19) and individuals with post fever retinitis (PFR, n = 9), and analysed. The results revealed 18 discriminative genera in the microbiomes of the two cohorts out of which 16 genera were enriched in VC and the remaining two in PFR group. These discriminative genera were inferred to have antimicrobial, anti-inflammatory, and probiotic function. Only two pathogenic bacteria were differentially abundant in 20% of the PFR samples. PCoA and heatmap analysis showed that the vitreous microbiomes of VC and PFR formed two distinct clusters indicating dysbiosis in the vitreous bacterial microbiomes. Functional assignments and network analysis also revealed that the vitreous bacterial microbiomes in the control group exhibited more evenness in the bacterial diversity and several bacteria had antimicrobial function compared to the PFR group.
Introduction
Several ocular manifestations like conjunctival congestion, uveitis, episcleritis, neuroretinitis, dacryoadenitis, and retinitis [1] have been reported to manifest following acute systemic febrile illness. Theses manifestations are not dependent on the etiology of the systemic fever and are not related to whether the fever was due to either bacteria, viruses, or protozoa. Postfever retinitis (PFR) is also one such retinal inflammatory disorder that usually manifests between two to four weeks post systemic fever irrespective of the etiology [1]. In PFR, posterior regions of the eye are affected and clinically patients present with focal and multifocal patches of retinitis which could be unilateral or bilateral, possible optic nerve involvement, serous detachment at the macula, macular edema and localized involvement of the retinal vessel in the form of beading of the vessel wall, tortuosity, and perivascular sheathing [2]. A small proportion of PFR cases are also infectious in etiology and are caused by bacteria ((Mycobacterium tuberculosis (Tuberculosis), Salmonella typhi (Typhoid), Leptospira spp. (Leptospirosis), Rickettsia spp. (Rickettsial retinitis)), protozoa (Toxoplasma gondii (Toxoplasmosis)), and viruses [3][4][5][6][7][8]. However, a significant number of PFR cases are due to an unknown etiology and have been attributed to the compromised immune status in the affected individuals. A characteristic feature of PFR is that it usually manifests between two and four weeks after the fever in immunocompetent patients and patients present with sudden and painless onset of diminution of vision [1,2]. Despite all these clinical manifestations, it has always been a challenge to identify causative agents if there are any associated with PFR.
Routine culture and PCR based methods have failed to identify microorganisms associated with the vitreous of PFR individuals [9]. This failure to detect microbes in the vitreous of postfever retinitis does not necessarily prove the absence of microorganisms. It could also imply that the microorganisms are present but are not amenable to the routine PCR or culture methods. Thus one may have to employ a more refined and sensitive method for detection of the microorganisms like using the advanced technique of Next Generation Sequencing (NGS). In this culture independent approach, the metagenome from the sample comprising the genomes of bacteria, fungi, and viruses would be sequenced and subjected to phylogenetic analysis to identify the diversity of microbes [10]. NGS has been used earlier to successfully identify microbes associated with diseases like endocarditis in which conventional methods failed to identify the causative agent (Streptococcus gordonii, S. oralis, S. sanguinis, S. anginosus, Coxiella burnetii, and Bartonella quintana) [11].
In the present study NGS was employed to understand the microbial composition in the vitreous of healthy controls and individuals with postfever retinitis. This approach would be helpful in identifying specific microorganisms present, if any, in the vitreous of patients and would also be useful for understanding the changes in the microbiome due to the prevailing inflammatory conditions. The present study would mainly focus on the alterations in bacterial microbiomes in the vitreous of individuals with postfever retinitis (PFR) compared to the vitreous of the control group (VC).
Sample Collection
Vitreous fluid was collected by vitreous biopsy/pars plana vitrectomy from PFR individuals (n = 9) (Table S1) following standard indications for the planned management of the patients by the ophthalmologist. Vitreous samples collected from individuals undergoing macular hole surgery or rhegmatogenous detachment (n = 19) served as controls (VC) since these individuals had no PFR and no other ocular or systemic indication (Table S1). Sample size was derived by the population proportion method. The parameters set for deriving the size included 90% confidence interval and a 5% margin of error. Participants who had inflammatory disorders of the eye, uncontrolled glaucoma, hypertension, and diabetes were excluded from the study. Approximately 300 µl of vitreous fluid was aspirated from each eye and stored at −80 • C until use. Institutional review board (LV Prasad Eye Institute, Hyderabad) approved the study (LEC 09-17-079). The study design adheres to the tenets of the Declaration of Helsinki.
DNA Extraction and Sequencing of the Samples
Vitreous fluid (200 µL) was used to extract DNA using PureLink DNA extraction kit (ThermoFisher Scientific, Mumbai, India). Quality of the DNA was judged by gel electrophoresis on a 1.0% agarose gel and the concentration was quantified using a Qubit 3.0 fluorometer (Thermo Fisher Scientific, Carlsbad, CA, USA). The extracted DNA was used for subsequent metagenome analysis. For this purpose, the total nucleic acids were amplified with random hexamers using amplification kit (SeqPlex, Sigma Aldrich Chemicals Private Limited, Bengaluru, India). The DNA was then processed for library preparation and sequenced according to the NEBNext Ultra DNA Library Prep Kit for Illumina Nextseq 500 PE sequencing protocol. Sequencing was performed on the Illumina Nextseq 500 platform using paired-end sequencing with 2 × 150 bp chemistry. In addition, at every stage of sample preparation, during DNA extraction, PCR reactions and whole genome amplification, sterile water was used as negative control to ascertain the possibility of bacterial contamination from the environment. However, PCR was consistently negative for amplification of DNA from water and the amplification mix without DNA which served as the negative control compared to samples. No sequences could be generated from the negative controls.
Whole Metagenome Analysis
The raw sequence reads from all the samples were analysed for identification of microorganisms. FASTQ files of the raw reads were analysed for quality parameters like read length, Phred score, GC content, and ambiguous bases. Subsequently, adapters were trimmed using trim-galore program (version 0.4.0; Felix Krueger and Cutadapt version 1.2). Post-trimming, reads were subjected to QC using FastQC (version 0.11.3). More than 90% of the data passed Q25 quality score, which was further used for downstream analysis. Sequences were then aligned with human reference genome (GRCh38) to remove human genome sequences and then the unaligned reads were recovered. Background sequences due to run processing were also filtered from the recovered unaligned reads. The unaligned reads were then assembled (denovo) using metaSPAdes genome assembler (Version 3.12.0) into contigs which were subsequently analysed using RAPSearch. RAPSearch is a tool for protein similarity search for short reads against nonredundant protein database from National Center for Biotechnology Information (NCBI). Meta Genome Analyser (MEGAN v 5.11.3) was used to identify the diversity for taxonomy assignment and KEGG pathway analysis.
Statistical Analysis
The vegan package in R (http://vegan.r-forge.r-project.org/) was used to rarefy the 28 bacterial microbiomes and to quantify alpha (Shannon diversity, Simpson index, and Observed number of genera) and beta diversity indices. Genera showing a mean abundance >0.002% were used for the analysis. Wilcoxon signed rank test (with p < 0.05 as significant) was carried out to identify significantly different taxa in both VC and PFR group bacterial microbiomes at phylum and genus (discriminative genera) level. In addition, linear discriminant analysis effect size (LEfSe) was applied to identify discriminant bacterial genera. For this, the genera of VC and PFR groups were subjected to a nonparametric factorial Kruskal-Wallis (KW) sum-rank test p <0.05, LDA > 2.0 (http://huttenhower.sph.harvard.edu/galaxy/). Further, the statistical significance of the differences of the discriminative genera for VC and PFR group was tested by one-way ANOVA followed by t-test (p < 0.05). In the vitreous differentially abundant KEGG, functional pathways between VC and PFR were also ascertained using Wilcoxon signed rank test (with p < 0.05 as significant). Principal Coordinate analysis (PCoA) plot was generated for the 28 bacterial microbiomes using ade4 package in R (v 3.2.5). Jensen-Shannon divergence was used as a distance metric (http://enterotyping.embl.de/enterotypes.html). VC and PFR clusters on the PCoA plot were identified by subjecting the data to K-means clustering (k = 2) [12].
Correlation Network Analysis of Bacterial Genera
CoNet [13], a Cytoscape [14] plugin, was used to detect interactive networks of the bacterial microbiome of VC and PFR groups independently. The differentially abundant genera of VC and PFR groups were used to analyse the bacteria-bacteria interactions at the genus level based on pair-wise correlations between abundances using Spearman correlation coefficient (r). To build the network, the genera were filtered with frequencies less than 0.05.
Bacterial Microbiomes in the Vitreous of Control and Postfever Retinitis Groups
Whole metagenomes were generated from the vitreous fluid of controls (VC, n = 19) and Postfever retinitis (PFR, n = 9) individuals (Table S1). A total of 126.64 million reads (Phred score > 25) were generated for all the 28 vitreous samples with an average of 4.52 million reads per sample ( Table 1). Out of these reads, a total of 1.14 million reads were assigned to bacteria and the average number of reads assigned to a bacterial microbiome was 40,753 (Table 1). Rarefaction curves were plotted for all the bacterial microbiomes and most of the samples showed tendency towards saturation, indicating that sufficient depth and coverage was achieved ( Figure S1). Alpha diversity indices (Simpson, Shannon, and Observed number of genera) showed differences between the two groups ( Figure 1). Taxonomic assignment and hierarchical classification of the reads revealed that a total of 18 bacterial phyla were detected in both VC and PFR groups ( Figure 2A). The number of phyla common to both VC and PFR groups were 12 and the number of phyla specifically detected in VC and PFR groups was 14 and 16 respectively. Unclassified reads accounted for a mean abundance of 0.29% and 0.38% in VC and PFR groups respectively (Table S2). Phyla Firmicutes and Proteobacteria were predominantly present in all the 28 samples (Table S2). The percentage mean abundance of phylum Firmicutes was 44.52 in VC and 36.3 in PFR samples. The percentage mean abundance of phylum Proteobacteria was 46.57 in VC and 50.33 in PFR samples (Table S2). The visible differences in the mean abundances of phyla Firmicutes and Proteobacteria were not significant in PFR compared to VC group (p > 0.05) (Figure 2A,B). Further, the number of reads of Bacteroidetes and Proteobacteria were slightly high in PFR samples compared to VC samples (Figure 2A,B). Whole metagenomes were generated from the vitreous fluid of controls (VC, n = 19) and Postfever retinitis (PFR, n = 9) individuals (Table S1). A total of 126.64 million reads (Phred score > 25) were generated for all the 28 vitreous samples with an average of 4.52 million reads per sample ( Table 1). Out of these reads, a total of 1.14 million reads were assigned to bacteria and the average number of reads assigned to a bacterial microbiome was 40,753 (Table 1). Rarefaction curves were plotted for all the bacterial microbiomes and most of the samples showed tendency towards saturation, indicating that sufficient depth and coverage was achieved ( Figure S1). Alpha diversity indices (Simpson, Shannon, and Observed number of genera) showed differences between the two groups ( Figure 1). Taxonomic assignment and hierarchical classification of the reads revealed that a total of 18 bacterial phyla were detected in both VC and PFR groups ( Figure 2A). The number of phyla common to both VC and PFR groups were 12 and the number of phyla specifically detected in VC and PFR groups was 14 and 16 respectively. Unclassified reads accounted for a mean abundance of 0.29% and 0.38% in VC and PFR groups respectively (Table S2). Phyla Firmicutes and Proteobacteria were predominantly present in all the 28 samples (Table S2). The percentage mean abundance of phylum Firmicutes was 44.52 in VC and 36.3 in PFR samples. The percentage mean abundance of phylum Proteobacteria was 46.57 in VC and 50.33 in PFR samples (Table S2). The visible differences in the mean abundances of phyla Firmicutes and Proteobacteria were not significant in PFR compared to VC group (p > 0.05) (Figure 2A,B). Further, the number of reads of Bacteroidetes and Proteobacteria were slightly high in PFR samples compared to VC samples (Figure 2A
Differentially Abundant Bacterial Genera in the Vitreous Fluid of Controls and Postfever Retinitis Individuals
The above reads could be assigned to 301 genera across all the microbiomes ( Figure 2C and Table S3A,B) with 291 and 281 genera in the VC and PFR microbiomes respectively. Nearly 50% of the reads were assigned to five genera (Clostridium, Enterobacter, Acinetobacter, Klebsiella, and Lachnoclostridium). It was also observed that out of the 301 genera, 18 genera were differentially abundant in PFR individuals compared to VC (p < 0.05) ( Table 2). In addition, linear discriminant analysis (LDA) combined with effect size measurements (LEfSe) analysis also showed 18 genera as significantly different between VC and PFR groups. Sixteen genera were relatively abundant in VC group and two genera were relatively more abundant in PFR group (Figure 3). A heatmap of the 18 discriminative bacterial genera separated VC and PFR vitreous fluids into two clusters. The majority of the VC (18 of 19) and PFR (7 of 9) microbiomes formed a separate cluster ( Figure 4A). Principal co-ordinate analysis of the discriminative genera separated all the VC and PFR microbiomes into two distinct clusters ( Figure 4B). It is worthwhile to mention that the PFR group also included three
Differentially Abundant Bacterial Genera in the Vitreous Fluid of Controls and Postfever Retinitis Individuals
The above reads could be assigned to 301 genera across all the microbiomes ( Figure 2C and Table S3A,B) with 291 and 281 genera in the VC and PFR microbiomes respectively. Nearly 50% of the reads were assigned to five genera (Clostridium, Enterobacter, Acinetobacter, Klebsiella, and Lachnoclostridium). It was also observed that out of the 301 genera, 18 genera were differentially abundant in PFR individuals compared to VC (p < 0.05) ( Table 2). In addition, linear discriminant analysis (LDA) combined with effect size measurements (LEfSe) analysis also showed 18 genera as significantly different between VC and PFR groups. Sixteen genera were relatively abundant in VC group and two genera were relatively more abundant in PFR group (Figure 3). A heatmap of the 18 discriminative bacterial genera separated VC and PFR vitreous fluids into two clusters. The majority of the VC (18 of 19) and PFR (7 of 9) microbiomes formed a separate cluster ( Figure 4A). Principal co-ordinate analysis of the discriminative genera separated all the VC and PFR microbiomes into two distinct clusters ( Figure 4B). It is worthwhile to mention that the PFR group also included three individuals viz., PFR07, PFR08, and PFR09 who were identified as having Retinitis and not PFR but nevertheless in the principal co-ordinate analysis they formed a single group, implying that the microbiomes of PFR and Retinitis are similar.
A correlation network was constructed using Cytoscape for the discriminative genera of VC and PFR individuals which would reveal the interconnections among the genera of the two groups. The correlation network revealed that 14 genera were common to both VC and PFR groups and included Anaerotruncus, Acetonema, Bacillus, Bdellovibrio, Geobacillus, Janthinobacterium, Mesorhizobium, Paenibacillus, Pelosinus, Sediminibacterium, Shigella, Sporomusa, and Thermosinus. The network analysis also showed that two genera, namely Arthrobacter and Shimwellia, were present only in the VC group and absent in the PFR group. Similarly, two bacterial genera viz., Pimelobacter and Tannerella were exclusively present in PFR group but absent in VC group ( Figure 4C). Box-plot analysis, a nonparametric test, was employed to interpret the differential abundance of the 18 discriminative genera in both VC and PFR groups ( Figure 5). The analysis revealed that the mean abundance of 16 discriminative genera that includes 14 genera that were common to both the groups and two genera that were unique to VC were more abundant in control group compared to PFR group. The correlation network revealed that 14 genera were common to both VC and PFR groups and included Anaerotruncus, Acetonema, Bacillus, Bdellovibrio, Geobacillus, Janthinobacterium, Mesorhizobium, Paenibacillus, Pelosinus, Sediminibacterium, Shigella, Sporomusa, and Thermosinus. The network analysis also showed that two genera, namely Arthrobacter and Shimwellia, were present only in the VC group and absent in the PFR group. Similarly, two bacterial genera viz., Pimelobacter and Tannerella were exclusively present in PFR group but absent in VC group ( Figure 4C). Box-plot analysis, a nonparametric test, was employed to interpret the differential abundance of the 18 discriminative genera in both VC and PFR groups ( Figure 5). The analysis revealed that the mean abundance of 16 discriminative genera that includes 14 genera that were common to both the groups and two genera that were unique to VC were more abundant in control group compared to PFR group.
Interactive Network of Bacterial Genera in Control and PFR Group
Co-occurrence network analysis (CoNet) was done using the discriminative genera between VC and PFR groups separately ( Figure 6A,B). In both VC and PFR networks, interactions were generated for 16 of the 18 discriminative genera. In VC group, Genus Propionibacterium which was present in 73.7% of the samples showed negative interactions with 13 other genera and positive interactions with only two genera. In contrast, the genus Sediminibacterium showed positive interactions with all the remaining 15 genera analysed (Table S4A). In contrast, in PFR group genus Paenibacillus showed maximum negative interactions (6 of the 18 genera). Five other genera viz., Acetonema, Anaerotruncus, Sediminibacterium, Sporomusa, and Thermosinus showed positive interaction with all the other genera (Table S4B). Remaining genera showed one or two negative interactions with other genera.
Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathways Analysis
KEGG pathway analysis of VC and PFR groups determined enrichment of functions related to infectious diseases, immune system, and signal transduction. Willcoxon signed rank test of the enriched KEGG pathways identified statistically significant pathways in PFR compared to VC group which included viz., ErbB signaling pathway, MAPK signalling pathway-fly, VEGF signaling pathway, PPAR signaling pathway, flavone and flavonol biosynthesis pathways, steroid degradation pathway, and intestinal immune network for IgA production (Table S5 and Figure 7). It was also observed that the amino acid metabolic pathways such as Lysine degradation, Tryptophan metabolism, beta-Alanine metabolism, Glutathione metabolism, Taurine and hypotaurine metabolism, Pentose phosphate pathway, and C5-Branched dibasic acid metabolism were enriched in VC group compared to PFR group (Figure 7). interactions with only two genera. In contrast, the genus Sediminibacterium showed positive interactions with all the remaining 15 genera analysed (Table S4A). In contrast, in PFR group genus Paenibacillus showed maximum negative interactions (6 of the 18 genera). Five other genera viz., Acetonema, Anaerotruncus, Sediminibacterium, Sporomusa, and Thermosinus showed positive interaction with all the other genera (Table S4B). Remaining genera showed one or two negative interactions with other genera.
VC PFR
A B
Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathways Analysis
KEGG pathway analysis of VC and PFR groups determined enrichment of functions related to infectious diseases, immune system, and signal transduction. Willcoxon signed rank test of the enriched KEGG pathways identified statistically significant pathways in PFR compared to VC group which included viz., ErbB signaling pathway, MAPK signalling pathway-fly, VEGF signaling pathway, PPAR signaling pathway, flavone and flavonol biosynthesis pathways, steroid degradation pathway, and intestinal immune network for IgA production (Table S5 and Figure 7). It was also observed that the amino acid metabolic pathways such as Lysine degradation, Tryptophan metabolism, beta-Alanine metabolism, Glutathione metabolism, Taurine and hypotaurine metabolism, Pentose phosphate pathway, and C5-Branched dibasic acid metabolism were enriched in VC group compared to PFR group (Figure 7).
Discussion
Ocular microbiome studies that uncover microbial communities of the eye have revealed that the eye hosts a unique assemblage of microbes [15,16]. For instance, the ocular surface microbiome has a preponderance of commensal microbes which are different from the skin and oral microbiome [17,18]. Studies on intraocular fluids like the aqueous and vitreous are scarce and these fluids have been considered to be sterile but may host pathogen invaders during disease conditions [19]. Diagnosis of these intraocular pathogen invaders is a challenge and it is further complicated by the inability to culture the microbes using conventional methods and PCR based methods are often unsuccessful. However, a possible approach to overcome this challenge would be to resort to metagenome deep sequencing of the ocular fluids. This approach has indeed demonstrated that vitreous fluids from Endophthalmitis patients [19,20] hosts a variety of microbes which were
Discussion
Ocular microbiome studies that uncover microbial communities of the eye have revealed that the eye hosts a unique assemblage of microbes [15,16]. For instance, the ocular surface microbiome has a preponderance of commensal microbes which are different from the skin and oral microbiome [17,18]. Studies on intraocular fluids like the aqueous and vitreous are scarce and these fluids have been considered to be sterile but may host pathogen invaders during disease conditions [19]. Diagnosis of these intraocular pathogen invaders is a challenge and it is further complicated by the inability to culture the microbes using conventional methods and PCR based methods are often unsuccessful. However, a possible approach to overcome this challenge would be to resort to metagenome deep sequencing of the ocular fluids. This approach has indeed demonstrated that vitreous fluids from Endophthalmitis patients [19,20] hosts a variety of microbes which were otherwise undetectable using routine diagnostic approaches. However, Deshmukh et al., [20] using V3 and V4 targeted metagenome sequencing approach could not generate bacterial microbiome in control vitreous fluids, while Kirstahler et al. [17] using whole metagenomic sequencing approach could detect bacterial microbiomes in control vitreous fluids. Such studies on the identification of intraocular microbes could form the diagnostic basis for intraocular inflammatory diseases like uveitis and Age-related Macular Degeneration (AMD) [21]. In the present study, based on metagenomic sequencing of the vitreous fluid of VC and PFR individuals we demonstrate alterations in the bacterial microbiome in the PFR individuals. These findings may have implications in disease outcome (Figure 2A-C).
Studies on gut microbiome associated with systemic human Ulcerative Colitis, systemic sclerosis [22,23], and ocular diseases such as Keratitis [12,24], Uveitis [25][26][27][28], AMD [29,30], dry eye disease in SjÖgren syndrome patients [31] consistently indicated that the overall diversity and abundance of bacterial microbiomes was decreased in the disease condition compared to control group. In contrast, the ocular microbiomes showed an increase in the diversity in the diseased condition [32]. In agreement with this, the Alpha diversity indices (Simpson and Shannon) (Figure 1) did exhibit an increased trend in PFR group compared to VC group ( Figure 1B). The Simpson index and observed number of genera indicated that the bacterial communities in VC compared to PFR group were distributed more evenly and thus could be more functionally stable compared to PFR group. This also means that the VC group shares more common taxa among the samples compared to samples within PFR group.
Gut microbiome studies in patients with ocular disease have shown that Firmicutes and Bacteroidetes were the predominant phyla irrespective of the disease condition [12,24,27,29]. In contrast, the ocular surface bacterial microbiomes showed that Firmicutes and Actinobacteria were present as the predominant phyla [33]. In comparison, the intraocular microbiomes in the present study showed phyla Firmicutes and Proteobacteria as the major phyla in both control and PFR groups (Figure 2A), thus confirming the recent findings of Deshmukh et al. [20]. Even at the genera level 10 genera viz., Actinobacter, Bacillus, Enterobacter, Escherichia, Klebsiella, Nieisseria, Paenibacillus, Pseudomonas, Staphylococcus, and Streptococcus that were shared between the control and PFR samples were also identified earlier in the endophthalmitis samples [20]. We also demonstrate that the VC and PFR groups share 272 genera ( Figure S2) suggesting that 90% of the genera are shared by both the groups and only 10% of the genera are unique.
Previous cultivation-based assessments as well as metagenomic sequencing based analysis indicated that vitreous body of the eye is sterile and contains only a few microbial cells in individuals without eye infection [19]. Therefore, the mere identification of the bacterial genera in these samples does not imply association with ocular disease and it is important to perform appropriate statistical analysis to identify the specific organisms associated with PFR of the eye. Our results indicated that the bacterial microbiomes of the vitreous fluids of control and PFR groups are significantly different (p < 0.05) ( Table 2). Further that the bacterial microbiomes of VC and PFR groups are distinct was also confirmed by correlation network and interaction network analysis of the 18 discriminative genera ( Figure 4C). Functional analysis indicated that 11 of the 18 discriminative genera could be assigned to a physiological function (Table 3). When the abundance of these 11 genera was compared it was observed that in PFR, three probiotic genera (Anaerotruncus, Arthrobacter and Shimwellia), four antimicrobial genera (Bacillus, Bdellovibrio, Janthinobacterium, and Paenibacillus), two proinflammatory genera (Propionibacterium and Shigella) and one anti-inflammatory genus (Sporomusa) were decreased in PFR compared to VC. One proinflammatory genera (Tannerella) was also increased in PFR. Decrease in genera with antimicrobial and anti-inflammatory properties and increase in proinflammatory would support PFR. What could not be explained is the increase in anti-inflammatory genera. Earlier studies had indicated that the genus Tannerella is associated with pathogenicity [34,35]. The high abundance of antimicrobial and/or anti-inflammatory organisms in intraocular control samples compared to PFR group could imply that positive interactions of these genera would help in maintaining intraocular homeostasis in the vitreous fluid of controls. Co-occurrence network analysis depicts polymicrobial interactions (both positive and negative interactions) which could help in understanding the role of microbiomes in health and disease [36]. Microbes that exhibit positive interactions are likely to be more beneficial than those that exhibit negative interaction. In the VC group all the genera except Propionibacterium, an opportunistic pathogen with proinflammatory activity [37,38], exhibited more than 13 negative interactions out of 15 possible interactions with other genera. An earlier study had also indicated that Propionibacterium was associated with intraocular inflammatory diseases like Uveitis and Endophthalmitis [39]. Therefore, increase in the abundance of Propionibacterium in VC group was not anticipated. However, it is worthwhile considering the possibility that instead of Propionibacterium negatively influencing the 13 genera it is equally possible to consider that all these 13 genera in VC were together suppressing the negative effects of Propionibacterium. This indeed may be the case since these genera also possibly interacted with each other. Thus, based on the network analysis it appears that the bacterial microbiome in control group with multifaceted interactions could substantially minimise the activity of the proinflammatory bacteria and thus would maintain the bacterial microbiome homeostasis. While in the PFR group Propionibacterium had less negative interactions with other genera compared to control group. This may suggest that the bacterial diversity in PFR group has less protective function against proinflammatory bacteria compared to control group. KEGG pathway analysis showed enrichment of 14 pathways with functions related to metabolic pathways and signal transduction in VC and PFR group. Amino acids serve as the metabolic mediators for the cross talk between host and the microbiome and thus the amino acid metabolic pathways would differ at the site of infection compared to the healthy state [48]. In PFR group the decrease in abundance of amino acid pathways such as those encoding for the metabolism of lysine involved in the survival of bacteria [49], Taurine, hypotaurine [50], glutathione [51], and tryptophan [52] involved during infection, showed low abundance in PFR group compared to control group implying the measure taken by the host cells to avoid infection. Four signalling pathways were also enriched in PFR group compared to control group (Figure 7). This enrichment was expected because these pathways are involved in infection like Vascular endothelial growth factor (VEGF) pathway which facilitates interaction between pathogens and host cells [53], the mitogen-activated protein kinase (MAPK) which is important in modulating host and pathogen interactions [54], epidermal growth factor receptor (EGFR) which modulates cell survival, proliferation, differentiation, and migration/invasion [55], and Peroxisome proliferator-activated receptor (PPAR) which gets activated during bacterial infection [53]. The enrichment of all these pathways in PFR may also be associated with the secretions of inflammatory cytokines and host cell apoptosis in response to retinitis in PFR group.
The major limitation of the study was recruiting healthy individuals as controls for the study. Therefore, vitreous fluid collected from individuals undergoing ophthalmic surgery for macular hole or rhegmatogenous detachment were used in the study. The other limiting factor is that PFR is a very rare ocular disease and therefore recruitment of patients is very time consuming and requires ethical compliance for vitreous biopsy.
Conclusions
The study confirms the presence of bacteria in the vitreous fluid.
(i) Demonstrates that the bacterial microbiomes of vitreous fluid from VC and PFR individuals could be discriminated at the genera level. (ii) The vitreous fluid of VC individuals showed increase in abundance of anti-inflammatory and antimicrobial genera and decrease in proinflammatory genera. (iii) The vitreous fluid of PFR individuals showed decrease in abundance of anti-inflammatory and increase in proinflammatory genera. (iv) KEGG pathway analysis identified significant increase in signalling pathways in PFR group that are associated with inflammatory cytokines. | 6,941.2 | 2020-05-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
α -MnO 2 Nanowires as Potential Scaffolds for a High-Performance Formaldehyde Gas Sensor Device
: Herein, we report a chemi-resistive sensing method for the detection of formaldehyde (HCHO) gas. For this, α -MnO 2 nanowires were synthesized hydrothermally and examined for ascertaining their chemical composition, crystal phase, morphology, purity, and vibrational properties. The XRD pattern confirmed the high crystallinity and purity of the α -MnO 2 nanowires. FESEM images confirmed a random orientation and smooth-surfaced wire-shaped morphologies for as-synthesized α -MnO 2 nanowires. Further, the synthesized nanowires with rounded tips had a uniform diameter throughout the length of the nanowires. The average diameter of the α -MnO 2 nanowires was found to be 62.18 nm and the average length was ~2.0 µ m. Further, at an optimized temperature of 300 ◦ C, the fabricated HCHO sensor based on α -MnO 2 nanowires demonstrated gas response, response, and recovery times of 19.37, 18, and 30 s, respectively.
Introduction
Formaldehyde (HCHO) is classified as one of the dangerous gases and is supposed to generate indoor and outdoor pollution. Formaldehyde is widely used in the chemical and textile industries, including the manufacturing of adhesives and the processing of wood products, paper, synthetic polymers, and more. It is also used as a preservative in the form of formalin for the storage of biological specimens. Even though it is used in various biological and industrial applications, long-term exposure to formaldehyde can cause cancer, asthma, leukemia, and other diseases. It has been reported that formaldehyde can cause nasal and throat irritation in a very low concentration level of 0.08 ppm [1]. The International Agency for Research on Cancer (IARC) has identified it as a first class carcinogenic substance [2,3].
The past few decades have witnessed a great exploration of gas sensors based on metal oxide semiconductors. Such semiconducting metal-oxides-based gas sensors have generated extensive research interest due to their remarkable features, such as high gas response at low working temperature, high selectivity, ease of operation, portability, biocompatibility, and low fabrication costs [4]. Additionally, new physical and chemical properties appear when the semiconducting metal oxides are reduced to nanometer scales. Extensive studies have been carried out for the synthesis of nanostructured metal oxides with customizable structure, surface area, and surface defects [5,6]. The size, morphology, surface defect density, crystallinity, and bandgap energies of metal oxide nanostructures determine the movement of electrons and holes, electronic properties, and, hence, the gas sensing responses [7].
A detailed literature survey revealed that α-MnO 2 nanowires have been rarely reported as HCHO gas sensor materials. Keeping this in mind, we hydrothermally synthesized α-MnO 2 nanowires and used them to fabricate an HCHO gas sensor. Prior to sensor fabrication, α-MnO 2 was characterized through different techniques. Finally, a gas sensing mechanism was also proposed for the α-MnO 2 nanowires-based gas sensor toward HCHO.
Synthesis of α-MnO 2 Nanowires
Entire chemicals were procured from Sigma-Aldrich (St. Louis, MO, USA) and utilized as received without being further purified. In a typical facile hydrothermal process, potassium permanganate (KMnO 4 ) and concentrated hydrochloric acid (HCl) were mixed well in 1:4 molar ratios in 50 mL deionized (DI) water. The resulting solution was then vigorously stirred (30 min) before being moved to a Teflon-lined stainless-steel autoclave and heated to 150 • C for 15 h. The autoclave was cooled to room temperature after the reaction was completed in the desired time. Black-colored precipitate was centrifuged and washed with DI water and dried in air. The dried powder was calcined at 450 • C for 5 h and finally characterized in terms of morphological, structural, compositional, and gas-sensing properties.
Characterizations of the Synthesized α-MnO 2 Nanowires
As-synthesized α-MnO 2 nanowires were analyzed through X-ray diffraction for determining the polymorphic form and crystal size (XRD; PANanalytical XpertPro., Malvern, UK, Cu-Kα; λ = 1.542 Å). Field emission scanning electron microscopy (FESEM; JEOL-JSM-Coatings 2021, 11, 860 3 of 11 7600F, Hitachi, Tokyo, Japan) combined with energy dispersive spectroscopy (EDS) analysis was conducted to elaborate the morphology and compositional analysis. The electron mapping technique associated with FESEM was analyzed for evaluating the homogeneity of the constituent elements. Transmission electron microscopy and high-resolution TEM (HRTEM; JEOL JEM JSM 2010; Hitachi, Tokyo, Japan) techniques were used to investigate the structural features and the lattice interplanar angles of the synthesized nanostructures. Vibrational and scattering properties were examined using Fourier transform infrared (FTIR; Perkin Elmer-FTIR Spectrum-100, Waltham, MA, USA) spectroscopy and Raman spectroscopy (Perkin Elmer-Raman Station-400 series, Waltham, MA, USA), respectively.
Fabrication of Formaldehyde Gas Sensor Based on α-MnO 2 Nanowires
To fabricate the working electrode, a thin α-MnO 2 nanowire paste was prepared in ethylene glycol and pasted on alumina substrate (Active surface area = 1 cm 2 ) with Platinum printer digital patterns. It was finally annealed in air at 300 • C for 2 h. The complete sensor set up consisted of an electrometer, mass-flow controllers, a gas cylinder, and a data acquisition system with a PC interface. The gas responses at different operating conditions were calculated from the ratio of the resistances of the α-MnO 2 nanowires-based sensor in the presence of HCHO (R g ) and in air (R a ).
Characterizations and Properties of Synthesized α-MnO 2 Nanowires
The XRD pattern of the synthesized MnO 2 nanowires exhibited characteristic peaks corresponding to the α-MnO 2 crystal phase. The diffraction peaks at 12. [30,31]. These patterns are in excellent agreement with those reported in the literature [16,32] and JCPDS-44-0141 ( Figure 1). The high crystallinity and purity of the α-MnO 2 nanowires were also confirmed from the sharpness of the diffraction peaks and the absence of any diffraction peak corresponding to any impurity. In Figure 2a,b, FESEM images for the hydrothermally synthesized α-MnO2 nanowires are shown. These images indicate a very high-density growth of the nanowires. The formation of α-MnO2 nanowires of variable lengths and diameters can be confirmed from FESEM images. FESEM images further confirmed a random orientation and smoothsurfaces for the α-MnO2 nanowires. The average diameter of the α-MnO2 nanowires was In Figure 2a,b, FESEM images for the hydrothermally synthesized α-MnO 2 nanowires are shown. These images indicate a very high-density growth of the nanowires. The formation of α-MnO 2 nanowires of variable lengths and diameters can be confirmed from FESEM images. FESEM images further confirmed a random orientation and smoothsurfaces for the α-MnO 2 nanowires. The average diameter of the α-MnO 2 nanowires was found to be 62.18 nm, calculated using ImageJ software. A standard deviation of 15.99 was calculated. The corresponding statistical data are shown in Table S1. An average length of 2.0 µm was observed. In Figure 2a,b, FESEM images for the hydrothermally synthesized α-MnO2 nanowires are shown. These images indicate a very high-density growth of the nanowires. The formation of α-MnO2 nanowires of variable lengths and diameters can be confirmed from FESEM images. FESEM images further confirmed a random orientation and smoothsurfaces for the α-MnO2 nanowires. The average diameter of the α-MnO2 nanowires was found to be 62.18 nm, calculated using ImageJ software. A standard deviation of 15.99 was calculated. The corresponding statistical data are shown in Table S1. An average length of ~2.0 μm was observed. The qualitative element composition and the distribution of the hydrothermally synthesized α-MnO 2 nanowires were analyzed through EDS spectrum and elemental mapping images, respectively ( Figure S1). Selected area electronic FESEM image ( Figure S1a Figure S1c,d, which indicate the homogeneous distribution of the constituent elements evenly throughout the matrix of the MnO 2 nanowires. Figure 3a-c shows the characteristic panoramic TEM images for as-synthesized α-MnO 2 nanowires. As mentioned in FESEM images, the TEM images also confirm the formation of nanowire-shaped morphologies with smooth surfaces throughout the length of the nanowires. The diameters of the α-MnO 2 nanowires ranged from~60 to 65 nm with lengths up to~2.0 µm. The HRTEM image of part of an individual rod demonstrates that the nanowire has uniform lattice fringes as shown in Figure 3d. The lattice spacing of 0.69 nm was obtained, which corresponds to the (110) diffraction plane of α-MnO 2 (JCPDS No. 44-0141). Similar lattice spacing was observed by Wang et al. [33] for single-crystal α-MnO 2 nanowires synthesized hydrothermally from single KMnO 4 under acidic conditions.
The FTIR spectrum demonstrates a broad peak at 3430 cm −1 and a short peak at 1633 cm −1 (Figure 4a). These peaks correspond to the O-H stretching and bending vibrations, respectively, for physisorbed H 2 O [34]. The weak FTIR peaks appearing at 611 and 517 cm −1 are because of the stretching vibration of the metal-oxygen (Mn-O) bond, and they confirm the formation of MnO 2 [16,35]. Raman spectrum shows three prominent peaks at 308, 370, and 651 cm −1 (Figure 4b respectively, for physisorbed H2O [34]. The weak FTIR peaks appearing at 611 and 517 cm −1 are because of the stretching vibration of the metal-oxygen (Mn-O) bond, and they confirm the formation of MnO2 [16,35]. Raman spectrum shows three prominent peaks at 308, 370, and 651 cm −1 (Figure 4b). The strongest peak at 652 cm −1 is because of the Mn-O symmetric stretching vibration bond of the MnO6 octahedron [36]. The weaker Raman peaks at 308 and 370 cm −1 are assigned to the lattice vibrations on the Mn-O bond and bending vibrations of Mn-O-Mn in MnO2, respectively [37,38].
Formaldehyde Gas Sensing Properties of Synthesized α-MnO2 Nanowires
The response of the gas sensor depends on several temperature-related factors, including the oxygen adsorption, adsorption/desorption rate of gas molecules, and the carrier concentration [39]. Thus, the response of the α-MnO2 nanowire-based gas sensor was
Formaldehyde Gas Sensing Properties of Synthesized α-MnO 2 Nanowires
The response of the gas sensor depends on several temperature-related factors, including the oxygen adsorption, adsorption/desorption rate of gas molecules, and the carrier concentration [39]. Thus, the response of the α-MnO 2 nanowire-based gas sensor was analyzed for 200 ppm HCHO to find the suitable working temperature (Figure 5a). The gas response increased as the temperature was increased from 100 to 300 • C. As the temperature is increased, thermal activation results in the movement of more carriers in the form of electrons and holes onto the surface of MnO 2 nanowire, which increases the effective adsorption and oxidation of O 2 and analyte gas molecules [40,41]. At lower temperatures (<300 • C), the reactions between the adsorbed O 2 and HCHO gas molecules were sluggish due to insufficient activation energy [42]. At very high temperatures (>300 • C), the gas response was found to decrease, which was attributed to the enhanced rate of desorption of the adsorbed O 2 and HCHO gas molecules from the surface of the gas sensor according to previously reported results [43,44]. At the optimum temperature of 300 • C, a gas response of 19.37 s was observed. In addition to the aforementioned factors, the operating temperature for a gas sensor also depends upon multiple other significant factors, such as the grain size, porosity, and surface-volume ratio of the gas sensor material. The gas diffusion rate is still another factor affecting the operating temperature [45][46][47]. Factors related to gas sensing material can be controlled by the calcination conditions during synthesis. It has been reported that sensor materials with a smaller grain size show better gas sensing performances as compared to larger-grain-sized materials [48]. Small grain size favors the formation of large number of potential barriers at the grain boundaries, which results in significantly large resistance modulations. Thus, the grain size, porosity, and crystallinity of the α-MnO 2 nanowires can be controlled by either calcination temperature or calcination time. A thorough investigation is still required to optimize the grain size, porosity, and crystallinity for efficient adsorption of the analyte gases on the surface of the sensor materials; hence, by controlling the synthetic parameters, the working temperature can be optimized as well.
The gas response of the gas sensor showed a direct correlation with the HCHO concentration, which is indicated by the high value of the determinant coefficient (R 2 = 0.99544) for the linear fit plot obtained by plotting the gas response against the concentration of the HCHO (Figure 5b). The experimental results clearly show that the α-MnO 2 nanowire-based gas sensor produces HCHO concentration-dependent gas sensing results.
Long-term stability of the as-fabricated HCHO gas sensor was adjudged from the dynamic repeatable response-recovery curves for 200 ppm HCHO gas at an optimized 300 • C temperature. When exposed to HCHO gas, the sensor response increased rapidly and dropped to the baseline value immediately after the restriction of gas supply. This further demonstrates the repeatability of the fabricated gas sensor in the form of reversible response-recovery curves after each cycle (Figure 5c). The enhanced response-recovery curve for 200 ppm HCHO gas at an optimized temperature was examined for the estimation of the response and recovery times. Very short response (τ res ) and recovery (τ rec ) times of 18 s and 30 s, respectively, were observed (Inset Figure 5c). The cutting-edge status of the present work is evidenced in the very low response and recovery times of the fabricated α-MnO 2 nanowire-based gas sensor compared to other recently reported sensors for HCHO (Table 1).
Sensing Mechanism for the Fabricated Formaldehyde Gas Sensor
The gas sensing mechanism is based on variations in the chemo-resistive properties of α-MnO2 nanowires. Initially, when the surface of the α-MnO2 nanowires is exposed to air, O2 molecules undergo adsorption followed by reduction to various ionized oxygenated species under working temperature conditions by capturing electrons from the conduction band of n-type MnO2 semiconductor (Equations (2) and (3)). As a result, an outer electron depletion layer (EDL) is formed near the surface which has lower conductivity than the core region [16,59].
In the presence of HCHO, EDL thickness is further increased because of the reducing nature of HCHO [60]. Finally, HCHO molecules are oxidized to CO2 and H2O with the help of oxygenated ionizable species (Equations (4)-(6) (Figure 6)). For potential and practical applications, the long-term stability of the gas sensors is critical. The fabricated α-MnO 2 nanowire-based gas sensor showed consistent longterm gas response stability toward 200 ppm HCHO gas at 300 • C for seven consecutive days; an insignificant change in gas response was observed (Figure 5d). Gas-sensing parameters for gas sensors based on α-MnO 2 nanowires were found to be superior to the previously reported literature (Table 1). Overall, the fabricated α-MnO 2 nanowire-based gas sensor showed high HCHO gas sensor responses compared to some recently reported sensors [49][50][51][52][53][54]. Although the sensors [55][56][57][58] showed higher gas responses compared to our sensor, they have the limitations of very high response and recovery times.
Sensing Mechanism for the Fabricated Formaldehyde Gas Sensor
The gas sensing mechanism is based on variations in the chemo-resistive properties of α-MnO 2 nanowires. Initially, when the surface of the α-MnO 2 nanowires is exposed to air, O 2 molecules undergo adsorption followed by reduction to various ionized oxygenated species under working temperature conditions by capturing electrons from the conduction band of n-type MnO 2 semiconductor (Equations (2) and (3)). As a result, an outer electron depletion layer (EDL) is formed near the surface which has lower conductivity than the core region [16,59].
In the presence of HCHO, EDL thickness is further increased because of the reducing nature of HCHO [60]. Finally, HCHO molecules are oxidized to CO 2 and H 2 O with the help of oxygenated ionizable species (Equations (4)-(6) (Figure 6)).
Conclusions
α-MnO2 nanowires were synthesized using a single-step hydrothermal method and were explored as a dynamic chemi-resistive sensor material for the sensing of HCHO gas. Gas response behavior was analyzed as a function of temperature, concentration, and time. The as-fabricated sensor proved an excellent HCHO gas sensing activity with high gas responses, low recovery and response times, excellent repeatability, and a remarkable long-term stability of 7 days to 200 ppm HCHO gas at the low operating temperature of 300 °C. These properties of the fabricated gas sensor may thus make α-MnO2 nanowires suitable candidates for the fabrication of future gas sensors toward highly toxic gases.
Conclusions
α-MnO 2 nanowires were synthesized using a single-step hydrothermal method and were explored as a dynamic chemi-resistive sensor material for the sensing of HCHO gas. Gas response behavior was analyzed as a function of temperature, concentration, and time. The as-fabricated sensor proved an excellent HCHO gas sensing activity with high gas responses, low recovery and response times, excellent repeatability, and a remarkable long-term stability of 7 days to 200 ppm HCHO gas at the low operating temperature of | 3,854.2 | 2021-07-17T00:00:00.000 | [
"Materials Science"
] |
Measuring Similarity Between Discontinuous Intervals - Challenges and Solutions
Discontinuous intervals (DIs) arise in a wide range of contexts, from real world data capture of human opinion to α-cuts of non-convex fuzzy sets. Commonly, for assessing the similarity of DIs, the latter are converted into their continuous form, followed by the application of a continuous interval (CI) compatible similarity measure. While this conversion is efficient, it involves the loss of discontinuity information and thus limits the accuracy of similarity results. Further, most similarity measures including the most popular ones, such as Jaccard and Dice, suffer from aliasing, that is, they are liable to return the same similarity for very different pairs of CIs. To address both of these challenges, this paper proposes a generalized approach for calculating the similarity of DIs which leverages the recently introduced bidirectional subsethood based similarity measure (which avoids aliasing) while accounting for all pairs of the continuous subintervals within the DIs to be compared. We provide detail of the proposed approach and demonstrate its behaviour when applying bidirectional subsethood, Jaccard and Dice as similarity measures, using different pairs of synthetic DIs. The experimental results show that the similarity outputs of the new generalized approach follow intuition for all three similarity measures; however, it is only the proposed integration with the bidirectional subsethood similarity measure which also avoids aliasing for DIs.
I. INTRODUCTION
Interval-valued data is used in many applications to model uncertain and imprecise data in a simple and efficient way.In particular, continuous intervals (CIs)-bounded by left and right endpoints [1]-are often used.Discontinuous intervals (DIs)-having a sequence of continuous subintervals [2]-can arise in many real-world situations, such as hazard detection [3], fusion of sensor data observed in a non-continuous space [4], temporal reasoning [5] [6], and expressing natural language with temporal repetition [7] where similarity between the DIs are often assessed and applied.Moreover, in fuzzy set (FS) theory, the α-cuts of non-convex FSs also result in the DIs [8].In such cases, the similarity between non-convex FSs with the α-plane decomposition is dependent on the computation of similarity of DIs such as proposed in this paper.
Many similarity measures (SMs) have been proposed for CIs where Jaccard [9] and Dice [10] are the most popular ones.However, thus far, there is no specific SM for DIs that directly assesses their similarity.Instead, DIs are commonly converted into their continuous form (CIs) using some common approaches like interval addition [11], interval union [12], or a 'convexify' function [13] [14] and then the respective CI SM is applied to compute the similarity.However, the 'DI to CI' conversion involves the loss of discontinuity information of the DIs, changing the original meaning of the data, and thus affecting the accuracy of the similarity of the DIs. Figure 1 shows an example of this, where we consider two different pairs of DIs.The use of the 'convexify' function converts both cases into the same pair of CIs as shown in Fig. 2. As a result, we receive same similarity for both pairs of DIs by the Jaccard and Dice SMs, which goes against intuition in respect to the original DIs.Here, one way to avoid this type of information loss is to consider all possible combinations of the continuous subintervals within the DIs [15].
However, a further problem, particularly, the aliasing issue with common SMs, such as Jaccard and Dice has recently been identified [16], where the same similarity is returned for very different sets of intervals.A recently introduced SM for CIs using their overlapping ratios [16], also called bidirectional subsethood [17] has been shown to avoid aliasing for CIs.
In this paper, we propose a generalized SM for DIs which combines the bidirectional subsethood based SM [16] [17] with the idea of considering all pairs of continuous subintervals within the DIs.This generalized approach maintains discontinuity information in respect to DIs and uniformly handles the similarity computation of both CIs and DIs.We explore and contrast the behaviour of the resulting SM in respect to employing the well-known Jaccard and Dice SMs as part of the same framework, highlighting that such approaches, while avoiding information loss, still suffer from aliasing.The rest of this paper is organized as follows.In Section II, we present some background facts of CIs and DIs, subsethood, two common SMs for the CIs along with the bidirectional subsethood based SM [16] [17].Section III introduces the proposed generalized SM for the DIs and discusses its properties.We demonstrate this generalized SM using a set of synthetic examples of DIs and discuss the results in Section IV.Section V concludes the paper along with future work.Table I presents a list of acronyms and notation used in this paper.
II. BACKGROUND
In this section, we first define CIs and DIs, followed by a review of subsethood, as well as the Jaccard and Dice SMs.Finally, we briefly review the bidirectional subsethood based SM for CIs [16], [17].
A. Continuous Intervals
A CI is a set of real numbers characterized by a left and a right endpoints [1].Mathematically, it is represented as a = [a − , a + ] with a − < a + [11]1 .The cardinality, or equivalently, the size or width of a CI a is |a| = |a + − a − | [18].Three common approaches for representing multiple disjoint CIs with a single CI are: two bounded, non-empty CIs, then their addition, a+b = [a − + b − , a + + b + ] is also a bounded, non-empty CI [11].
results in a single bounded, non-empty CI [12].
• A 'Convexify' function: It takes two CIs a and b as inputs and returns the smallest CI that covers both a and b [13].
B. Discontinuous Intervals
A DI consists of a number of continuous subintervals (i.e., CIs) [2] 2 .Mathematically, it is represented as [4] where a is the DI and m is the number of its CIs such that a 1 < ... < a i < ... < a m , and a i is the ith CI of a such that a − i < a + i .Alternatively, a can be presented as Subsethood is a relation that expresses the degree to which one object is a subset of the other object.For two crisp sets, a and b, the subsethood is [19] where |a ∩ b| is the cardinality of the intersection of a and b, and |a| is the cardinality of a. S h is in between 0 and 1 where Equivalently, the subsethood between two CIs a and b can be defined as where a ∩ b is the size of the intersection between a and b and a is the size of a.
For the FSs 4 A and B, the degree of subsethood is [23] S where ) is a measure of the cardinality of the intersection of membership functions of A and B, and
D. Jaccard Similarity Measure
The Jaccard SM [9] between sets a and b is defined as the ratio of the cardinality of their intersection and the cardinality of their union, Beyond sets, the Jaccard SM is used to estimate the similarity for CIs or sets of CIs such as employed for example in data fusion [24], [25] and that of fuzzy sets [26].For comparing two CIs a and b, the Jaccard SM is expressed as where a ∩ b is the size of the intersection between a and b and a ∪ b is the size of the interval segment(s) covering them.When a and b are completely overlapped, S J a, b = 1 and when they are non-overlapped, S J a, b = 0. Again, for the FSs A and B on the discrete universe of discourse X, the Jaccard similarity is extended as [27] S where μ A (x i ) and μ B (x i ) are the membership grades of x i in A and B respectively.Equation ( 6) gives 1 for identical FSs and 0 for disjoint FSs.Note that the Jaccard SM has been further extended for interval-valued [28] and type-2 fuzzy sets [29]; though, this is not discussed further here.
E. Dice Similarity Measure
The Dice SM [10] between sets a and b is the ratio of the cardinality of their intersection and the average of their cardinality, expressed as In [24], [25], the Dice similarity is used along with the Jaccard similarity for the CIs.As for sets, the Dice similarity for CIs a and b is While less frequently used for FSs than Jaccard, the Dice SM is for example used in [30], [31] for trapezoidal FSs in the context of solving multi-criteria decision-making problems.
F. Bidirectional Subsethood Based Similarity Measure for Continuous Intervals
A new SM for the CIs was introduced in [16] which uses the reciprocal subsethoods [17] or overlapping ratios [16] of a pair of CIs for capturing their similarity.This measure for two CIs a and b [16] [17] is where is a t-norm 5 .We can rewrite (9) using (2) as, This SM directly captures any changes in the size of CIs and is sensitive to the size of their intersection when one CI is a subset of another in a pair.Further, it is always within [0,1], and is bounded above and below by the Jaccard and Dice SMs respectively for the minimum t-norm.
In the next section, we introduce a generalized measure where we can apply any of the S J , S D or S S h SMs for estimating the similarity between CIs or DIs by meeting their continuity or discontinuity property.
III. PROPOSED GENERALIZED SIMILARITY MEASURE FOR DISCONTINUOUS INTERVALS
In this section, we propose a generalized SM for computing the similarity between two DIs by comparing all possible pairs of their continuous subintervals.As stated, this generalized SM is equally applicable for the CIs.First, we present the proposed generalization and then demonstrate its major properties.We note that while the proposed approach is computationally expensive, we focus on the quality of the resulting similarity assessment only in this paper.We have already made progress on making the approach computationally more efficient, but considering the constraints on manuscript size, we will focus on this in our future publication.
A. Proposed Generalized Similarity Measure
In the proposed generalization, we use the basic notion that as a DI contains one or more continuous subintervals, comparing two DIs is analogous to systematically comparing their subintervals.With this intent, we first determine all possible pairs of subintervals within the DIs and compute their similarity.Later, we aggregate all these similarities to determine overall similarity between the DIs.Equation ( 11 where S a i , b j computes the similarity between subintervals a i ∈ a and b j ∈ b of each pair {a i , b j } using any of the three SMs (S J , S D , and S S h ).max(m, n) is the maximum number of pairs that can arise from the comparison of a single subinterval.The max(m, n) operator in the normalization step guarantees a maximum similarity of 1 -achieved when two DIs are identical.While other operators could potentially be explored, the max(m, n) operator provides intuitive behaviour for the similarity measure.
Remark 1.When DIs possess only a single CI, the formulation for S at (11) will return the original formulation for CIs.
Example 1.We consider an example in Fig. 3 disjoint.Figure 3 shows the similarity for all pairs using S J , S D and S S h (with the minimum t-norm) SMs.Hence, the overall similarity between a and b using (11) along with S S h SM is, S a, b = (e) All of S J , S D and S S h measures are transitive [16], which implies that the S measure is also transitive.
IV. DEMONSTRATION
This section presents the behaviour of the proposed generalized approach based on the bidirectional subsethood based SM (S S h ) along with the Jaccard (S J ) and Dice (S D ) SMs for the DIs.Herein, we conduct two separate sets of experiments with different synthetic examples, each designed to facilitate intuitive understanding of the behaviour of the approaches.
With the first synthetic dataset, we gradually decrease overlapping between the subintervals for a pair of DIs to see how smoothly the similarity alters from 1 to 0. In particular, the change in similarity results is investigated for a gradual change in the overlapping of subintervals.With the second synthetic dataset, we change the number of subintervals and their degree of overlapping.In particular, we expect to see changes in the similarity due to a rise in the number of subintervals and their potential (lack of) overlap.In all experiments, we use the minimum t-norm for the S S h SM as it is the most common in practice.Further, all of these experiments are implemented using Java on an Intel(R) Core(TM) i3-4005U series based machine running at 1.70 GHz with 8GB RAM.
A. Synthetic Dataset-1
We consider a number of scenarios for a pair of DIs (a and b) where each of them includes two subintervals.In each scenario, we vary the degree of overlap between subintervals of a and b to explore how the generalized approach (S) responds with the respective SMs S J , S D , and S S h and how smoothly the similarity results change from 1 to 0. We keep a unchanged in all scenarios but shift the subintervals of b consecutively by a factor of 25%.In Fig. 4(b)-(e), we gradually shift the rightmost subinterval [6,8] of b by a factor of 25% till its only intersection is the right-end point of the subinterval [6,8] of a.In Fig. 4(f)-(i), we further shift the leftmost subinterval [1,3] of b by a factor of 25% until its only intersection is the right-end point of the subinterval [1,3] of a.The results in Fig. 5(b) show that the initial similarity between a and b is 1 from the S measure with all three SMs (as they are identical in Fig. 4(a)).Their similarity gradually decreases to 0.50 when the second subinterval [6,8] of b is gradually shifted by the factor of 25%.The similarity drops further and gradually reaches to 0 when the first subinterval [1,3] of b is also repeatedly shifted.Although one would intuitively expect that the similarity between the DIs should decrease proportionately as to the rate of change in their overlapping, Fig. 5(b) shows a proportionate decline in similarity results by both S S h and S D SMs, while the S J SM exhibits Scenario 2.5 -In Fig. 6(e), we add one more subinterval [9,10] to a as designed in Scenario 2.4 (Fig. 6(d)), thus setting a as [0, [3,7], [9,10] , while b remains the same.As this new subinterval [9,10] of a is disjoint from all subintervals of b, adding it should decrease the similarity between a and b as compared to the Scenario 2.4 (Fig. 6(d)).The results show that the S with all three SMs yielded expected similarity.
Scenario 2.6 -In Fig. 6(f), a remains the same but b is changed by adding one more subinterval [9.7, 10].Thus, b is now [0, 2], [5,9], [9.7, 10] .The new subinterval of b has 30% overlap with the subinterval [9, 10] of a.Therefore, the overall similarity between a and b is expected to be higher than that of the Scenario 2.5 (Fig. 6(e)).Again, we receive higher similarity results from the S with all three SMs.
In summary, the S with S S h and S D SMs follow a proportionate decline in similarity results as we gradually move the subintervals of a pair of DIs from a complete overlap to disjoint positions.Contrarily, the S with S J SM yields slightly higher than proportionate decline in the similarity results.Importantly, the S with S J and S D SMs still exhibit aliasing, whereas the S with S S h SM is sensitive to changes in overlap and thus avoid it.
V. CONCLUSION
In this paper, we have proposed a generalized approach to computing the similarity of DIs by integrating the bidirectional subsethood based SM [16] [17] with the strategy of considering the similarity of all continuous subinterval-combinations within the DIs.The new generalized SM is equally suitable for CIs and DIs.It does not require conversion/approximation of DIs to CIs, thus avoiding changes to the original data.We have compared the performance of the generalized approach using the bidirectional subsethood SM along with the Jaccard and Dice SMs for different synthetic pairs of DIs.The results show intuitive behaviour of the resulting generalized approach while highlighting that only by using the recently developed bidirectional subsethood similarity as part of the generalized approach, can avoid the aliasing issue.
In our generalized approach, we always consider all possible pairs of subintervals.As a result, an increase in the number of subintervals within the DIs leads to the increase in the number of similarity calculations.In particular where DIs have many/all disjoint subinterval pairs, such a 'brute force' approach results in substantial execution time.To mitigate this, in the future, we will integrate this generalized SM with Allen's theory [5] for reducing the number of similarity calculations and overall execution time.Further, we plan to use it for assessing similarity of non-convex FSs.We also aim to apply it in generating data-driven fuzzy measures from DIvalued data [33] and use it with fuzzy integrals for aggregation.
Fig. 3 :
Fig.3: Using S J , S D , and S S h SMs, the similarity results of all combinations of subintervals within a pair of DIs-one with three and the other with two subintervals.
1 max( 3 , 2 )
× 0.8333 = 1 3 × 0.8333 = 0.2778.In a similar manner, the total similarity between a and b with S J and S D SMs are 0.2778 and 0.3889, respectively.Theorem 1.The proposed generalized approach with S J , S D and S S h SMs satisfies all common properties of a SM for the DIs a, b, and such that: (a) 0 ≤ S a, b ≤ 1 (boundedness); S a, b = S b, a (symmetry); (c) S a, b = 1 ⇐⇒ a = b (reflexivity); (d) S a, b = 0 ⇐⇒ a b are disjoint (disjointness); (e) S a, b ≥ S a, c when a ⊆ b ⊆ c. (transitivity).Proof: Consider a = a 1 , ..., a m , b = b 1 , ..., b n , and c = c 1 , ..., c p .(a) S a, b involves the S J , S D or S S h SMs to compute similarity for all pairs of the subintervals a i ∈ a and b j ∈ b.All S J , S D and S S h SMs are bounded by 0 and 1 [16], i.e., 0 ≤ S a i , b j ≤ 1, ∀a i , b j .Hence, the mean of all such similarities is again within 0 and 1, implying S a, b ∈ [0, 1].(b) All of S J , S D and S S h measures are symmetric [16], thus making the S measure symmetric too.(c) If a = b, it means that both a and b have an equal number of m subintervals and each a i is identical to each b i , i.e., a i = b i , 1 ≤ i ≤ m.Among all subinterval pairs, m pairs have identical subintervals and the rest have disjoint subintervals.It implies that m pairs receive a similarity of 1 and the rest have a similarity of 0. Hence, the similarity between a and b is, S a, b = 1 max(m,m) × m = m m = 1.Thus, S(a, b) = 1 means that a and b are identical DIs.(d) If a and b are disjoint, it means that no subinterval of a is overlapping with any of subintervals of b, i.e., a i ∩ b j = 0, 1 ≤ i ≤ m and 1 ≤ j ≤ n.In other words, all subinterval pairs consist of disjoint subintervals, thus receiving a similarity of 0. Hence, the similarity between a and b is, S a, b = 1 max(m,n) × 0 = 0.
Figure ( 5 )
(a) presents in detail the shifting of subintervals of b for all scenarios, and Fig. (5)(b) graphically exhibits the similarity results using all three SMs.
TABLE I :
Acronyms and Notation Similarity results for the pairs of DIs with an increasing number of subintervals and varying degree of overlap.Note: SMs S J and S D return identical results for scenarios 2.3 and 2.4, i.e., they are subject to aliasing -only S S h captures the change in the respective DIs and thus avoids aliasing. | 4,814.6 | 2019-06-01T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
A Multifactorial Framework for Short-Term Load Forecasting System as Well as the Jinan’s Case Study
Accurate and reliable short-term electric load forecasting (STLF) plays a critical role in power system to enhance its routine management efficiency and reduce operational costs. However, most of the existing STLF methods suffer from lack of appropriate feature selection procedure. In this paper, a multifactorial framework (MF) possessing the potential to contribute more satisfactory forecasting results and computational speed is proposed. Moreover, a graphical tool for easy and accurate computation of day-ahead load forecast is implemented via MATLAB App Designer. Firstly, we choose the candidate feature set by analyzing the raw electricity consumption data. Then, partial mutual information is adopted as criterion to eliminate these irrelevant and redundant ones among candidate features for the purpose of reducing the input subset and retaining these most relevant. At last, the selected features are used as the input of the well-established artificial neural network (ANN) model optimized by genetic algorithm and cross validation to implement prediction. The MF is applied for the load data measured from 2016 to 2018 in Jinan, and then some competitive experiments and extensive simulations are carried out and results indicates that the ANN-based model with selected features significantly outperforms other alternative models with single features or a few of features regarding mean absolute percent error. In addition, the parallel structure of ANN and the lower dimension of the input space enable the model to achieve faster calculation speed.
I. INTRODUCTION
Short-term electric load forecasting (STLF) is an important issue for the planning and management operation of power system and as the basis of energy transaction and decision in the competitive energy market. The accuracy of forecasting result is a very crucial factor to make most predictions for future demands from energy sector [1], [2]. The generator can be run at the lowest cost when the load demand is known in advance. As said in [3], a small increase in load forecasting accuracy will save the company millions of dollars. However, load demand is also a non-linear and non-stationary process affected by various factors, which complicates forecasting work [4]. First, the load series is highly complex and exhibits several levels of seasonality: the load at a given hour is The associate editor coordinating the review of this manuscript and approving it for publication was Yang Li . dependent not only on the load at the previous hour, but also on the load at the same hour on the previous day, and on the load at the same hour on same day in the previous week. Secondly, there are many important exogenous variables that must be considered, specially weather-related variables, such as temperature and humidity.
Many well-known approaches have been proposed for STLF to continue improving forecasting performance in past decades. The existing STLF methods can be of three types, the first one is based on the statistical methodology, another involves artificial intelligence technology, and the rest is the hybrid model.
A. STATISTICAL FORECASTING MODELS
Statistical methods often use historical data to look for the correlation between exogenous factors mentioned above and electric load. In the early stages of STLF, statistical methods were extensively employed, such as regression models [5], [6], exponential smoothing [7], autoregressive moving average model (ARMA) [8] and autoregressive Integrated moving average model (ARIMA) [9]. These statistical approaches with lower calculation are relatively easy to be established and implemented. However, these approaches are difficult to achieve substantial improvements owing to their theoretical definitions, which largely limit their forecasting ability and cannot receive the expected forecasting accuracy [10].
B. ARTIFICIAL INTELLIGENCE FORECASTING MODELS
Due to the superior nonlinear computing capability, artificial intelligence (AI) techniques (e.g. artificial neural networks (ANN) [11]- [13], fuzzy logic models [14], and support vector machines (SVM) [1], [15]) have been applied to cope with the STLF problem. The most representative technique is ANN, which is suitable for STLF because of its ability of nonlinear mapping and generalization. In [13], by comparing several forecasting methods, including both large neural networks and conventional regression-based methods, it is found that good performances for the large neural networks are not only with the smallest mean absolute percentage error value (MAPE) (2.35-2.65%), but also with a lesser spreading of the errors. A combination of fuzzy time series with seasonal autoregressive fractionally integrated moving average is proposed in [14]. The analysis of the results indicate that the proposed approach presents higher accuracy than any other counterpart. Chen et al. [15] proposed a new support vector regression (SVR) based STLF approach with the ambient temperature of two hours as input variables and electric loads from four typical office buildings in China. The simulation results confirm that the newly model significantly receives the highest forecasting performance and stability. However, these proposed artificial intelligence models also have some disadvantage. It is often subjectively determined the network structure, and it is easy to fall into the local optimum during the training process.
C. HYBRIDE FORECASTING MODELS
In recent years, various hybrid or combined models have also been developed for improving the forecasting accuracy of STLF [16], such as (1) the hybridization or combination of these AI models with each other [17]; (2) the hybridization or combination of these AI models with statistical models [18]; (3) the hybridization or combination of these AI models with superior evolutionary algorithms [19], [20]. Among them, the third type is widely applied in the field of STLF. For instance, aiming at improving the accuracy and speed of STLF, bacterial colony chemotaxis is introduced to optimize the parameters of least squares support vector machine (LS-SVM) in [19]. The simulation results show that the proposed approach can achieve higher forecasting accuracy and faster speed than ANN and LS-SVM with gird search. In addition, Zhang et al. [20] propose a novel load forecasting framework by hybridizing self-recurrent SVR with the variational mode decomposition and improved cuckoo search algorithm. Two real-world datasets are used to examine that the performance of the proposed forecasting model significantly outperforms other alternative models.
D. FEATURE SELECTION
Most of the existing load forecasting work focuses on the improvement and reasonable combination of the existing models. However, the selection of input features for STLF usually depends on the daily experience and speculation of decision makers [21]. As we all known, the change of power load demand is affected by internal and external factors. on the one hand, the load series is highly complex and exhibits several levels of seasonality. On the one hand, there are many important exogenous variables that must be considered, specially weather-related variables. Feature selection is a key step in building reasonable forecasting model and has been proved its importance in many research literatures [22]- [24]. Therefore, decision makers should not only consider the selection of appropriate prediction model, but also determine important internal and external input variables [2]. Of course, the impact of these two kinds of influence factors on load varies in different areas. For example, whether atmosphere related factors have significant influence on load depends on actual regional and climatic conditions. But we all agree that temperature is the most important weather effect. However, the complexity lies in the fact that the feature algorithm is applied based on the following considerations: simplicity, stability, number of reduced features, classification accuracy, storage and computational requirements.
The existing methods of feature selection mainly include two categories: filter method and wrapper method. The filter method selects feature subsets based on evaluation criteria like mutual information (MI), correlation analysis (CA), principle component analysis, and numerical sensitivity analysis [21], [25]- [28]. As a result of MI measures the arbitrary dependence between random variables, it is suitable for the 'information content' evaluation of features in complex classification tasks. Therefore, MI is not just widely used in feature selection of load forecasting, also in various fields [29], [30]. For example, MI was adopted to select a subset of the most relevance and non-redundant inputs among the candidates for proposed neural network forecasting model in [25]. And experiments show that the neural network model based on feature selection is better than any other model. Different from filter methods, wrapper methods select the appropriate feature subset from the candidates based on the forecasting accuracy. So, meta heuristics algorithm such as the BinJaya algorithm [26], simulated rebounding algorithm [28] and simulated annealing [29] have been developed to improve the search ability, especially when there are many candidate features. For instance, a novel BinJaya algorithm with kernelized fuzzy rough sets is proposed in [26] to select an optimal feature subsets from the entire feature space constituted by a group of system-level classification VOLUME 8, 2020 features extracted from phasor measurement units data. The method can effectively solve the feature selection problem of pattern-recognition-based transient stability assessment. Recently, a hybrid filter-wrapper approach is proposed to complement wrapper methods and filter methods with their inherent advantages [27]. Firstly, the filter method is used to eliminate the irrelevant and redundant features to form an input subset of dimension reduction. Then the wrapper method is applied to the dimension reduction subset to obtain a set of small features with high prediction accuracy. Through the hybrid method, appropriate feature variables are selected as the input of SVR model. Results also confirm that the proposed hybrid filter-wrapper model has better performance than other existing models. In [36], a prediction model combined with periodic and non-periodic features is proposed and a case study is conducted in Qingdao. Some regular features are abstracted via the spectral analysis as crucial predictor variables, and weather factors are filtered out by mutual information method as two important weather factors to improve the prediction accuracy. The comparative results of five different experimental results demonstrate the model considering internal characteristics of load data and external important influences and non-periodic factors outperform the others with one single or few factors and more suitable for Qingdao.
E. CONTRIBUTIONS OF THIS PAPER
In this paper, we propose a multifactorial framework (MF) of ANN based on data analysis and the filter method, which combines feature selection procedure with forecasting model construction. Firstly, raw electric load series between 2016 and 2018 of Jinan city is analyzed in detail to developed some candidate features.Then, the partial mutual information (PMI) based filter method is applied to eliminate irrelevant and redundant features for the purpose of reducing input subset. After above two steps, the PMI values corresponding to some selected features are used as the initial weights of the input nodes of the ANN prediction model. At last, some comparative experimental study are carried out to confirm the prediction performance of the ANN-based model with selected factors in Jinan. And data 2016 to 2017 has been as the training set and data from 2018 is used to examine the performance of the model on out-of-sample data.
The leading contributions of this paper are summed up bellow: (1) A MF for STLF is proposed, which simultaneously consider feature selection and modeling procedure. The purpose of this MF is to reasonably adjust the predictors and establish forecasting models according to the actual situation of the area to be investigated, thereby achieving the satisfied forecasting accuracy and faster speed.
(2) Several detail and organized analysis forms including spectral analysis, box-plot analysis and so on are adopted for finding out internal movement law among load series and external factors influencing electric demand.
(3) To overcome the subjectivity when constructing the structure of neural network, genetic algorithm is applied to optimize initial weights and thresholds and cross-validation to determine the number of hidden layers and corresponding neurons.
(4) Five comparative experiments have been designed and implemented based on the climate, topography and economic development, the simulation results are analyzed in different forms to examined the applicability of the MF forecasting model in Jinan.
(5) A graphical tool for easy and accurate computation of day-ahead system electric load forecast with MATLAB App Designer is developed.
F. OGANIZATION OF THIS PAPER
The rest of this paper is organized as follows. In Section 2, we elaborate on the proposed multifactorial framework for load forecasting. In Section 3, details about the experimental setting such as dataset, candidate features, accuracy measures, and selected counterparts for performance testing can be found. And the experimental results is presented in Section 4. Finally, discussion and conclusions are shown in Section 5 and Section 6, respectively.
II. THE PROPOSED MULTIFACTORIAL FRAMEWORK
This section describes the proposed multifactorial framework, which is mainly composed of three parts including raw data analysis, feature selection based the filter method and take selected features as input of ANN-based forecasting method optimized by genetic algorithm and cross validation.
A. THE PMI BASED FILTER METHOD
Compared with the wrapper method [27] for feature selection, the filter method has a faster calculation speed and lower cost. Nonlinear relationship is the common problem in STLF modeling. The model based on the linear correlation between two variables almost cannot detect and quantify the nonlinear relationship well. Sharma proposed an input determination method based on the PMI to overcome the limitation of the correlation coefficient in selecting appropriate model inputs [31]. The PMI criterion is applied to identify the optimal combination of rainfall predictors among selected ENSO indices. It can be regarded as a model-free method because it can fully capture the linear or non-linear correlation between two variables and does not require any major assumptions about the basic model structure. In fact, PMI criterion is an extension of mutual information (MI) concept [32]. MI is a common criterion for measuring the correlation between variables and has widely used for input feature selection. However, a major issue of redundancy has raised because MI does not account for the interdependency among candidate variables directly.
In order to overcome the problems mentioned above, PMI is adopted to identify candidate features in this paper. PMI value between output variable Y and input variable X, for a set of pre-existing inputs Z, can be given by where E[·] denotes the expectation operation. f X , f Y and f X ,Y are respective univariate and joint probability densities estimated at the sample data points. The variables x and y only contain the residual information after the effect of the pre-existing set of inputs Z has been taken into consideration by using the conditional expectations. In feature selection based on PMI, the input variable with the highest PMI value is added as a new predictors. Detailed process of PMI can be found in [33]. Here, we briefly outline the PMI based input feature selection procedure for our proposed approach: 1) Initialize: Set X to be the candidate inputs, Z to the predictors set of inputs,Y to output; 2) Estimate the PMI scores: Compute the PMI (X,Y) between output variables Y and each of the variable in candidate set X; 3) Input select: Identify the input x with the highest PMI in step 2; If this PMI score is higher than the 95 th percentile randomized sample PMI score, add x to the predictors set, and remove it from X. If it is not significant or there is no input in X, go to Step 5.; 4) Recurrent: Return to Step 2; 5) Stop once all significant inputs have been selected.
B. ARTIFICIAL NEURAL NETWORK
Artificial neural network (ANN) are the mathematical tools inspired by the way the human brain processes information. ANN has the high degree of parallel structure and parallel implementation capabilities, and also has the ability to find optimal solutions at high speed. The basic unit of ANN is the artificial neuron, schematically represented in Figure 1. Neurons receive information from multiple input nodes and process it internally to obtain output results. This process typically consists of two phases, first combining the input information linearly and then using the result as an argument to the given activation function [25]. The activation functions represent the nonlinear relationship between the inputs and outputs, which include Sigmoid, Relu functions and so on.
The specific calculation process is as follows: where y i is output, indicates activation functions, ω denotes weight, and x i is input, b j is bias. ANN used in various fields is usually composed of many neurons, a typical 3-layer neural network with two hidden layers and one output layer shown in Figure 2. Each layer consists of a set of neurons connected by weight which are randomly initialized and then adjusted by optimization algorithms (e.g. gradient descent and levenberg-marquardt) The network iteratively adjusts its parameters to reduce the error between the predicted output and the actual output until the error is minimized. The most classic backpropagation neural network (BPNN), that is, signal forward propagation and error backward propagation is applied for forecasting in this paper. And gradient descent approach that the weights are required to be corrected in the direction of the fastest gradient drop as the algorithm of weight update among network. The general calculate steps of weight update are as follows: Step 1. The error between the predicted valueŷ and the actual value y is calculated and propagated back.
Step 2. Adjust the original weight according to the error received in the back propagation process.
Step 3. After the connection weights of each layer of neurons are modified, they enter the next cycle. Then input the next new sample and use the modified weight to forward propagation to get the predicted value. Return to step 1 until VOLUME 8, 2020 the error value reaches the specified threshold and end the cycle.
There are some problems that cannot be ignored for neural network, such as lowly training speed, easily to fall into local optimum and strong subjectivity. Therefore, on the one hand, genetic algorithm (GA) is used to optimize the initial weights and thresholds of ANN to improve the training speed and performance, as shown in Figure 3. GA is a random search method that draws on the biological evolution law of survival of the fittest. It is also a search heuristic algorithm used to solve optimization in the field of computer science artificial intelligence. On the other hand, the subjective problem of ANN construction is solved by using cross-validation to select the appropriate number of neurons and hidden layers. In the last ten years, ANN has been widely used to predict power load. It is also very well suited for it, for at least two reasons.First, it can approximate numerically any continuous function to the desired accuracy. Second, it is the data-driven approach. That is, the ANN is able to automatically map the relationship between them when given a sample of input and output vectors. However, the prediction accuracy and training speed of neural networks often depends on whether the appropriate input variables are selected. Electric load demand is affected by many factors, such as weather, economy and special days. Feature selection can reduce the dimension of the input space without sacrificing classification performance. Therefore, a lot of study work begins to focus on features selection before modeling.
III. EXPERIMENT SETTINGS A. DATA DESCRIPTION
Raw electricity load data used in this study are selected from 0:00:00 on January 1, 2016 to 23:00:00 on December 31, 2018 in Jinan, China, which are collected at hourly time interval. Data from 2016 to 2017 has been as a training set and data from 2018 is used only for forecasting to test the performance of the model on out-of-sample load data. Before modeling the dataset, some preprocessing procedures have been done to enable raw data to become more practical. For example, several missing load values are supplemented by linear interpolation because of that there is only one missing point in every breaking interval. Then we use Pauta standard [34] to identify the abnormal points and treat them as missing values. The pre-processed data is referred to as the original data in what follows, as shown in Figure 4. Figure 4 illustrates hourly loads from January 1, 2016 to December 31, 2018 in Jinan. The blue curve represents original hourly loads. And daily average load is marked by red curve. It is obvious that load demands have multiple seasonal patterns including the daily and weekly periodicity, especially daily periodicity. The weekly periodicity is only evident in March and April when the power consumption is relatively stable. At the same time, load demand decreased significantly at the weekend. In addition, load levels on national holidays which identified by green curve are lower than on weekdays. This leads us to conclude that load demands are also affected by calendar days. As we all known, holiday load forecasting is a very challenging task because these atypical load conditions are not only rare, but also load variation pattern quite different from normal working days which is caused by the great change in human activities [35]. Therefore, in this study, for the sake of simplicity, we will consider holiday to be similar to weekend, that is weekend and holiday identified by nonworking day and other days by working days. Overall, the power load increases slowly year by year with an average load of 3080.3 MW in 2016, 3096.6 MW in 2017 and 3226.9 MW in 2018, which is consistent with the economic development of Jinan in the past three years. Although the gross domestic product (GDP) of Jinan increases every year, the growth rate is not large at about 7.8%. This also shows the close relationship between the regional GDP level and electricity consumption.
1) SPECTRAL ANALYSIS
According to the research on the internal mechanism of power load data, we know that the load demand is cyclical, the power spectral density of the original load data from 2016 to 2018 is calculated by the Welch's method for determining the strength of different periodic motion in this time series and shown in Figure 5. The results show that there are three distinct peaks including diurnal, weekly and semidiurnal frequency signal. Among them, the diurnal frequency signal is the dominant component, which is also consistent with the above analysis. Besides that, weekly periodicity also exists in this power load series which is caused by the alternation from working day to nonworking day. It also can be seen from the ordinate of the graph that the intensity of the 203090 VOLUME 8, 2020 periodic motion of the week is obviously different from that of the diurnal. Consequently, the next study will focus on the primary one.
2) AVERAGED HOURLY LOAD ANALYSIS OF EACH DAY
Based on the above analysis, we notice that the daily period variation of load demand is the most remarkable. Therefore, data for each day in 2016 and 2017 are averaged and shown in Figure 6. according to the figure, it is obvious that the load varies from hour to hour following the consumers' Behavior, and the curves of load data have similar shapes and magnitudes in both years, which indicates that it is necessary to consider daily periodicity in STLF. Moreover, from the change trend of the curve, it can also be inferred that there is a certain relationship between the load of a certain hour and that of the previous several hours.
3) AVERAGED HOURLY LOAD ANALYSIS OF EACH WEEK
Since the weekly periodicity of load demand is also relatively remarkable. Thus, data for each week in 2016 and 2017 are averaged and shown in Figure 7. It is clear that power load on Saturday and Sunday is significantly less than that on weekdays, especially on Sunday. For this problem, a new input feature is added to identify whether the predicted time point belongs to a weekday or weekend, including 0 for non-working days and 1 for working days. What is surprising is that, although the lowest levels of electricity demand in two years are on Sundays, the performance on other days was somewhat different. This is obviously different from average hourly load analysis result of each day in Figure 6. On the VOLUME 8, 2020 one hand, the weak periodic motion of the cycle obtained by power spectrum analysis is further verified. On the other hand, there may exist a relationship between the load for a given hour on a given day and the load for the same hour in the previous weeks. Figure 3, Figure 8 shows the distribution of data in a more abstract way. The blue dotted line represents the average annual load value. The red '+' represents the monthly average load value. It can be seen that compared with summer, electricity consumption level in spring and autumn are more concentrated. And in July and August with the highest temperature throughout the year, the difference between the maximum and minimum load demand is the largest. The existence of this phenomenon will undoubtedly make the prediction of summer become more complicated.
5) CANDIDATE FEATURES
Considering the daily and weekly periodicity characteristics of hourly loads, the hourly load values of the hour of day, the load on the same hour in the previous seven days, previous day's average load, week of day are selected as the candidate input features of the forecasting model. For the difference between the working day and non-working day load levels, a flag indicating the if it is a weekend/weekday on forecasting time point has had been adopted. On the one hand, there is a common agreement that the temperature is regarded the most important weather influence [25]. On the other hand, due to the complex and diverse topography, which mountainous in the south and Yellow River in the north, and temperate continental monsoon climate, temperature and humidity are two essential factors for load demand in Jinan. As a result, temperature and humidity variables are added for each forecasting time interval, plus the temperature on the same hour in the previous seven days and previous day's average temperature. By this way, we can consider all the historical data that may have influence on the predicted hour t. Then, the candidate set for model input is summarized as follows: Candidate-inputs(t) where t is the time interval index. As hourly load forecasting is studied in this paper, t is on an hourly basis. Where L(t-i) and T(t-i) indicates the lagged load and temperature of time interval t-i respectively, Day(t) refers to the day of the week, which is marked by the numbers from 1 to 7. Calendar indicators of hourly are denoted by Hour(t), which is marked by the numbers from 1 to 24. W(t) denotes a flag indicating the if it is a weekend/weekday of t, including 0 for weekends and 1 for weekdays. It is noted that all the public holidays are considered as weekends marked by 0. L-Average(t) and T-Average(t) indicates the average load of previous day and temperature, respectively. In summary, there are 20 input features in candidate set Candidate-input(t). By removing the candidate features with low relationship between load demand to reduce the size of the input feature set, the prediction engine is able to better learn the input and output mapping function of the process, so as to improve the prediction accuracy and calculate speed. The correlation with candidate inputs above and outputs can be calculated by the filter method based on PMI mentioned in section 2. These candidate features which the PMI value higher than the corresponding 95th percentile value are retained. Feature subset after reduction consist of hour of day, load from the same hour in the previous day, previous day's average load, day of week, load from the same hour and same day from the previous week, a flag indicating if it is a weekend/weekday, the temperature on the forecasted day and previous day's temperature, humidity on the forecasted day. Selected inputs are shown as follows: Selected-inputs(t)
C. PERFORMANCE METRICS
In order to properly evaluate the prediction performance of the proposal, mean absolute percent error (MAPE) accuracy measures method is adopted in this study. The definitions of the method are shown as follows: where N is the forecasting horizon. This study focus on the day-ahead short-term load, then the number of forecasting periods N equals 24. And y i andŷ i represent the actual and predicted loads at period i respectively. MAPE is a widely used metric that measures the percentage error between actual and predicted values. The smaller the MAPE value, the closer the predicted value is to the actual value, that is, the better the prediction performance of the model is.
D. SELECTED COUNTERPARTS(FOR COMPARISON) AND IMPLEMENTATIONS
To confirm the prediction performance of the proposed feature selection for STLF using ANN, four comparative experiments have been used as counterparts for comparison purposes. These five counterparts are abbreviated as follows: (1) D-ANN: ANN forecasting model considering only daily periodicity.
(3) DWN-ANN: ANN forecasting model considering daily, weekly periodicity and working day/non-working days.
(5) PMI-ANN: ANN forecasting model with all selected input features.
As a basic experiment, D-ANN model only considers daily periodicity due to daily periodicity is the most remarkable for load demand in Jinan. For all the above-mentioned methods, ANN has been applied as the forecasting model. The detailed input variables and corresponding experiment names for each model are listed in Table 1.
IV. EXPERIMENT RESULTS
Electricity load data measured from 2018 is used for to test the performance of the proposed approach. All simulations are executed in MATLAB environment on the personal computer platform with 2 Intel Core dual-core CPUs (2.4 GHz) and 4 GB memory in Windows 10 environment.
A. COMPARISON FOR EACH HOUR OF THE DAY
The results of five comparative models for forecasting hourly load in 2018 are presented in Figure 9. Which model is represented by the corresponding experiment can be found in Table 1. As can be seen from the figure, the forecasted values of any model are relatively close to the actual ones, which also shows that the basic experiment selected in this paper is reasonable. However, it is undeniable that the predicted value between 3pm and 6pm is very different.
B. COMPARISON FOR EACH DAY OF THE WEEK
The comparison between actual and forecast average load using different models for each day of the week is presented in Figure 10. Among them, the black curve represents the actual load, and the meaning of the other curves is the same as that shown in Figure 9. For sake of simplicity, it will not be repeated. From the results, the following conclusions can also be drawn: (a) Compared with D-ANN and DT-ANN models without considering week periodicity, the simulation results of other models are relatively close to the actual values. (b) On Saturday and Sunday, DWN-ANN performed best among five models. In addition, it can be seen from the figure that there is a large gap between the simulation results on Saturday, which is caused by the shift from working days to non-working days. (c) It is worth noting that PMI-ANN performs better on weekdays and Sundays, and the error on Saturdays is not too large. Unlike DT-ANN, although the accuracy rate is high on non-working days, it is too low on working days to meet the daily forecast requirements of the power system. Through comprehensive consideration of the above results, PMI-ANN is much better than any other model in the actual forecasted week of 2018 in Jinan. Figure 11 shows the MAPE error of all different models for every month in 2018, it is interesting to note that DT-ANN and PMI-ANN considering temperature as a input present significantly lower MAPE, especially during hot summer and cold winter, which also shows that the temperature is more important in forecasting the load than other lagged load. This is mainly due to the climate and topography of Jinan, which will be described in detail in the discussion section.
C. COMPARISON FOR EVERY MONTH
In addition, we can conclude that PMI-ANN has a lower MAPE value in all comparison models from the observation and analysis of the annual data. And, there is the lowest error value in the eight months of the year. It is undeniable that the error value in February is higher than other models, which is caused by national holidays.
We can also observe that the MAPE error of any model in February and summer is significantly larger than that in other months. However, the errors of all models are relatively low and similar throughout the spring. As is known to all,Chinese lunar spring festival, which is stipulated as a national legal holiday and whose holiday is set at the end of January and the beginning of February, is the most important traditional holiday of the year for Chinese people. And summer is the time when most of students have their holidays. Besides, the temperature rises rapidly from spring to summer and high temperatures in summer are a unique characteristic of Jinan. Therefore, load trend of these month are slightly different to normal days and bring some difficulties for STLF.
In a word, from the MAPE value of each month in Jinan throughout the year, the PMI-ANN model performs better than other counterparts. This is because other models that consider single or a few features have larger errors in more months, while PMI-ANN only has higher error values in February. For instance, the simulation errors of D-ANN and DW-ANN are relatively high in most months of 2018. Because they consider few factors, they cannot fully capture the law of load changes. But it must be admitted that the DT-ANN model has a very low and stable simulation error value every month, second only to the PMI-ANN model. This further illustrates the important influence of temperature on electricity consumption in Jinan.
V. DISCUSSION
According to these simulation results presented in section 4, we can conclude that temperature are the most important factors for power load variation in Jinan which is mainly due to topographic and climatic factors. Jinan is surrounded by mountains on three sides, north by Yellow River, among them, the influence of mount Taishan mountain forms the foehn effect. The south wind is easy to form the sinking foehn wind, and the cold air from the north is also easy to enter. In addition to the narrow pipe effect of the terrain, the cold and hot air are not easy to dispersed. Consequently, Jinan city is a typical southerly heat island and a cold wave island. On southerly nights, Jinan's high night temperature is the most obvious in Shandong province, but when the cold air comes from the north, it will not only have no resistance but also form a accumulation, leading to colder. Furthermore, the amplitude of winter and spring air temperature in Jinan is rare in the whole country. In view of the above specific regional characteristics and experimental results, our future research work will further concentrate on the study of the huge influence of temperature to improve the prediction accuracy of a given month.
The second major factor affecting the load is the period of days, which is verified by the above experimental results. In summary, the ANN forecasting model with selected features by filter method based on PMI significantly receive the relatively high forecasting accuracy in terms of MAPE. At the same time, the forecasting performance of these model with single factors are also not as good as PMI-ANN model. This phenomenon is mainly due to few input features make model unable to accurately simulate the complex relationship between inputs and output.
Although the proposed method has only been trained and tested on the load data measured from 2016 to 2018 in Jinan, it is also applicable to the future load demand of Jinan and other areas. This is because the features selected through the feature selection are the most suitable for Jinan, and climate, topography, and economic conditions in Jinnan will not change dramatically in a short time. Therefore, the proposed approach in this paper can be used for future load demand forecasting in Jinan. For other areas, reselection of candidate features is necessary when the simulation results obtained by using the candidates extracted in this paper are not ideal. The characteristics of load changes in each region are different, but excellent results should be obtained based on the ideas in this paper. After work, we applied the proposed approach to other cities of Shandong province to verified the assumption.
VI. CONCLUSION
To simplify the learning process of forecasting models to reduce running time and better simulate of the nonlinear relationship between load and relevant factors to improve prediction accuracy, feature selection is an important stage in STLF. In this study, we newly proposed a multifactorial framework which composed of data analysis, PMI-based filter method and ANN to address the problem. At the same time, we implement a graphical tool for easy and accurate computation of day-ahead system power load forecast with MATLAB App Designer. The performance of the proposed approach is tested on data in Jinan, and the following main conclusions are drawn from the simulation results: (1) Through detailed data analysis such as power spectrum analysis, the main periodic movement of the load is found, and candidate features with good performance are extracted.
(2) The PMI-based filter which can easy to implement and fast in calculation is used for feature selection, and the most classic BP neural network is adopted to enable the simulation process to achieve faster calculation speed while maintaining accuracy.
(3) Five comparative experiments with different features are designed and implemented, and the results show that the selected features using the proposal are better than the results using single or a few of features.
(4) As analyzed by the above experimental data, it can be seen from Figure 9 that the periodic characteristics of the day have a better effect on STLF. It can also be clearly seen from Figure 11 that temperature has a greater impact on changes in load demand, so for Jinan, temperature is a factor that must be considered in load forecasting. At the same time, it further illustrates that the proposed feature selection method can accurately extract influencing factors.
The MF proposed in this paper can be used not only for electricity load demand forecasting, but also for electricity price forecasting, image recognition and so on. | 8,874.6 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
Schatten classes of integration operators on Dirichlet spaces
We address the question of describing the membership to Schatten-Von Neumann ideals $\mathcal{S}_ p$ of integration operators $(T_ g f)(z)=\int_{0}^{z}f(\zeta)\,g'(\zeta)\,d\zeta$ acting on Dirichlet type spaces. We also study this problem for multiplication, Hankel and Toeplitz operators. In particular, we provide an extension of Luecking's result on Toeplitz operators.
Introduction and main results
Let where dA(z) = 1 π dx dy is the normalized area measure on D. For α ≥ 0, the weighted Dirichlet-type space D α consists of those functions f ∈ H(D) for which Note that the space D 0 is just the classical Dirichlet space and, as usual, will be simply denoted by D. The spaces D α are reproducing kernel Hilbert spaces: for each z ∈ D, there are functions K α z ∈ D α for which the reproducing formula f (z) = f, K α z Dα holds, where the inner product in D α is given by For 0 < p < ∞, we shall also write A p α for the weighted Bergman space of those g ∈ H(D) such that Here we put our attention on the study of the integration operator T g and the multiplication operator M g defined by where g is an analytic function on D. The bilinear operator (f, g) → f g ′ was introduced by A. Calderón in harmonic analysis in the 60's for his research on commutators of singular integral operators [8] (see also [25, p.1136]). After that, it and different variations going by the name of "paraproducts", have been extensively studied, becoming fundamental tools in harmonic analysis. Pommerenke was probably one of the first authors of the complex function theory community to consider the operator T g [17]. After the pioneering works of Aleman and Siskakis [4,5], the study of the operator T g on several spaces of analytic functions has attracted a lot of attention in recent years (see [2,3,14,16,22,23]).
Our main goal is to study the membership in the Schatten-Von Neumann ideals S p of the integration operator T g : D α → D α . If α > 1, D α is nothing else but A 2 α−2 and D 1 = H 2 , the classical Hardy space, so for p > 1, then T g ∈ S p (D α ) if and only if g belongs to the Besov space B p , and if 0 < p ≤ 1, then T g ∈ S p (D α ) if and only if g is constant (see [4,5]). We recall that, for p > 1, the Besov space B p is the space of all analytic functions g in D such that D |g ′ (z)| p (1 − |z| 2 ) p dλ(z) < ∞, where dλ(z) = dA(z) (1−|z| 2 ) 2 is the hyperbolic measure on D. The following result is implicit in the literature (see [27]) and can be proved by using the theory of Toeplitz operators (see Section 5).
Theorem A. Let g ∈ H(D). We have the following: (a) Let 0 < α < 1 and p > 1 with p(1 − α) < 2. Then T g ∈ S p (D α ) if and only if g belongs to B p . (b) If 0 < p ≤ 1 and 0 < α < 1, then T g ∈ S p (D α ) if and only if g is constant.
However for 0 < α < 1 and p(1 − α) ≥ 2, to the best of our knowledge, it is an open problem founding a description of those g ∈ H(D) such that T g ∈ S p (D α ). This motivation leads us to introduce for 0 ≤ α < ∞ and 1 < p < ∞, the space X p α which consists of those g ∈ H(D) such that (1.1) (1−|w| 2 ) p−2 dA(w) < ∞.
The following result gives a description of the membership in S p (D α ) in the range p > 1 and p(1 − α) < 4. Theorem 1. Let 0 < α < 1, g ∈ H(D) and p > 1 with p(1 − α) < 4. Then T g ∈ S p (D α ) if and only if g belongs to X p α . Now we are going to deal with the case of the classical Dirichlet space D. The situation here it seems to be more difficult. First of all, it is easy (and well known) to describe when the operator T g belongs to the Hilbert-Schmidt class S 2 (D). Indeed, for any orthonormal basis {e n } of the Dirichlet space, one has (see Section 2) (1.2) Therefore, the integration operator T g belongs to S 2 (D) if and only if the last integral in the previous equation is finite. The class of functions g ∈ H(D) satisfying this condition shall be denoted by DL.
If 1 < p < 2 Theorem A suggests that the membership in S p (D) of the operator T g could be described by those g being in the Besov space B p . However, since for p < 2 any operator on S p must be Hilbert-Schmidt, clearly the condition g ∈ DL is necessary for T g being in S p (D), and an easy calculation shows that the function g(z) = log log e 1−z belongs to B p for all p > 1 but g is not in DL. Thus, the condition g ∈ B p is not sufficient to assert that T g is in S p (D).
On the other hand, as in the weighted case, there are no trace class integration operators in the Dirichlet space unless g is constant.
Theorem 2. Let 0 < p ≤ 1 and g ∈ H(D). Then T g ∈ S p (D) if and only if g is constant.
For the case 1 < p < 2 we have a necessary condition and a different sufficient condition. We will see that they are sharp in a certain sense. Before that, for p > 1 and γ > 0, we consider the space B p,log γ , that consists of those functions g analytic on D such that . When one takes the monomials as the symbols, it turns out that the correct behavior of T g Sp is given by B p or X p 0 , while if one takes as a symbol to be functions of the type g a (z) = (1 −āz) −γ , the correct behavior is given by the B p,log p/2 condition (see Lemmas 4.1 and 4.2).
The case p > 2 seems to be a mystery. Let D p β denote the space of those functions f with f ′ ∈ A p β . For p > 2, the inclusion D p β ⊂ D holds if and only if β < (p − 2)/2; and D ⊂ D p β if and only if β ≥ p − 2 (see [28, p.94]). Thus, if one is looking for conditions on the integrability of g ′ , it can not be expected some necessary condition much better than B p = D p p−2 , and a sufficient condition must be stronger than g being in D p p−2 2 . We will discuss a little bit this case in Section 4.
We close this section saying that from now on the paper is organized as follows. In Section 2 we introduce several preliminary general results related on Schatten classes of operators on Dirichlet spaces. Section 3 is devoted to the proof of Theorem 1. There it will be proved directly (see Proposition 3.1 (iv)) the identity which together with Theorem 1 gives a proof of Theorem A not relying in the theory of Toeplitz operators. It is worth mentioning that the Besov space B p is rich of several characterizations (the identity (1.3) gives a new one), each of them being the appropriate tool to use in different situations (see [1], [7], or [29] for example). In Section 4 we prove Theorem 2 and Theorem 3. Also, by using some testing classes of functions, we show that those results are sharp in a certain sense. Finally, Section 5 is devoted to study the relationship of the integration operator T g with other classical operators acting on weighted Dirichlet spaces, such as Toeplitz operators, multiplication operators or big and small Hankel operators. A similar connection also happens in other contexts [18]. Indeed, the same techniques used in the proof of Theorem 1 work to demonstrate an extension for positive Borel measures of the helpful result of Luecking on Toeplitz operators [10, p. 347]). Throughout the paper, the letter C will denote a positive absolute constant whose value may change at different occurrences, and we write A ≍ B when the two quantities A and B are comparable.
Preliminary results
Let H and K be separable Hilbert spaces. Given 0 < p < ∞, let S p (H, K) denote the Schatten p-class of operators from H to K. If H = K we simply shall write S p (H). The class S p (H, K) consists of those compact operators T from H to K with its sequence of singular numbers λ n belonging to ℓ p , the p-summable sequence space. We recall that the singular numbers of a compact operator T are the square root of the eigenvalues of the positive operator T * T , where T * denotes the Hilbert adjoint of T . We remind the reader that T ∈ S p (H) if and only if T * T ∈ S p/2 (H). Also, the compact operator T admits a decomposition of the form T = n λ n ·, e n H σ n , where {λ n } are the singular numbers of T , {e n } is an orthonormal set in H, and {σ n } is an orthonormal set in K.
For p ≥ 1, the class S p (H, K) is a Banach space equipped with the norm , while for 0 < p < 1 one has the inequality S + T p Sp ≤ S p Sp + T p Sp . We refer to [21] or [30, Chapter 1] for a brief account on the theory of Schatten p-classes.
We shall write H for a Hilbert space of analytic functions in D with reproducing kernels K z . Given an operator T on H, usually the reproducing kernel functions carry a large amount of information about relevant properties of T , such as boundedness, compactness, membership in Schatten p-classes, etc. It is known that if {e n } is an orthonormal basis of a Hilbert space H of analytic functions in D with reproducing kernel K z , then for all z and ζ in D, see e.g. [30,Theorem 4.19]. We also introduce J z , the derivative of K z respect to z, that is, It follows that for any orthonormal set {e n } of H, and equality in (2.3) holds if {e n } is an orthonormal basis of H. We shall write k z and j z for the normalizations of these functions. In order to avoid some confusions when dealing with reproducing kernels of either D α or A 2 α , we use the notation B α z for the reproducing kernel of the weighted Bergman space A 2 α at the point z, and let b α z = B α z B α z A 2 α be its normalization. It is well known (see [30,Corollary 4.20]) that The reproducing kernel function for the Dirichlet type space D α is denoted by K α z , and k α z denotes the corresponding normalized reproducing kernel. Since f ∈ D α if and only if f ′ ∈ A 2 α , using the reproducing formula for the Bergman space A 2 α (see [30,Proposition 4.23]), it can be deduced the following expression of the reproducing kernel of D α (see [7] or [27]): In particular, for α = 0, Also, it is easy to see that The next two results are certainly well known to the experts (see [9] or [24] for similar results), but we find convenient for the reader to give a proof here.
(ii) For 0 < p ≤ 2, Proof. Since the operator T is compact, it admits the decomposition where {λ n } are the singular values of T , {e n } is an orthonormal set in A 2 α , and {f n } is an orthonormal set in H. Then If 0 < p ≤ 2, a similar argument, using Hölder's inequality with exponent 2/p ≥ 1, (2.3) and (2.4), gives The corresponding analogue of Proposition 2.1 for the Dirichlet type spaces D α uses the functions j α Proof. Since T is compact, it admits the decomposition where {λ n } are the singular values of T , {e n } is an orthonormal set in D α , and {f n } is an orthonormal set in H. It follows from (2.5) that J α z (0) = 0, then using (2.4), If p ≥ 2, using the identity (2.8), Hölder's inequality, (2.3) and (2.7) If 0 < p ≤ 2, since e n Dα = 1, and dA α (z) = (1 + α) J α z −2 Dα dλ(z) due to (2.7), then For the first term (I), observe that |λ n | ≤ T , and therefore For the second term (II), due to Hölder's inequality, (2.3) and the identity (2.8) Putting the estimates obtained for (I) and (II) in (2.9) we obtain part (ii). This completes the proof.
The following result will also be needed.
Proof. Let {e n } be any orthonormal basis of D α . From (2.2) and (2.5) we have and, since α ≥ 0, we obtain which gives the result for p = 1. If 1 < p < 2, using Hölder's inequality where the last inequality follows from (2.3) and (2.7). From here one obtains the corresponding inequality. The proof is complete.
We shall also use several times the following integral estimate (see [30]) that has become indispensable in this area of analysis.
The useful inequality which appears below is from [13], and can be thought as a generalized version of the previous one.
Lemma C. Let s > −1, r, t > 0, and r + t − s > 2. If t < s + 2 < r then, for a, z ∈ D, we have For z ∈ D and r > 0, let denote the hyperbolic disk with center z and radius r. Here β(z, w) is the Bergman or hyperbolic metric on D.
We also need the concept of an r-lattice in the Bergman metric. Let r > 0. A sequence {a k } of points in D is called an r-lattice, if the unit disk is covered by the Bergman metric disks {D k := D(a k , r)}, and β(a i , a j ) ≥ r/2 for all i and j with i = j. If {a k } is an r-lattice in D, then it also has the following property: for any R > 0 there exists a positive integer N (depending on r and R) such that every point in D belongs to at most N sets in {D(a k , R)}. There are elementary constructions of r-lattices in D. See [30,Chapter 4] for example.
Before embarking on the proof of Theorem 1, some preliminary results of interest on their own must be proved.
3.1.
A new class of spaces. In this subsection, we display several nesting properties of X p α and B p spaces. We offer a proof of (1.3), which gives under those restrictions an equivalent B p -norm. It is worth noticing that equivalent and useful B p -norms (see [1] and [7] for example) have been previously introduced for the study of operators on different spaces of analytic functions on D. Also, our next result proves that Proof. For a ∈ D fixed, let D(a) := z : |z − a| < 1−|a| 2 . (i) If g ∈ X p α , then the subharmonicity of |g ′ | 2 together with the fact that |1 −wz| ≍ (1 − |w| 2 ) for z ∈ D(w) implies that g ∈ B p . Also, since This shows that X p α ⊂ D α proving (i).
This gives and it follows easily that ||g|| q X q α ≤ C||g|| p X p α for q > p. (iii) follows from the inequality sup z∈D The inclusion X p α ⊂ B p follows from (i). Conversely, suppose that g ∈ B p . Assume first that p > 2. Since pα > p−2, we can choose ε > 0 with pα−(1+ε)(p−2) > 0. Then, using Hölder's inequality and Lemma B, we obtain Note that the choice of ε gives pα > β, and therefore we can use Lemma B again in order to obtain Now, passing the sum outside the integral and using Lemma B we get where the last step follows from Theorem 0 of [5] (see also [29]). This completes the proof.
3.2.
Proof of Theorem 1. The sufficiency for the case 1 < p ≤ 2, and the necessity for 2 ≤ p < ∞ is a byproduct of the following result, which also gives some information on the case p(1 − α) > 4.
, the result follows directly from Proposition 2.2.
The necessity for 1 < p < 2 follows from the next Proposition and part (iv) of Proposition 3.1.
Proof. Let 1 ≤ p < 2, and assume that T g ∈ S p (D α ). Then the positive operator T * g T g belongs to S p/2 (D α ). Without loss of generality we may assume that g ′ = 0. Suppose is the canonical decomposition of T * g T g . Then not only is {e n } an orthonormal set, it is also an orthonormal basis. Indeed, if there is an unit vector e ∈ D α such that e ⊥ e n for all n ≥ 1, then D |g ′ (z)| 2 |e(z)| 2 dA α (z) = T g e 2 Dα = T * g T g e, e Dα = 0 because T * g T g is a linear combination of the vectors e n . This would give g ′ ≡ 0.
Since {e n } is an orthonormal basis of D α , then by Lemma 2.3 which finishes the proof of (i). Furthermore, if T g ∈ S 1 (D α ), then (3.2) says that which implies that g is constant. This completes the proof.
The remaining part of the proof is more involved. It will be splitted in two cases.
Sufficiency. Case 2 < p ≤ 4. Let {e n } be any orthonormal set in D α . Then Since g ∈ X p α ⊂ D α by Lemma 3.1 and |e n (0)| ≤ 1, we clearly have In order to deal with the term I 2 , note first that e 2 n ∈ D 1+2α because for any f ∈ D α ,
Dα
(1 − |z|) α , z ∈ D. So from the reproducing formula for D 1+2α we deduce Therefore, if we use the notation Fubini's theorem and Hölder's inequality yields . Then, if p = 4, it follows from (2.3) and the fact that K α w 2 α . Now, if 2 < p < 4, notice that Hölder's inequality with exponent 4/p > 1 and (2.3) yield Dα . This together with the fact that for α > 0 we have K α w 2 Since g ∈ X p α combining the estimates for I 2 and I 1 we obtain that n T g e n p Dα ≤ C < ∞.
Sufficiency. Case 4 < p < ∞ and p(1 − α) < 4 . Proceeding as before we get α , and therefore we can assume that e n (0) = 0. Note that for β ≥ α we have This follows from the reproducing formula for D β and the fact that D α ⊂ D β if α ≤ β. Since pα > p − 4, we can take ε > 0 so that p .
The open case.
In relation with the open case p(1 − α) ≥ 4, we provide a result which can be proved following the lines of the proof of Theorem 1 (case p > 4), and therefore the proof will be omitted.
Obviously, X p α−ε X p α if (1 − α)p ≥ 2 (see Lemma 4.1 below), so Proposition 3.4 gives a sufficient but not necessary condition for T g ∈ S p (D α ), (1 − α)p ≥ 2. However, if α > 0 and 1 < p < ∞, those techniques which will be developed in the proof of Lemma 4.2, together with Lemma C, imply that for any β > 0, In particular, the previous result gives the right growth for this family of functions. Proof of Theorem 2. Since S p (D) ⊂ S 1 (D) for 0 < p ≤ 1, the result follows from part (ii) of Proposition 3.3.
Proof of Theorem 3. Part (a) follows from part (i) of Proposition 3.3, and part (c) is deduced in Proposition 3.2.
In order to prove part (b), assume that 1 < p < 2. Then, for all orthonormal sets {e n } of D, we have Thus, by [30, Theorem 1.27], we deduce that T g ∈ S p (D) with T g Sp ≤ C g Bp log p/2 .
4.2.
Testing functions for Schatten classes. Our next goal consists of proving that Theorem 3 gives the correct behavior of T g Sp , 1 < p < 2, at least for some families of functions. For the beginning, we deal with monomials.
Lemma 4.1. Asumme that 0 ≤ α < 1 and 1 < p < ∞. Let g j (z) = z j , j = 1, 2, 3 . . . . Then Proof. We shall use the inner product in D α given by for f (z) = ∞ n=0 a n z n , and g(z) = ∞ n=0 b n z n . We note that , n ∈ N, we have that {σ n } ∞ n=0 is an orthonomal basis of D α , and furthermore ∞ n=j a n−j n z n = ∞ n=j a n−j (n + 1) That is, the singular values of the integration operator T g j are Consequently, On the other hand, At this point, we use [12, Theorem 1] to obtain which together with (4.4) gives the first equivalence in (4.1). The second equivalence in (4.1) follows from an straightforward calculation according to those values of p and α.
Now we prove (4.2),
where in the last step we have used that ω(r) = (1 − r) p−2 log e 1−r p/2 is an admissible weight with distortion function equivalent to (1 − r) (see [15, p. 11]). Now, bearing in mind the properties of the Beta function, ≍ j (log(j + 1)) p/2 , so we get (4.2). The equivalence (4.3) can be proved analogously. This finishes the proof.
Lemma 4.2. Assume that p > 1 and γ > 0. Then , it follows that Therefore, joining this and Lemma B, On the other hand, taking 0 < ε < min(1, 2(p − 1)/p), and bearing in mind Lemma C, So, an application of Lemma B gives In order to prove (4.7), we first estimate the B p,log p/2 -norm of the functions g a (z) = (1 −āz) −γ . Take a ∈ D with |a| ≥ 1/2. Moreover, which together with (4.8) and (4.9) gives Furthermore, if 2 ≤ p < ∞, using again (4.10) and Proposition 4.3 below, and this completes the proof of (b).
Bearing in mind that (X p α , || · || X p α ) is a Banach space for p > 1, the closed graph theorem and Lemma 4.1 and Lemma 4.2, we deduce that X p 0 B p and is different from B p,log p/2 . In particular, Proposition 3.1 (iv) does not remain true for α = 0 and 1 < p < 2. (i) If T g ∈ S p (D) then g ∈ B p,log p/2 . (ii) If T g ∈ S p (D) then g ∈ X p 0 .
Toeplitz operators.
We recall that given a finite positive Borel measure µ on D, the Toeplitz operator Q µ on D α , α > 0 is defined by Toeplitz operators have been a key tool for studying the membership in S p of many classes of operators, such as composition operators (see [11], [10,Section 7] and [30,Chapter 11]) or integration operators (see [4,5] and [16,Chapter 6]). Indeed, the integration operator T g and the Toeplitz operator Q µ on D α are related via the identity T * g T g = Q µg , where µ g is the measure defined by dµ g (z) = |g ′ (z)| 2 dA α (z), and one can obtain a proof of Theorem A using the characterization of Schatten class Toeplitz operators obtained by D. Luecking (see (5.1) below). So, it is natural to expect that the methods used to study the membership of T g in the Schatten p-class of D α are going to work also for the Toeplitz operator Q µ on D α for a general measure µ. Before doing that, we recall Luecking's result [10] describing the membership in S p (D α ) of the Toeplitz operator Q µ for all p > 0 with p(1 − α) < 1. He shows that, for the range of p considered above, Q µ ∈ S p (D α ) if and only if, for any r-lattice {a j } with associated hyperbolic disks {D j } Given a finite positive Borel measure on D, for any −1 < α < ∞ and 0 < p < ∞ we define Here we are able to obtain a full description of the measures µ for which the Toeplitz operator Q µ belongs to S p (D α ) on the extended range of all p > 0 with p(1 − α) < 2 and 1 < p(2 + α). We remark here that, as α > 0, a complete description of the Hilbert-Schmidt Toeplitz operators on D α is obtained.
Proof. Consider the inclusion operator I µ : D α → L 2 (D, µ). It is easy to check that Q µ = I * µ I µ , and thus Q µ ∈ S p (D α ) if and only if I µ belongs to S 2p . Now, the necessity of X 2p α (µ) < ∞ for p ≥ 1 and the sufficiency for p ≤ 1 follow from Proposition 2.2. Also, by repeating the proof of the sufficiency in Theorem 1 replacing the measure |g ′ (z)| 2 dA α (z) in that proof by the measure dµ we obtain n I µ e n 2p L 2 (D,µ) ≤ C < ∞ for all orthonormal sets {e n } of D α provided p > 1 and p(1 − α) < 2. This proves the sufficiency of X 2p α (µ) < ∞ in that range. Finally, it remains to show the necessity in the case 1/(2 + α) < p < 1. Let {a j } be an r-lattice with associated hyperbolic disks {D j }. Using that |1 −wz| ≍ |1 −ā j z| for w ∈ D j and Lemma B, we deduce Thus, by Luecking's condition (5.1), if Q µ ∈ S p (D α ) then X 2p α (µ) < ∞ completing the proof of the Theorem.
We conclude this subsection mentioning that in [19] one can find a description of the membership of the Toeplitz operator Q µ in S 2k (D α ) for positive integers k in terms of some iterated integrals.
5.2.
Big and small Hankel operators. As in [26] and [20], for α ≥ 0, we consider the Sobolev space L 2 α consisting of those differentiable functions u : D → C for which the norm is finite. It is clear that D α is a closed subspace of L 2 α . Let P α be the orthogonal projection from L 2 α onto D α . The big Hankel operator H α g : D α → L 2 α and the small Hankel operator h α g : The relation between the big Hankel operator and the multiplication operator M g ′ is clear and well understood. Indeed, in [26, Corollary 1] Z. Wu shows that M g ′ : D α → A 2 α is bounded, compact, or belongs to S p with 1 < p < ∞, if and only if the same is true for the big Hankel operator H α g : D α → L 2 α . However, although Mḡ′ is related with the the small Hankel operator (see (5.4) belongs to S p if and only if h 0 g : D → L 2 0 belongs to S p (see [26,Theorem 6]. Note that, by the previous observations, we may replace H 0 g by M g ′ or T g ). The main aim of this section consists of extending Wu's result on Schatten p-classes for the small Hankel operator to all D α and to all p with 1 < p < ∞. Before that, we recall that and has the property (see [20, p.105]) that Proof. Firstly, we recall that if T g or h α g is bounded, then g ∈ D α . It is enough to consider the relationship between Mḡ′ and h α g . For this, we look at the difference of Mḡ′ and ∂ ∂w h α g . For f ∈ D α , a straightforward calculation using that g ∈ D α and (5.3) yields For 1 < p < ∞, if T g ∈ S p (D α ) or h α g ∈ S p (D α , L 2 α ) then g ∈ B p (see Propositions 3.1, 3.2, 3.3, Theorem 3 and [26, Theorem 1]), and therefore the difference considered above, as an operator acting from D α into L 2 (D, dA α ), belongs to S p , by Proposition 5.3 (which we are going to prove below). This completes the proof.
For u ∈ L 2 (D, dA α ), consider the operator For the proof of that proposition, we need the following lemma.
Lemma 5.4. Let σ > −1, and 2 + σ < b ≤ 4 + 2σ. Then for each a ∈ D and any f ∈ H(D) we have Proof. Let ϕ a (z) = a−z 1−āz , and consider the function f a = (f • ϕ a ). After the change of variables z = ϕ a (ζ), and an application of Lemma 2.1 of [7] we Finally, the change of variables ζ = ϕ a (z) gives Proof of Proposition 5.3. Firstly we deal with the case p ≥ 2. Note that, for f ∈ H ∞ (the algebra of all bounded analytic functions on D, a dense subset of D α ) and u analytic, one has ∆ u f = uf − P α (uf ), where P α denotes the Bergman projection from L 2 (D, dA α ) to A 2 α . Therefore, ∆ u f is the solution of the equation ∂v = uf ′ with minimal L 2 (D, dA α ) norm. Now, it is well known that the solution of ∂v = uf ′ given by Indeed, the estimate in question follows from Cauchy-Schwarz inequality and the fact that, for c > 0 and t > −1, the integral D |z−w| |1−wz| 1+t+c is comparable to (1 − |z| 2 ) −c (this is just a variant of Lemma B). Taking all of this into account, we obtain that From this inequality, it follows easily that the operator ∆ u is bounded (or compact) if sup z∈D (1−|z|)|u(z)| < ∞ (or if lim |z|→1 − (1−|z|)|u(z)| = 0), and it is clear that these conditions are implied by the fact that u ∈ A p p−2 . Now, let {e n } be any orthonormal set in D α . Therefore, using (5.5), Hölder's inequality, (2.3) and (2.7), we obtain A different proof for the case p = 2 (that can be adapted to the case p > 2) can be given as follows. Let {e n } be any orthonormal basis of D α . Take 0 < ε < 1. Then, Lemma 5.4 yields Therefore, using (2.3) and Lemma B, we get n ∆ u e n 2 = n D |∆ u e n (w)| 2 dA α (w) For 1 < p < 2, one has A p p−2 ⊂ A 2 . Thus, by the case we have just proved, the operator ∆ u is Hilbert-Schmidt and, in particular, compact. By Proposition 2.2, a sufficient condition for ∆ u to be in the class S p is Now, take 0 < ε < 1 with α − ε > −1 and p − εp > 1. Proceeding as in (5.6), and then using Lemma C we obtain This, together with Lemma C, gives Thus, (1 − |z| 2 ) p−2−εp dA(z).
Multiplication operators.
It is well known that the multiplication operator M g ′ : D α → A 2 α is bounded or compact if and only if M g ′′ : D α → A 2 2+α is bounded or compact. Thus, a natural question arises here: It is true that M g ′ : D α → A 2 α is in the Schatten class S p if and only if M g ′′ : D α → A 2 2+α belongs to S p ? We are going to see that this happens when p > 1, but the result is false for p = 1. Let us consider the spaceṡ A 2 α = {f ∈ A 2 α : f (0) = 0} andḊ α = {f ∈ D α : f (0) = 0}.
Theorem 5.5. Let α ≥ 0, 1 < p < ∞ and g ∈ H(D). The following are equivalent: (a) M g ′ :Ḋ α →Ȧ 2 α is in S p ; (b) M g ′′ :Ḋ α → A 2 2+α is in S p . Taking into account Theorems A and 2, the next result shows that it is no longer true that M g ′ being in the trace class S 1 is equivalent to M g ′′ being in the trace class. We recall that g ∈ B 1 if g ∈ H(D) and D |g ′′ (z)| dA(z) < ∞. Moreover, there is a function g ∈ H(D) with M g ′′ ∈ S 1 (D, A 2 2 ) such that D |g ′′ (z)| ϕ(z) dA(z) = ∞ for any function ϕ(r) increasing continuously to ∞ on (0, 1).
One should compare Theorem 5.6 with the results obtained in Theorem 8 of [6], where trace class bilinear Hankel forms on the Dirichlet space are studied.
Proof of Theorem 5.5. We recall that if (a) or (b) holds, then g ∈ B p . We first deal with the case p ≥ 2. Since f A 2 α ≍ f ′ and we deduce , and by what we have just proved (see (5.11) and the comments after that), one gets M g ′′ a S 1 (D,A 2 2 ) ≤ C(1 − |a| 2 ) −γ log e 1 − |a| 2 and putting this into (5.13) gives This shows together with part (b) that given a lacunary series g(z) = k a k z n k , the multiplication operator M g ′′ : D → A 2 2 belongs to S 1 if and only if k n k |a k | < ∞, and it is well known that this condition is equivalent to g being in B 1 [30, p. 100]. Now, given a function ϕ as described in part (d), it is straightforward to select the numbers {a k } and the sequence {n k } so that the summability condition k n k |a k | < ∞ is met, but D |g ′′ | ϕ dA = ∞. | 8,723.4 | 2013-02-11T00:00:00.000 | [
"Mathematics"
] |
Personalization-based deep hybrid E-learning model for online course recommendation system
Deep learning, a subset of artificial intelligence, gives easy way for the analytical and physical tasks to be done automatically. There is a less necessity for human intervention while performing these tasks. Deep hybrid learning is a blended approach to combine machine learning with deep learning. A hybrid deep learning (HDL) model using convolutional neural network (CNN), residual network (ResNet) and long short term memory (LSTM) is proposed for better course selection of the enrolled candidates in an online learning platform. In this work, a hybrid framework that facilitates the analysis and design of a recommendation system for course selection is developed. A student’s schedule for the next course should consist of classes in which the student has shown interest. For universities to schedule classes optimally, they need to know what courses each student wants to take before each course begins. The proposed recommendation system selects the most appropriate course that can encourage students to base their selection on informed decision making. This system will enable learners to obtain the correct choices of courses to be studied.
INTRODUCTION
The recommendation system is an emerging science brought into existence by the conjunction of several established paradigms, education, medicine, artificial intelligence, computer science, and the many derivatives of these fields.By utilizing machine learning algorithms, numerous investigations and research projects for course selection have been conducted.Deep learning algorithms are now being employed to help users select a reliable course for their skill improvement.E-learning is a method of training and education that uses digital resources.Through any of the well-known learning management systems (LMS), such as Moodle, Coursera, NPTEL, etc., users can learn at any time, anywhere.Selecting courses from an LMS is currently a difficult decision for users.As there is an enormous amount of multi-disciplinary courses, sometimes, users are unable to choose the right choices of courses based on their interests and specialization.Initially, a person can start with a low-complexity course so that he or she can get deep knowledge in the fundamental concepts of a technology/domain.If he completes the course, he is further recommended to choose the medium-level complexity course in the same discipline.In continuation to, the final stage is choosing the high-level complexity course.This will enable the learner to get very strong knowledge in the particular field as he has completed all three levels of courses.A hybrid deep learning model is developed by utilizing the architectures of convolutional neural network (CNN), residual network (ResNet) and long short term memory (LSTM) for efficient course selection.Since each problem may be solved using various methods, there must also be an infinite number of available methods.The previous studies most relevant to the selection of courses for learners are highlighted in this section.
There are different resources, such as e-learning log files, academic data from students, and virtual courses, from which educational data can be obtained.It is still a challenging factor in the educational field to predict student marks (Turabieh, 2019).During the COVID period, video conferencing platforms and learning management systems are being adopted and they are used as online learning environments (OLEs).It was suggested that learners' behavior in OLEs can be predicted using effective methods and they can be made available as supportive tools to educators (Dias et al., 2020).
Since the previous decade, the field of recommendation systems has expanded at a faster pace.Due to its enormous importance in this era, much work is being done in this area.Similar suggestions cannot be made to users in e-learning, despite their shared interests.The suggestion heavily depends on the different characteristics of the learners.It has been determined that because each learner differs in terms of prior information, learner history, learning style, and learning objectives, all learners cannot be held to the same standards for recommendations (Chaudhary & Gupta, 2017).
A collaborative recommender system is described that considers both the similarities and the differences in the interests of individual users.An association rule mining method is used as the fundamental approach so that the patterns between the different classes can be found (Al-Badarenah & Alsakran, 2016).
A collaborative recommendation system is designed by utilizing natural language processing and data mining techniques.These techniques help in the conversion of information from a format that is readable by humans to one that is readable by machines.The system allows graduate students to select subjects that are appropriate for their level of expertise (Naren, Banu & Lohavani, 2020).A recommendation system is described for selecting university elective courses.It is based on the degree to which the student's individual course templates are similar to one another.
This study makes use of two well-known algorithms, the alternating least squares and Pearson correlation coefficient, on a dataset consisting of academic records from students attending a university (Bhumichitr et al., 2017).
An interactive course recommendation system, CourseQ enables students to discover courses via the use of an innovative visual interface.It increases transparency of course suggestions as well as the level of user pleasure they provide (Ma et al., 2021).A hybrid recommender system with collaborative filtering and content-based filtering models using information on students and courses is proposed that generates consistent suggestions by making use of explicit and implicit data rather than predetermined association criteria (Alper, Okyay & Nihat, 2021).The content-based filtering algorithms make use of textual data, which are then transformed into feature vectors via the use of natural language processing techniques.Different ensembling strategies are used for prediction.
Based on score prediction, a course recommendation system is developed where the score that each student will obtain on the optional course is reliably predicted by a crossuser-domain collaborative filtering algorithm (Huang et al., 2019).The score distribution of the senior students who are most comparable to one another are used to achieve this.A recommendation system by makes use of a deep neural network is described in such a way that the suggestions are produced by gathering pertinent details as characteristics and assigning weights to them.The frequency of nodes and hidden layers has been updated by the use of feed forwarding and backpropagation information, and the neural network is formed automatically through the use of a great number of modified hidden layers (Anupama & Elayidom, 2022).The automatic construction of convolutional neural network topologies using neuro-evolution has been examined, and a unique method based on the Artificial Bee Colony and the Grey Wolf Optimizer has also been developed (Karthiga, Shanthi & Sountharrajan, 2022).
To maximize the degree of curricular system support, a mathematical model is developed that uses a two-stage improved genetic firefly algorithm.It is based on the distribution of academic work across the semesters and the Washington Accord graduate attribute (Jiang & Xiao, 2020).
Another recommendation algorithm takes into account a student's profile as well as the curricular requirements of the course they are enrolled in.Integer linear programming and graph-based heuristics are used to find a solution for course selection (Morrow, Hurson & Sarvestani, 2020).
A recommender system is intended to help the investors of the stock market find potential possibilities for-profit and to aid in improving their grasp of finding pertinent details from stock price data (Nair et al., 2017).
In an improved collaborative filtering for course selection, the user's implicit behavior is the foundation of this collaborative filtering system.The behavioral data are mined, and then using the collaborative filtering algorithm based on items, an intelligent suggestion is carried out (Zhao & Pan, 2021).
Using learning analytics, data from students' learning activities can be collected and analyzed.It is proposed that a particular computational model can be utilized to identify and give priority to students who were at risk of failing classes or dropping out altogether.The results of predictive modeling can be used to inform later interventional measures (Wong, 2017).Using LSTM network architecture, efficient online learning algorithms were introduced and they make use of the input and output covariance information.Weight matrices were assigned to the input and output covariance matrices and were learned sequentially (Mirza, Kerpicci & Kozat, 2020).A hybrid teaching mode is proposed that introduces a support vector machine to predict future learning performance.Cluster analysis is used to analyze the learner's characteristics, and the predicted results are matched with the hybrid mode for continuing the offline teaching process (Liang & Nie, 2020).
In another study, a deep neural network model, namely, the attention-based bidirectional long short-term memory (BiLSTM) network was examined to predict student performance (grades) from historical data using advanced feature classification and prediction (Yousafzai et al., 2021).A recommender system that helps the appropriate users see the correct ads.It puts this concept into practice using two cooperative algorithms CNN and LSTM (Soundappan et al., 2015).
A hybrid e-learning model was suggested as a way to apply distance learning in Iraqi universities.Seventy-five individuals from the University of Technology in Iraq took part in this study (Alani & Othman, 2014).The CNN and LSTM algorithms are outperformed in large volume of data with less computational cost (Anitha & Priya, 2022).Distinct trade-offs between large-scale and small-scale learning systems using various gradient optimization algorithms were discussed in a article.Gradient descent, stochastic gradient descent, and second order stochastic gradient descent were analyzed and it was concluded that the stochastic algorithms yield the best generalization performance (Bottou & Bousquet, 2007).With logistic outputs and MSE training for the OCR challenge, LSTM networks were unable to converge to low error rate solutions; softmax training produced the lowest error rates overall.In every experiment, standard LSTM networks without peephole connections had the best performance.The performance of the LSTM is gradually dependent on learning rates.Least square training falls short of softmax training (Breuel, 2015).
A Prediction Model for Course Selection (PMCS) system was developed using a Random Forest algorithm (Subha & Priya, 2023).As the number of decision trees in the Random Forest classifier is increased, the performance of the PMCS system improves.It is observed that the Random Forest classifier with 75 trees performs well when compared to other decision trees.Although RMSProp and ADAM are still very well-liked neural net training techniques, it is still unknown how well they converge theoretically.We demonstrate the criticality of these adaptive gradient algorithms for smooth non-convex objectives and offer limitations on the running time (De, Mukherjee & Ullah, 2018).The restricted fundamental enrolling features of Taiwan's current course selection systems prevent them from predicting performance or advising students on how to arrange their courses depending on their learning circumstances.Therefore, students select courses with a lack of awareness about it and they could not prepare for a study schedule.The learning curve is defined by analyzing the required factors and students with different background was found out initially.Then, a recommendation system is developed based on the students' grade and their learning curve (Wu & Wu, 2020).A virtual and intelligent agentbased recommendation is proposed which needs user profiles and preferences.User rating results are improved by applying machine learning techniques (Shahbazi & Byun, 2022).Existing research works did not concentrate on the hybrid model which combines the features of CNN, ResNet, and LSTM.They have focused on individual models only.
Though there are some considerable merits in a few individual models, the hybrid model efficiently outperforms those models.
In this work, the proposed recommendation system develops a hybrid approach utilizing three deep learning architectures: CNN, ResNet, and LSTM.The proposed HDL approach selects the best optimal course for the learners.The remaining sections are summarized as follows: "Materials and Methods" gives an overview of the proposed recommendation system with a brief discussion of each module.The next section goes deeper into the recommendation system's metrics and evaluates its effectiveness by using a student dataset having 750 records."Conclusion" summarizes the findings and ideas for future work.
MATERIALS AND METHODS
The framework used for recommending courses via the utilization of the hybrid technique is shown in Fig. 1.Considered to be a pattern recognition system, the recommendation system comprised three different deep learning architectures, CNN, ResNet, and LSTM, as base models.The designs of individual architecture for the proposed HDL system are discussed in the following subsections.
Preprocessing
When training a neural network, such as a CNN, ResNet, and LSTM, the input data for the classification system must be scaled.The unscaled data slows down the learning and convergence of the network as the range of values is high.They may even prevent the network from effectively learning the problem when it is fitted.Normalization and standardization are two approaches to scaling the input data.
When applying standardization, it assumes that the input data follow a Gaussian distribution, often known as a bell curve, with a mean and standard deviation that behave predictably.It is still possible to standardize data even if this assumption is not realized; however, the results you get may not be accurate.Sometimes it will be difficult to interpret the data due to loss of information.Student datasets used in the study contain different grade levels of the students.Also, the grades may be skewed towards the left, which means that there are more students with higher grades.Here the data is not evenly distributed around the mean and this is one of the major reasons for not following Gaussian distribution.Also, the data must be independent for Gaussian distribution.However the student dataset cannot be independent as there is a way to correlation of grades among the students.Therefore, normalization is employed in this study.
The process of normalization involves rescaling the data from their original range to a new range in which all of the values are contained between 0 and 1.The ability to know the lowest and maximum observable values, or to have an accurate ability to estimate them, is necessary for normalization.Deep neural networks are enhanced in accuracy and performance by stacking additional layers to solve complicated problems.The idea behind adding additional layers is that they will eventually acquire more complex features.The following deep learning models are analyzed and compared to build an efficient recommendation system.
Recommendation system with CNN architecture
CNNs are built from two basic components, which are referred to as the convolutional layers and the pooling layers.Even though they are straightforward, there are many ways to organize these layers in any computer vision system.It is possible to construct incredibly deep convolutional neural networks by making use of common patterns for configuring these layers as well as architectural improvements.Figure 2 shows the proposed system with CNN architecture for the recommendation system.
In Fig. 2, ODC represents one-dimensional convolution, MP represents max pooling with stride (S) of 2, and FC represents fully connected layers.The ODC by a kernel of size px is defined in Eq. ( 1).
where x is the input data and w is the weight of the kernel.In MP, the word "stride" refers to the total number of shifts that occur throughout the vectors.The convolution filter is relocated to two values in the input vector when using a stride value of 2.
The convolution and MP are applied repeatedly as per the design of the recommendation system with CNN architecture in Fig. 2, and the extracted deep features reach the FC layer at last.The proposed design has three FC layers, and the classification takes place at the final FC layer.The FC layer consists of a neural network architecture for prediction.It is motivated by the biological brain system's process of learning, which involves the memorization and recognition of patterns and correlations of data gleaned from prior knowledge and the compilation of information gleaned from past experiences to forecast a certain event.
The neural network architecture has no less than three layers, namely, an input layer, a hidden layer, and an output layer.Layers are built of neurons (nodes) that are coupled to one another.One of the most effective algorithms for making predictions or performing classifications is known as the feed-forward neural network.The difference that exists between the input and the weights is referred to as the error, and it is used to modify the weights to decrease the error that occurs during the prediction process.This helps to establish the most accurate output that can be accurately predicted.The proposed system with CNN architecture uses cross-entropy loss, which is defined in Eq. ( 2).
Error or loss ¼ À where SMPi represents the softmax probability and GTi is the ground truth label.The other parameter settings are presented in Table 1.
The algorithm for the recommendation system based on the CNN model 1.The results after pre-processing are considered as the input data.
5. Send the final values to the fully connected layer with sizes 512 and 3. 6. Apply the softmax function to get the corrected output.
The major advantages of using CNN are reduced overfitting, improved accuracy, and computational efficiency.Additionally, CNNs can learn useful features directly from the input data.However, the drawback of the CNN model is that CNNs require a large amount of labeled data to train effectively, which can be a challenge for tasks where labeled data are scarce or expensive to obtain.To overcome this challenge, the ResNet model has been applied to recommendation systems.Recommendation system with residual network Following the development of CNN, ResNet was established.In a deep neural network, more layers may be added to increase its accuracy and performance, which helps to tackle difficult tasks.The working hypothesis is that each successive layer would gradually learn more features for better performance.Figure 3 shows the proposed system with ResNet architecture for the recommendation system.A specific type of neural network called a residual network (ResNet) skips some layers in between and has a direct link.The core of residual blocks is known as the "skip connection."Next, this term passes through the activation function, f(), and the output H (x) is defined in Eq. ( 3).
Equation ( 4) represents the output with a skip connection.
Using this additional short-cut path for the gradient to flow through, ResNet's skip connections address the issue of vanishing gradients in deep neural networks.In this case, the higher layer will function at least as well as the lower layer, if not better.
Algorithm for the recommendation system based on the ResNet model ResNet does not have more training errors.Additionally, there is an improved accuracy and an increase in efficiency.Much deeper networks can be trained using ResNet.However, some of the pitfalls in using ResNet are the difficulty in initializing the parameters for ResNet and complexity, computational expense, and overfitting.To overcome the complexity, another deep learning model, namely, long short-term memory (LSTM), has been introduced and can be used in recommendation systems.LSTMs can model complex relationships between input and output sequences and are suitable for tasks where the output depends on long-term dependencies in the input data.
Recommendation system with LSTM architecture
LSTM is a recurrent neural network (RNN) built for solving complex problems such as in predicting sequences.An RNN is a looped feed-forward network.Figure 4 shows the proposed system with LSTM architecture.
The LSTM network's basic computational building element is referred to as the memory cell, memory block, or simply cell.The word "neuron" is so often used to refer to the computational unit of neural networks that it is also frequently used to refer to the LSTM memory cell.The weights and gates are the constituent parts of LSTM cells.
A memory cell includes weight parameters for the input, output, and an internal state.The internal state is built throughout the cell's lifetime.The gates are essential components of the memory cell.These functions, like the others, are weighted, and they further influence the flow of the cell's information.To update the state of the internal device, an input gate and a forget gate are used.The output gate is a final limiter on what the cell outputs.The recommendation system with LSTM architecture uses stacked LSTM that makes the model deeper to provide more accurate prediction results.
The representations of the input gate, false gate, and output gate of LSTM are given in Eqs. ( 5)-( 7) respectively.
where i t represents the input gate, f t represents the forget gate, it represents the output gate, Ã represents the sigmoid function, w x denotes the weight for the respective gate(x) neurons, h t-1 denotes the output of the previous LSTM block (at timestamp t-1), tx denotes the input at the current timestamp and b x denote biases for the respective gates(x).LSTMs are designed to handle sequential data, which is well suited for time series prediction.They can handle the vanishing gradient problem that occurs in RNNs on long sequences.There is an improved performance when compared to RNNs and other neural networks on a variety of sequential data tasks.However, the challenge is still the difficulty in initialization and difficulty in interpreting results.
Activation function in the deep learning models
In all the above models, the softmax activation function is used.The Softmax function can be applied to multiclass problems, and the output probability range will be from 0 to 1.The total of all probabilities will sum to 1. Therefore, this function is chosen to be used in all the architectures.The sigmoid activation function can be applied to binary classification methods only, and it is most prone to the vanishing gradient problem.Additionally, the output is not zero-centered For better computation performance and to avoid the vanishing gradient issue, a different activation function called the rectified linear unit (ReLU) can be used in the hidden layer.These are the main reasons for choosing only the softmax function in the individual models and the hybrid model.
If the above models function separately, they do not produce optimal results.Additionally, to overcome the computational complexity problem that occurred in the above three architectures, there is a need to develop a hybrid model that combines the features of CNN, ResNet and LSTM.
Challenges in other deep learning models
There are few challenges in many other deep learning architecture, such as gated recurrent units (GRUs), generative adversarial networks (GANs), and autoencoders (AEs).GRUs are difficult to interpret, as the internal workings of the model are not as transparent as traditional machine learning models.GANs can be sensitive to hyperparameters such as the size of the hidden layers, the type of activation function used, and the optimization algorithm used, which can affect the performance of the model.Autoencoders are designed to reconstruct the input data rather than learn more complex relationships between the inputs and outputs.Additionally, they are designed mainly for unsupervised learning tasks, and they cannot handle noisy data.
Hybrid system
The proposed work is aimed at developing a hybrid model that combines the three deep learning models namely CNN, ResNet, and LSTM.A hybrid model not only provides better robustness and improved accuracy; it also reduces computational complexity.An efficient course selection can be done and a better course recommendation can be given to the enrolled users with the aid of the hybrid system.The final layer of the proposed system uses architectures such as CNN, ResNet and LSTM is the softmax layer where the recommendation takes place.It transforms the scores to a normalized probability distribution, which may then be shown to a user or utilized as input by other systems.It is defined in Eq. ( 8) as where in and m are the output of the n th layer and the number of classes, respectively.To hybridize the given architectures, a score (probability) fusion technique is employed.In the probability fusion module, probabilities from multiple sources namely CNN, ResNet and LSTM are aggregated and given to the hybrid model for recommendation, thereby improving the overall prediction.Figure 5 shows the fusion module for the proposed recommendation system.In Fig. 5, P1, P2, and P3 represent the probability of class 1 to class 3, respectively.The obtained probabilities of each architecture are combined using several approaches to generate a new score, which is then used in the decision module to make the final decision.This method's effectiveness and reliability are directly correlated to the quantity and quality of the input data used in the training phase.In addition, the scores from each architecture do not have to be consistent with one another; hence, the normalization of scores is not needed.This proposed work uses the max rule for the final decision by estimating the mean of the posterior probabilities (μ1, μ2, and μ3) by the maximum value.Commonly used probability fusion techniques are weighted averaging, max/min fusion, softmax fusion, logit fusion, and rank-based fusion.Among these techniques, multiple learning algorithms are used in voting-based ensemble approaches, which strengthen the classification model.Weighted voting-based ensemble methods offer a more flexible and fine-grained manner to anticipate exact output classes when compared to unweighted (majority) voting-based ensemble methods.In this work, the weighted voting method is applied in the probability fusion module to get the mean of the posterior probabilities thereby aiming to get the actual output class.Using the weighted averaging method, the probabilities from each separate model are combined.The average probability of each model is calculated and are taken together to be used in the probability fusion module.The mean value of the posterior probabilities is calculated from the probability values of three individual models namely CNN, ResNet, and LSTM.point, which ensures that we are moving in the opposite direction as the gradient.It is defined by If the step size is not large enough, there will be very little movement in the region being searched, which will cause the process to take a very lengthy period.If the step size is too large, the search may miss the optimal solution as it bounces about the search space RMSProp is an extension of SGD that improves the capability of SGD optimization.It is created to test the hypothesis that different parameters in the search space need different α values.The partial derivative is used to advance along a dimension once a calculated α has been determined for that dimension.For each additional dimension of the search space, this procedure is repeated.The custom step size is defined by where s is the sum of the squared partial derivatives of the input variable that have been found throughout the search.Adam optimization is the combination of RMSProp and adaptive gradient.The former performs well for noisy data, whereas the latter performs well on computer vision and natural language problems.Adam utilizes the average of the second moments of the gradients in addition to the average of the first moment (the mean) when adjusting the learning rates of the parameters (the uncentered variance).It also computes an exponential moving average of the gradient and the squared gradient, with decay rates determined by the parameters β 1 and β 2 where
RESULTS
The true effectiveness of the recommendation system is evaluated using real-time samples in this section.Samples are obtained from the learners of an engineering college in Tamil Nadu with the help of the Moodle learning management system.The starting month and year of the course are given in the course presentation.Assessment type can be in two modes: Quiz or Assignment.For the proposed system to function properly, a sample dataset that includes different parameters such as student ID, course module, durations, number of clicks, gender, etc., must be provided as inputs.In the database, there are 125 male and 125 female learners in each category (low, medium, and high), and a total of 750 records are used.Complexity levels of low, medium, and high are denoted as labels 1,2 and 3 respectively.In a classification system, the "training" phase is used to train the system by making use of labeled data from the "datasets," whereas the "testing" phase is utilized to evaluate the system.The HDL model makes use of k-fold cross-validation to partition the database into k-folds.The system is evaluated k times using the training data in k-1 folds and testing the remaining data.
Then, the overall performance is determined by taking the accuracy of each round and averaging them.The following metrics are used to assess the system's performance: accuracy, precision, recall, and F1-score.To develop a definition of these terms, consideration must be given to the outcomes of the proposed system.Four variables are identified (FP-False Positive, FN-False Negative, TP-True Positive, and TN-True Negative) by counting the number of possible outcomes from the classification task.Then, it is possible to compute the aforementioned performance metrics' namely Accuracy (Acc), Recall (Re), Precision (Pr) and F1 score (F1).The performances of the individual architectures such as CNN, ResNet, and LSTM are analyzed using three optimization techniques such as SGD, RMSProp, and Adam before evaluating the performance of the HDL system.Table 2 shows the CNN-based recommendation system's performances, and its corresponding bar chart for high-, medium-and low-level courses is given in Fig. 6.It is demonstrated that the CNN + Adam system is capable of achieving a maximum accuracy of ~93%, but the CNN + SGD system is only capable of achieving an accuracy of ~84%.For the same dataset, the CNN + RMSProp system provides better performance than the CNN + SGD system with ~88% accuracy.
To obtain more accurate recommendations, residual units are introduced in the CNN architecture.The bar chart for high-, medium-and low-level courses for the ResNet-based recommendation system's performance is given in Fig. 7.The introduction of residual units increases the performance of the recommendation system by ~1% for course selection.The introduction of residual units provides better performance than CNN-based systems because more important data can be reached to the latter parts of the system by the skipping layers.
To enhance the recommendation, the LSTM architecture is introduced, and the performance metrics are calculated and compared with the other two architectures.The bar chart for the LSTM-based recommendation system's performance metrics is given in Fig. 8.It is inferred that the LSTM-based recommendation system gives more promising results than the CNN-and ResNet-based systems.In the LSTM-based system, the Adam optimizer gives the highest average accuracy of 96.87%, which is 2.35% higher than that of the ResNet system and 3.45% higher than that of the CNN-based system with the same optimization.
To further improve the accuracy of recommended course selection, the aforementioned three architectures are hybridized, and their performances are also evaluated using different optimizers.Table 3 shows the HDL-based recommendation system's performance.It is observed that the HDL system recommends the course for the learner with ~99% accuracy while using the Adam optimizer during training of the three networks such as CNN, ResNet, and LSTM.It is also noted that the performance of the optimizers is in the order of SGD < RMSProp < Adam. Figure 9 shows the performance comparisons of different architectures with different optimizers in terms of average accuracy.From the given dataset, two samples are taken for example to prove the experimental results.Also, the courses are categorized into three levels low, medium, and high, based on their complexity.Complexity level is taken into account based on the course content and the users' state of knowledge.A student with student_id 201801057 has enrolled for the online course "Programming in C" in the Moodle LMS.This course is categorized as a low-level complexity course.He has attempted the quiz and online assignments on time and he has completed the course within the stipulated period of six months.As a next level, the proposed model will recommend him for the next-level course (which is categorized as a medium-level course) on the same stream "Object Oriented Programming".If he regularly submits the assignments and quizzes, he can attempt the final exam.If he successfully clears the exam, the student will again be recommended to do the next-level complexity course "Internet Programming".This course will become easy for him to learn as he has already completed the prerequisites of this subject.Our recommendation system will give the right direction for the students to choose the courses that fall in the same domain.On the other part, if an enrolled candidate is unable to complete the course on time, he will be recommended to do the same course again within a short period.This may happen due to the late submission of assignments or forgetting to attend the quiz.In these circumstances, warning messages will be sent to the students to alert them to submit assignments/quizzes on time.This factor will motivate the students to complete the course in the future.
CONCLUSIONS
This model is aimed at designing a framework for learners using deep learning architecture to produce optimal suggestions for course selection.The proposed system, namely, the hybrid deep learning model, may let learners explore all of the available alternative courses Combination rather than imposing selections on their own.Automatic feature extraction and learning from one-dimensional sequence data may be performed extremely well using the CNN.By including a feedback loop that acts as a form of memory, RNN solves the memory-related issues of CNN.Therefore, the model retains a trace of its previous inputs.The LSTM method expands on this concept by including a long-term memory component in addition to the more conventional short-term memory mechanism.Thus, the combination of CNN and LSTM may provide an effective solution for solving real-time problems such as course selection.In terms of the accuracy of the generated suggestions, the proposed recommendation by the HDL model outperforms the baseline models.It is also noticed that the proposed system improves decision effectiveness, satisfaction, and efficiency.The decision effectiveness has been improved by calculating various metrics namely Precision, Recall, and F1-score.Efficiency can be improved by collecting feedback from the users and thereby enhancing user satisfaction while completing the enrolled course.Most importantly, it provides a much-needed foothold for learners from a vast number of available courses.
Table 1
Training parameters for the system with CNN architecture.
S et al. (2023), PeerJ Comput.Sci., DOI 10.7717/peerj-cs.1670 Student dataset consists of the following attributes: S.No., Student-Id, Gender, Region, Age, Course Code, Course Duration, Course Label, Course Presentation, Date registered, Course Start Date, Total credits, Average course feedback, No. of previous attempts, Assessment_id, Assessment type, Date of submission, No. of days viewed, No. of clicks per day, Total no. of clicks, and Total no. of exercises completed.Course Label values can be either 1,2 or 3. Courses are categorized as low, medium, and high, based on their complexity level.Value one denotes low, two denotes medium and three denotes high level course.
Table 2
CNN based recommendation system performance.
Table 3
HDL based recommendation system performance. | 7,817.6 | 2023-11-27T00:00:00.000 | [
"Computer Science",
"Education"
] |
Direct detection and complementary constraints for sub-GeV dark matter
Traditional direct searches for dark matter, looking for nuclear recoils in deep underground detectors, are challenged by an almost complete loss of sensitivity for light dark matter particles. Consequently, there is a significant effort in the community to devise new methods and experiments to overcome these difficulties, constantly pushing the limits of the lowest dark matter mass that can be probed this way. From a model-building perspective, the scattering of sub-GeV dark matter on nucleons essentially must proceed via new light mediator particles, given that collider searches place extremely stringent bounds on contact-type interactions. Here we present an updated compilation of relevant limits for the case of a scalar mediator, including a new estimate of the near-future sensitivity of the NA62 experiment as well as a detailed evaluation of the model-specific limits from Big Bang nucleosynthesis. We also derive updated and more general limits on DM particles upscattered by cosmic rays, applicable to arbitrary energy- and momentum dependences of the scattering cross section. Finally we stress that dark matter self-interactions, when evaluated beyond the common s-wave approximation, place stringent limits independently of the dark matter production mechanism. These are, for the relevant parameter space, generically comparable to those that apply in the commonly studied freeze-out case. We conclude that the combination of existing (or expected) constraints from accelerators and astrophysics, combined with cosmological requirements, puts robust limits on the maximally possible nuclear scattering rate. In most regions of parameter space these are at least competitive with the best projected limits from currently planned direct detection experiments.
Introduction
So far no unambiguous signal for new physics at the electroweak scale has been identified at the Large Hadron Collider (LHC) [1,2], despite seemingly intriguing theoretical arguments that have been brought forward why the appearance of new physics should be expected at these energies (with low-scale supersymmetry being the most popular example, see e.g. [3]). In consequence, while some natural islands remain [4,5], the experimental focus in the search for physics beyond the standard model of particle physics (SM) presently undergoes a substantial broadening in scope, both concerning energy scales and theoretical frameworks for such searches [6]. At the intensity frontier, in particular, there is a plethora of both ongoing and planned activities that aim to explore new physics in the sub-GeV JHEP03(2020)118 range. Prominent examples for the latter include, but are not limited to, planned upgrades to current experiments such as NA62 [7] and NA64 [8], the recently approved LHC add-on FASER [9][10][11][12][13] as well as dedicated new experiments like LDMX [14] and SHiP [15][16][17] that are planned to be run at the new Beam Dump Facility at CERN. Finally there are proposals for LHC based intensity frontier experiments such CODEX-b [18] and MATHUSLA [19][20][21][22].
The existence of dark matter (DM) is one of the main arguments to expect physics beyond the SM. Also in this case theoretical considerations seem to point to the electroweak scale [23], independently of the arguments mentioned above, but direct searches for DM in the form of weakly interacting massive particles (WIMPs) have started to place ever more stringent constraints on this possibility [24,25]. Significant interest, both from the experimental and theoretical perspective, has thus turned to the possibility of DM particle masses below the GeV -TeV range. Conventional direct detection experiments are essentially insensitive to such light particles -except for very large scattering cross sections, where cosmic rays can upscatter DM to relativistic energies [26] -but new methods and concepts are being developed to overcome these difficulties [27][28][29][30][31][32].
Both approaches may obviously be connected in terms of the underlying new physics, an insight which motivated a large body of phenomenological work studying possible complementary approaches to the DM puzzle (see, e.g., refs. [6,[33][34][35][36], and references therein). In particular, the same new light messengers that are being probed at the intensity frontier could mediate interactions between the DM particles [37,38], naturally leading to hidden sector freeze-out [39,40] as well as astrophysically relevant DM self-interactions [41][42][43]. Recent discussions of complementary probes with a particular focus on light dark matter include refs. [44][45][46][47][48][49][50][51][52]. One of the main goals of this article is to further explore this connection. The decisive link that allows to translate limits from searches for DM to those for new particles that directly interact with the SM, and vice versa, is cosmology. It is worth stressing that, for a given model, stringent and robust cosmological bounds can typically be derived that are much less uncertain than general prejudice, or a modelindependent assessment, would suggest. Throughout this work we therefore emphasise the need to consistently treat the non-trivial cosmological aspects appearing in scenarios with light mediators, and base our limits on such a refined treatment. In particular, we evaluate in detail the thermal evolution of the dark sector to compute the DM abundance and updated bounds from Big Bang Nucleosynthesis (BBN) -but also demonstrate that DM self-interactions lead to stringent bounds that cannot be evaded even if DM is not thermally produced via the common freeze-out mechanism. We combine these new results with various updated accelerator constraints and projections, and present them in a form directly usable by experimentalists probing the sub-GeV range.
This article is organised as follows. We start by reviewing the motivation to consider scenarios with light (scalar) mediators, in section 2, and then introduce in more detail the case of Higgs mixing that we will focus our analysis on. In section 3 we present the current situation and near-future prospects of direct detection experiments, and generalise existing calculations for cosmic-ray accelerated DM to derive bounds on scattering cross sections involving light mediators. We then discuss particle physics constraints from various existing and planned experiments, in section 4, before investigating in detail the cosmological JHEP03(2020)118 evolution of the dark sector in section 5. In that section we also derive bounds from BBN and DM self-interactions that apply to the specific scenario considered here, and mention further astrophysical bounds. In section 6 we then combine the various constraints, and compare them to (projected) bounds from direct DM detection experiments. Finally, in section 7 we discuss our results and conclude.
2 Models for light dark matter with portal couplings Light dark sector particles are required to have small couplings to SM states in order to be allowed phenomenologically and therefore naturally correspond to fields that are singlets under the SM gauge interactions. They may then directly couple to the SM via the well known portal interactions [33], i.e. gauge-invariant and renormalisable operators involving SM and dark sector fields. If the DM particle χ is stable and fermionic, as assumed in this work, no direct renormalisable interaction is available and an additional particle X mediating the interactions with the SM is required. 1 In recent years mediator searches at colliders together with complementary constraints from direct detection have therefore received a large amount of interest, both for searches at the LHC [56][57][58][59][60][61][62] and at low energy colliders [63][64][65]. When comparing the sensitivities of collider searches with direct detection experiments it is important to take into account the large difference in energy scale between the centre of mass energy at colliders and the typical momentum transfer in nuclear recoils. In particular the relative sensitivity of direct detection experiments is significantly increased for light mediators, implying that while scenarios with heavy mediators are strongly constrained by collider searches, those constraints are significantly weakened for light mediators. Another appealing feature of light mediators, adding predictivity, is that DM can be produced within the standard thermal freeze-out paradigm: 2 For sufficiently large couplings the dark sector will thermalise in the early universe and the DM relic abundance is set via annihilations into two mediators, χχ → XX, but also via annihilations into SM fermions via an s-channel mediator, χχ → X →f f , if the dark sector is not fully decoupled at the time of freeze-out. Two particularly interesting and often studied options are vector mediators kinetically mixed with the SM hypercharge gauge boson or scalar mediators with Higgs mixing.
Vector mediators
Let us start with a brief discussion of the vector mediator case. In the simplest scenario the field content consists of only a dark matter fermion charged under a dark U(1) X with kinetic mixing (see e.g. [34,39,66]). For light mediators the coupling structure will basically be that of a photon, so that X predominantly decays to charged SM fermions such as electron 1 For scalar DM, on the other hand, there is a direct (Higgs) portal term [53], constituting the most minimal DM model that is phenomenologically viable; for a recent status update see ref. [54]. Another portal term exists for a new heavy neutral lepton mixing with the SM model neutrinos; such a particle (often called 'sterile neutrino') is not stable (decaying e.g. into three SM neutrinos), but can be sufficiently long-lived to constitute DM [55]. We do not consider these options here. 2 It has been noted that for heavy mediators DM overproduction can only be avoided in rather special corners of parameter space [60][61][62].
JHEP03(2020)118
positron pairs. An important observation is that DM annihilations proceed via s-wave for both channels discussed above. If DM was ever in thermal contact with the SM (not necessarily through the kinetic mixing) such that the dark sector temperature is not much smaller than the photon temperature, there are strong constraints from Cosmic Microwave Background (CMB) observations, ruling out DM masses m χ 10 GeV [67]. In fact these bounds extend to significantly higher DM masses for mediators parametrically lighter than DM due to the Sommerfeld enhancement of the annihilation cross section [68].
There are a number of ways to evade this CMB limit, but they do involve some nonminimal component in the DM model. For instance DM may be asymmetric with only a sub-leading symmetric component such that residual annihilations during CMB are sufficiently suppressed [69]. For consistency such a setup will however require the existence of an additional dark sector state to compensate the charge of the DM (reminiscent of electrons and protons). Another possibility would be to introduce a scalar whose vacuum expectation value (vev) generates small Majorana mass terms for the DM fermion, resulting in two dark matter states with slightly different mass, coupled off-diagonally to the vector boson (this is often referred to as inelastic DM) [70]. If the heavier state decays before the time of the CMB, s-wave annihilations χ 1 χ 2 → X →f f are no longer possible and constraints are evaded. A third possibility would be to couple the vector mediator to a light hidden sector state such that the decays of X are invisible [71], in which case the CMB bounds can also be evaded. Finally, if the abundance is set via freeze-in [72] rather than freeze-out, the annihilation cross section may be sufficiently small to be in accord with observations. While all these options are viable and possess an interesting phenomenology, we wish to concentrate on a minimal setup in the current study. As discussed below, a model for light DM which still survives in its simplest form is that of a scalar mediator with Higgs mixing.
Scalar mediators with Higgs mixing
In contrast to the case of a vector mediator, DM annihilations proceed via p-wave for a scalar mediator and the setup is correspondingly much less constrained by residual annihilations during CMB times. If the dark sector was in thermal contact with the SM heat bath at even earlier times, however, dark sector masses are typically still required to be larger than m χ 10 MeV in order not to spoil the agreement between predicted and observed primordial abundances of light nuclei (we will study the relevant limits in detail below).
Let us consider a new real scalar S that mixes with the SM Higgs and further couples to a new Dirac fermion χ that can play the role of the DM particle (see e.g. ref. [45] and references therein), Here m χ is the mass of the DM fermion, H is the Higgs doublet of the SM and V (S, H) is the scalar potential. The terms involving the singlet scalar can be written as
JHEP03(2020)118
with V (S) = ξ s S + 1/2µ 2 s S 2 + 1/3A s S 3 + 1/4λ s S 4 . Without loss of generality the field S can be shifted such that it does not obtain a vev, implying ξ s = A hs v 2 /2 (where the Higgs vev is given by 246.2 GeV). After electroweak symmetry breaking the singlet S mixes with the physical component of H such that the singlet S naturally acquires a coupling to all SM fermions while the Higgs h acquires a coupling to χ, with mixing angle (2.4) The usual Higgs quartic coupling λ h is fixed in the SM via the observed Higgs mass and we are interested in the parameter region m h m S . In our convention where S does not acquire a vev the mixing angle is therefore approximately fixed by A hs . While the mixing angle clearly has a very large impact on most of the experimental observables, it does not fully specify the phenomenology of the scalar sector. For example, the decay width of the SM-like Higgs boson into two light singlets is determined by the S 2 H † H coupling, When we evaluate constraints e.g. from the Higgs signal strength we will assume that λ hs 0 to be conservative. Similarly we do not rely on λ hs for the thermalisation of the SM with the dark sector, yielding conservative limits from BBN. We also assume the trilinear coupling A s to be small, so that the 3 → 2 annihilation rate of singlet scalars is negligible and no phase of 'cannibalism' [73] occurs after freeze-out, again leading to conservative bounds.
For the calculation of DM-nucleus scattering rates we will also need the effective Yukawa coupling between a nucleon and the scalar mediator, Here the constants f q,G correspond to the quark and gluon content of the nucleon. It is well known that the couplings to protons and neutrons are very similar for Higgs exchange with g n ≈ g p ≈ 1.16 · 10 −3 sin θ, using state-of-the-art values for the f q [74].
3 Constraints from direct dark matter searches
Conventional light dark matter detection
Direct detection experiments probe the elastic scattering cross section σ SI χN between DM particles χ and nuclei N (since we only consider scalar mediators, we restrict our discussion to spin-independent scattering) at finite (spatial) momentum transfer
JHEP03(2020)118
where T N is the nuclear recoil energy. For better comparison, however, these results are typically reported in terms of the inferred cross section per nucleon, σ SI , at zero momentum transfer. An assumption that is often adopted for the sake of this translation is that of isospin-conserving couplings, which is almost perfectly satisfied for a scalar with Yukawalike coupling structure. This leads to the familiar coherent enhancement of Here µ χN and µ χp are the reduced masses of the DM/nucleus and DM/nucleon system, respectively, and A is the atomic mass number of the nucleus N . Compared to this, the cross section at finite momentum transfer is suppressed by a nuclear form factor G N , This form factor is conventionally computed as the Fourier transform of the nuclear density profile, i.e. under the assumption that the scattering on the nucleons does not induce an additional momentum dependence. For an interaction mediated by a scalar S it is straightforward to calculate the nonrelativistic scattering cross section as [45] σ SI χN (Q 2 = 0) = 4) where g N denotes the coupling between S and the nucleus, i.e. g N = Ag p if isospin is conserved. In the case of Higgs mixing, using eqs. (2.6), (3.2) and (3.4), this translates to the DM-proton scattering cross section For a heavy mediator, this expression can directly be compared to standard limits on σ SI because scattering on nucleons is essentially momentum-independent. If the mediator is light compared to the typical momentum transfer, however, the cross section probed in the detector is smaller than expected from eq. (3.3) and limits have hence to be re-evaluated taking into account all the relevant experimental information. An approximate -but still reasonably accurate -way of taking into account the momentum suppression consists in simply rescaling (see, e.g., ref. [75]) whereσ χN is the limit reported under the assumption of a constant scattering cross section -in terms of σ SI as given in eq. (3.2) -and Q 2 ref is an experiment-specific reference momentum transfer (see also ref. [76] for a discussion of how to explore light mediators with direct detection experiments).
In figure 1 we summarise the most stringent (projected) direct detection constraints at low DM masses, along with the value of Q 2 ref that we use for the corresponding experiment. The latter was either estimated by using eq. (3.1) for the minimal recoil energy adopted in the respective analysis, or by directly fitting to data provided by the experiment (for PandaX-II [25]). We note that carefully modelling inelastic scattering processes, resulting in the emission of a photon or an atomic electron, in principle allows to improve sensitivities in the few 100 MeV range [83][84][85]. There is also a number of proposed direct detection experiments, and ideas, that would probe even smaller cross sections in the mass range shown in figure 1, but the status of those is presently less certain (for a recent compilation, see refs. [35,48]).
Cosmic ray-accelerated dark matter
The right panel of figure 1 clearly illustrates the exponential loss of sensitivity of conventional direct detection experiments to sub-GeV DM, reflecting the fact that non-relativistic DM particles with such small masses do not carry enough momentum to allow for nuclear recoils above the experimental threshold. As recently pointed out, however, there is a small yet inevitable component of relativistic DM that alleviates this limitation [26]: 3 if DM can elastically scatter with nuclei, then also the well-established population of high-energy cosmic rays will scatter on DM, thus accelerating them from essentially at rest (in the galactic frame) to GeV energies and beyond -in principle for arbitrarily small DM masses.
In order to handle scattering via light mediators we extend the formalism developed in ref. [26] to allow for arbitrary relativistic scattering amplitudes (rather than only a constant σ χN as assumed there). As the derivation follows the same steps as in ref. [26], we only briefly state our results here and refer to that reference for further details (see also JHEP03(2020)118 ref. [94]). The flux of cosmic-ray accelerated DM (CRDM) before a potential attenuation in the Earth or the atmosphere is given by Here, ρ local χ and Φ LIS CR are the local interstellar DM density and the cosmic-ray flux, respectively, and T min CR is the minimal kinetic cosmic-ray energy needed to accelerate DM to kinetic energy T χ ; we take into account elastic scattering of cosmic-ray nuclei N = {p, 4 He} with DM, including in each case the same dipole form factor suppression as in ref. [26]. 4 D eff ∼ 8 kpc, finally, is an effective distance out to which we assume that the source density of CRDM is roughly the same as it is locally (which, for a standard DM distribution, corresponds to a sphere of about 10 kpc diameter). The scattering rate of relativistic CRDM particles in underground detectors is then determined as where the scattering cross section dσ χN /dT N must be evaluated for the actual DM energy T z χ at the detector's depth z (which is lower than the initial DM energy T χ due to soil absorption [95][96][97][98]), and T χ (T z,min χ ) denotes the minimal initial CRDM energy that is needed to induce a nuclear recoil of energy T N (again taking into account a potential attenuation of the flux due to the propagation of DM through the Earth and atmosphere). In order to relate T z χ to the initial DM energy T χ = T z=0 χ , we numerically solve the energy loss equation where T max N denotes the maximal recoil energy T N of nucleus N , for a given DM energy T z χ , and we sum over the 11 most abundant elements in Earth's crust. It is worth stressing that the momentum transfer in a direct detection experiment is given by eq. (3.1) also in the relativistic case. In particular, the form factor in the nuclear scattering cross section does not depend on the energy of the incoming DM particles, only on the relatively small range of Q 2 that falls inside the experimental target region. This makes it straightforward to translate direct detection limits reported in the literature for heavy DM, assuming the standard DM halo profile and velocity distribution, to a maximal count rate in the analysis window of recoil energies and in turn to limits resulting from the CRDM component discussed here [26]. The updated routines for the computation of the resulting CRDM flux and underground scattering rates have been implemented in DarkSUSY [99], which we also use to calculate the resulting limits from a corresponding re-interpretation of Xenon-1T [24] results. Figure 2. Left panel. Direct detection constraints on dark matter accelerated by cosmic rays for fixed mediator masses. Cross sections below the lower boundaries lead to recoil rates too small to be detectable, while cross sections above the upper confining boundaries prevent the dark matter particles to reach the detector, due to efficient scattering in the overburden. As a rough indication of how large cross sections are in principle possible, we also show in each case the parameter range where the couplings are well inside the perturbative regime (for a more detailed treatment, see ref. [100]). Right panel. Same, for fixed mediator to DM mass ratios.
In order to do so, we still need the full relativistic scattering cross section of DM with nuclei, mediated by a scalar particle. For fermionic nuclei we find where σ SI,NR χN is the scattering cross section in the highly non-relativistic limit, as stated in eq. (3.4), s = E 2 CM and G N (Q 2 ) is the conventional nuclear form factor. While the non-relativistic result is of course recovered for Q 2 → 0 and s → (m χ + m N ) 2 , this cross section is actually enhanced for Q 2 m 2 χ when compared to the standard estimate given in eq. (3.6). This is particularly relevant both for very light DM (m 2 χ Q 2 ref ) and the production of the CRDM component stated in eq. (3.7), for which the momentum transfer is typically much larger than expected in underground experiments.
In figure 2 we show the resulting limits from Xenon-1T on light DM. An important feature of a constant scattering cross section is that these constraints (almost) flatten for very small DM masses [26]. Compared to that, as expected from the above discussion (see also ref. [94]), we observe a significant strengthening of our constraints at fixed mediator masses. However the figure also clearly demonstrates that for light mediator masses the production of the CRDM component becomes suppressed by the mediator momentum; when considering only mediators that are lighter than the DM particle, in particular, the resulting constraints become less and less stringent. Also the behaviour of the maximal cross section (due to soil absorption) is rather instructive, as it falls into two clearly distinguishable regimes: i) for heavy (GeV-scale and above) mediators the upper boundary essentially follows that of the constant cross section case [26], roughly rescaled by an additional m 2 χ dependence (for small m χ ) with the same origin as discussed above for the lower JHEP03(2020)118 boundary; ii) for lighter mediator masses, the momentum suppression starts to become relevant, strongly favouring scattering events in the overburden with small momentum transfers -which in turn leads to a significantly increased penetration depth, and hence weaker constraints. Let us, finally, stress that the limits presented in figure 2 in principle apply to any model with scalar mediators, i.e. they are not restricted to the specific structure of the DM-nucleon coupling given in eq. (2.6).
Constraints from particle physics experiments
Let us now turn to constraints on the scalar portal model from particle physics experiments. In the following we concentrate mostly on the case m S m χ so that the annihilation channelχχ → SS is kinematically allowed in the early universe. The reason is that for m S m χ only direct annihilations into SM states via an s-channel scalar singlet are allowed,χχ → S → SM (see section 5.1 for a more detailed discussion). The corresponding annihilation rate, however, is typically constrained to be too small to allow for the observed DM relic abundance (see e.g. [45]), making this case less appealing. Note that m S m χ naturally implies that the singlet scalar S can only decay to SM states that can potentially be observed in detectors ('visible decays'). Depending on the mixing angle θ, however, the lifetime of S can be so long that the decay happens only outside of the detector and the signature is therefore identical to an invisibly decaying scalar. While we mainly concentrate on this case, we will also briefly comment on the case m S m χ .
An important property of the inherited Yukawa-like coupling structure is that the production of S may well proceed via one of the larger Yukawa couplings, while its decay is typically controlled by smaller couplings because only light states are kinematically accessible. In particular, flavour changing transitions induced at the loop level are typically very relevant (see, e.g., [101]) and lead to the production via rare meson decays such as B → KS and K → πS, which are strongly constrained by a variety of experiments [15,[102][103][104]. Constraints on light scalars as well as projected sensitivities have been evaluated frequently in the literature, with a recent compendium of limits shown e.g. in ref. [105] (see also ref. [106], pointing out that such a light scalar could even drive cosmological inflation). In addition invisible decays of the SM Higgs into DM, h →χχ can give relevant constraints on the same product of couplings, g χ · sin θ, that is relevant for direct detection. In the following we briefly summarise the limits that we use in our analysis.
Invisible Higgs decay and signal strength
Data on Higgs bosons created at the LHC in principle constrain the SM Higgs mixing angle θ in two ways. First, invisible Higgs decays are constrained as BR(h → inv.) < 0.19 (95% C.L.) [107]. Second, the observed Higgs signal strength where σ h is the Higgs boson production cross section and N exp,SM h is the number of observed and expected Higgs bosons, respectively, is constrained to be µ > 0.89 (95% C.L.) [108]. In JHEP03(2020)118 our model the latter constraint is more stringent because the Higgs boson production cross section can only be reduced compared to the SM case, thus implying BR(h → inv.) < 0.11.
Specifically there are three effects that lead to a reduction of the signal strength: 1. Reduction of production cross section and decay widths for h, due to mixing.
3. Decays into two scalars, which depletes the branching ratio in the remaining channels.
In our case the ratio of the production cross sections is simply given by and for the branching ratios we have Here Γ 0 ≈ 4.1 MeV is the total SM Higgs width (without mixing), is the partial decay width for invisible decays and Γ SS is the Higgs boson decay width into two scalars (see eq. (2.5) and related discussion). Here we conservatively assume λ hs to be negligibly small, and hence set Γ SS ≈ 0. Taken together, the limit resulting from the predicted Higgs signal strength is thus given by which for m χ m h implies This limit will soon be improved with data from the 13 TeV run (see e.g. [109]). For the high luminosity phase of the LHC the direct bound on the invisible branching ratio will become more constraining and we use as an estimate of the future bound [110], thus strengthening the bound in eq. (4.6) by a factor of about 0.21. 5 JHEP03(2020)118
Limits on sin θ from beam dumps and colliders
As already mentioned, singlet scalars S can be efficiently produced via the decay of heavy mesons which in turn are copiously produced at the collision energies available at SPS and LHC based intensity frontier experiments, see e.g. ref. [101]. 6 For scalars too heavy to be produced in B meson decays this production mechanism is not available and the most constraining limit comes from LEP [105].
For scalar masses kinematically accessible to B → KS but too large to allow for K → πS, the strongest constraints are typically obtained from experiments where both the scalars can be created and their decay products can be detected. For experiments of this type the number of detected events thus scales with sin 4 θ (at the lower bound of sensitivity). For our analysis we use the results from LHCb [112] as well as a reinterpretation [105] of the past beam-dump experiment CHARM [113]. Values of sin θ just below the current sensitivity will be tested by LHCb in the high luminosity phase and we estimate the corresponding sensitivity (see appendix A for details). For smaller values of sin θ the SHiP experiment [15,17,114] and MATHUSLA [19,21] have the best sensitivities m S 5 GeV [6]. Both experiments aim at working in the background-free regime (see refs. [16,17] for detailed simulations for SHiP and refs. [20,21,115,116] for MATHUSLA). In reality, it is very difficult however to completely exclude the possibility of residual background events to take place in the detector. In case of SHiP such events can be distinguished from the signal events by making use of the spectrometer, mass reconstruction and particle identification. These options are not available in the case of MATHUSLA, implying that it is not straight-forward to compare its reported formal sensitivity (based on 2.3 events in the detector) to the one from SHiP. For this work we will therefore concentrate on the expected bounds from SHiP.
For smaller scalar masses, m S < m K −m π ≈ 350 MeV, experiments that search for rare kaon decays are typically more sensitive. The reason is that, unlike for particles heavier than kaons, one can perform precision measurements of the final state pion energy, on an event by event basis. The number of confirmed signal events thus no longer depends on the detection of the scalar decay products and therefore scales as sin 2 θ, i.e. is much less suppressed than in the case of heavier scalars. Both experiments E949 [117] and NA62 [7] search for rare K + → π + + MET decays and give bounds on the scalar mixing through the process K + → π + S. As this is independent of the decay of S, scalars with arbitrarily small masses can be constrained. In addition to the limit from E949 we estimate the sensitivity of NA62 during LHC Run 3 (see appendix B for details); we note that the resulting sensitivity is very similar to the sensitivity of the proposed KLEVER experiment shown in ref. [6]. used for this study. 7 For comparison, we also show existing limits from astrophysics; those will be discussed in more detail in section 5.4. While we mainly concentrate on the case m S < m χ as discussed above, we will also consider parameter regions in which m S > 2m χ and therefore invisible decays of the scalar naturally occur. In this case not all collider limits shown in figure 3 directly apply. To be specific, for m S = 0.1 GeV we will use the limits from E949 and NA62 as shown in figure 3 while for m S = 1 GeV the most stringent bound comes from the BaBar measurement of BR(B + → K +ν ν) < 1.6 · 10 −5 [118]. Making use of the partial decay width B → KS (see e.g. [63]), this translates into sin θ 6 · 10 −3 for m S 4 GeV.
Cosmological evolution of the dark sector
In this section, we describe the full thermal evolution of the dark sector particles, χ and S, which can be qualitatively divided into five, partially overlapping stages.
T > T dec
At high temperatures, the dark and the visible sector can be in chemical equilibrium due to the processes χχ ↔ ff , S ↔ ff and SS ↔ ff . In that case both sectors also share the same temperature, through efficient scattering of the involved particles, so 7 Nominally, there is a small gap in projected sensitivity at around mS ≈ 1 GeV and sin θ ≈ 5 · 10 −5 between the future exclusion power of the HL LHCb and the upper range of validity of the ShiP limits. This window however, will most likely be closed by i) slightly stronger (upper) limits of FASER2 [6] compared to SHiP and ii) the fact that in addition to B + → K + µ + µ − the channel B + → K + ππ will also be analysed by LHCb. The corresponding limit is expected to be more stringent than our estimate around mS ∼ 1 GeV owing to the fact that the branching ratio of S into pions is strongly enhanced compared to the branching into muons in this mass range, see e.g. [105]. When presenting our final results for the projected sensitivity of future experiments in this mass range, we will thus just use the lower sensitivity bound of SHiP, sin θ ∼ 10 −6 .
JHEP03(2020)118
the temperature ratio is simply unity. For very small values of the mixing angle θ, however, the total interaction rate Γ DS↔SM between the two sectors is never large enough to bring them into thermal contact. Adding additional high-scale interactions to our model Lagrangian (2.1), on the other hand, would still allow to achieve chemical equilibrium for very large temperatures, without affecting the low-energy phenomenology. In this work, we will consider both of these possibilities and separately indicate the relevant parts of parameter space in our results.
T < T dec . At some temperature T dec the dark sector decouples from the visible sector.
To an approximation sufficient for our purpose, this happens when the total interaction rate equals the Hubble rate, This relation, in other words, allows to determine T dec as a function of our model parameters (m χ , m S , g χ and θ). In practice we compute Γ DS↔SM only for the three numberchanging reactions that keep up chemical equilibrium as mentioned in the previous paragraph (T > T dec ); elastic scattering Sf ↔ Sf will enforce kinetic equilibrium to be maintained slightly longer -but this is a small effect given that both scattering partners are relativistic around T dec . After decoupling the scalar mediators still retain a thermal distribution with temperature T S as long as they are relativistic (while non-relativistic scalars start to build up a chemical potential, even if a large quartic coupling λ S can keep them in local thermal equilibrium). Moreover, for sufficiently large dark couplings g χ , they are also kept in thermal equilibrium with the DM particles. Taken together, this leads to separate entropy conservation in the dark and visible sectors, and hence a temperature ratio that changes with the respective number of effective entropy degrees of freedom as It is worth stressing that this equation only holds as long as i) the scalars are still relativistic and ii) entropy is actually conserved, i.e. before S has decayed.
T > T fo . Above a certain temperature T fo , the DM particles will typically be in chemical equilibrium with the mediators and/or the SM heat bath. The former is achieved via the annihilation process χχ ↔ SS, while the latter is only relevant for T > T dec and happens dominantly through the s-channel process χχ ↔ ff . As the temperature approaches T fo , the DM number density freezes out and thereby sets the relic abundance of χ in the usual way. We numerically calculate the relic abundance with DarkSUSY [99] including the Sommerfeld enhancement of the annihilation rate, by modifying the implemented dark sector mediator model (vdSIDM) such as to fully include the temperature evolution of JHEP03(2020)118 ξ(T ) as specified in eq. (5.3); while the t/u-channel annihilation processes are identical to that model, we update the s-channel annihilation rate to the one relevant for our case, where Γ S represents the total width of S, and Γ S ( √ s) the width to SM particles assuming that S had a mass of √ s rather than m S . Assuming that DM is entirely produced via thermal freeze-out thus essentially fixes the dark coupling g χ as a function of the other model parameters. For our final results we will both use this assumption and demonstrate how the constraints on the model are affected if this assumption is relaxed (thus allowing for alternative DM production mechanisms). If m S m χ only s-channel annihilation is kinematically accessible; obtaining the correct DM abundance via thermal freeze-out would then require larger values of sin θ than compatible with the constraints presented in section 4.2 [45]. For 0.1 m S /m χ 1, on the other hand, the freeze-out process would involve two species that are no longer in chemical equilibrium, and where eq. (5.3) no longer applies. As such a situation would require a dedicated analysis, we will in the following leave this small part of the parameter space unexplored.
T < T fo . After the freeze-out of the dark matter particle, the mediator simply acts as an additional contribution to the energy density -until it decays to SM particles. Both stages have an impact on BBN, as discussed below. The corresponding lifetime of the scalar depends on the available decay channels and we adopt the results from ref. [119] in the following.
Let us, finally, stress again that, depending on the values of masses and couplings, DM freeze-out can in principle happen both before (T fo > T dec ) and after (T fo < T dec ) the decoupling of the two sectors. In this work we will restrict ourselves to light mediators with m S ≤ 0.1 m χ when discussing thermal DM production (see discussion above). In this case, taking into account the constraints on sin θ that result from direct searches for S, it turns out that we are always in the domain of T fo < T dec . Ultimately, this is a consequence of the Yukawa structure of the dark sector coupling, and the fact that we restrict our analysis to light DM.
Big Bang nucleosynthesis
In this section, we calculate BBN constraints for the scalar portal model using the formalism that was developed in [120][121][122][123], carefully taking into account the cosmological evolution of the dark sector as described in section 5.1. Specifically, in order to use the modelindependent constraints from ref. [120], we have to evaluate the values of τ S , T fo and ξ(T fo ) for every combination of m S , m χ and sin θ, thereby fixing g χ by the requirement of reproducing the observed relic abundance as described above. We then confront the predicted abundances of light nuclei in each parameter point to the following set of observed JHEP03(2020)118 primordial abundance ratios [124]: (Here Y p denotes as usual the primordial mass fraction of 4 He; we find that the requirement of obtaining the correct nuclear abundance ratio for 3 He/D leads to weaker limits than the above two constraints for all parts of parameter space.) Theoretical uncertainties associated to the nuclear rates entering our calculation are taken into account by running AlterBBN v1.4 [125,126] in three different modes corresponding to high, low and central values for the relevant rates. We then derive 95% C.L. bounds by combining the observational and theoretical errors as described in more detail in refs. [120,121].
In figure 4 we present the constraints from BBN as a function of m S and sin θ for fixed mass ratios m χ /m S (panels on the left) and as a function of m χ and sin θ for fixed mediator masses m S (panels on the right). The solid black line indicates the overall limit. In addition we also show which nuclear abundance causes an exclusion in which part of parameter space. In the pink (blue) region the 4 He abundance is too large (too small) compared to the observationally inferred values, while the constraints due to an underand overproduction of deuterium are shown in grey and purple, respectively. In addition, we show the lifetime of S as a function of sin θ for reference (green dashed lines); the fact that we identify excluded regions with τ S < 1 s implies that the often adopted 'conservative BBN limit' of τ S = 1 s is not always conservative.
It can be seen in the figure that the limits depend significantly on the value of m S as this quantity determines the possible decay channels of the mediator and therefore the lifetime, while the dependence on the dark matter mass is more indirect (but still not negligible) via its impact on the temperature ratio ξ. More concretely, we can distinguish the following regimes: • For m S > 2m µ the limit on sin θ is rather weak due to the small mediator lifetime above the muon threshold. For m S > 2m π also hadronic decays would become relevant for very small values of sin θ, see e.g. ref. [119]. 8 Overall we find that for values of sin θ relevant to this study, the mediator already decays before the onset of BBN for m S > 2m µ , therefore not causing any observable consequences for the range of direct detection cross sections we consider.
• For 2m e < m S < 2m µ the scalar can decay before, during or after BBN, depending on the value of sin θ. In this region of parameter space, the presence of the dark sector influences BBN via two different effects: (i) an increase of the Hubble rate due to the extra energy density of the dark sector and (ii) entropy injection into the SM heat bath due to scalar decays into electromagnetic ration. Both affect the synthesis of light elements as discussed in detail in ref. [120] and are fully taken into account in our evaluation. For lifetimes τ S 10 4 s photodisintegration also becomes relevant,
JHEP03(2020)118
but does not exclude any additional regions of parameter space (at least in case DM is produced via freeze-out).
• For m S < 2m e , the scalar S can decay only into photons, which leads to a drastically increased lifetime. Consequently, for comparably small values of sin θ, the mediator outlives the creation of the light elements, thereby acting as an additional relativistic degree of freedom, whose presence can be robustly excluded by current BBN constraints (even stronger constraints in the case of such late decays arise from photodisintegration of light elements [120,127,128] and the CMB [129]). For very large values of sin θ, on the other hand, S again decays during BBN -but since this case is strongly excluded by other considerations, cf. section 4, we do not perform a detailed study of BBN limits in this regime.
As discussed in section 5.1, for sufficiently small values of the mixing angle θ, the two sectors will never thermalise via the Higgs portal; this is indicated by the dashed black line in figure 4. The conservative BBN limits which do not make any additional assumptions about early universe cosmology therefore naïvely end at this line. 9 While the overall limit looks very similar for a given scalar mass m S , the thermalisation line is rather sensitive to the DM mass m χ which directly translates into rather different conservative BBN limits for the different cases. For the calculation of the BBN limits below this line we assume ξ(T → ∞) = 1, i.e. that both sectors were thermalised at very large temperatures by some additional processes that are not covered by the model Lagrangian in eq. (2.1). If the two sector never thermalised, on the other hand, the bound would depend on the initial temperature ratio ξ (or ratio of energy densities). For ξ(T → ∞) = 0 only the freeze-in contribution would remain. Nevertheless, even in this case stringent bounds from photodisintegration and the CMB are expected for sizeable regions of parameter space. To indicate this additional uncertainty the BBN limits below the thermalisation line have a lighter shading.
Dark matter self-interactions
The exchange of a scalar particle generates an attractive Yukawa potential between two DM particles, resulting in a self-interaction rate that strongly depends on the couplings g χ , the DM and mediator masses m χ and m S , and the relative velocity v of the scattering DM particles. For the range of parameters we are interested in here, in particular, these interactions typically show a characteristic resonant structure, resulting in a large enhancement or suppression of the momentum transfer cross section σ T when varying, e.g., the dark coupling g χ . In this regime, analytic expressions are available that result from restricting the analysis to s-wave scattering and approximating the Yukawa potential by a Hulthén potential [43]. While these expressions result in a reasonable estimate for height 9 The bounds may be significantly stronger taking into account an irreducible contribution from freezein production of either the DM particle or the mediator. In fact even a small abundance of mediators is constrained if the lifetime is such that photodisintegration is relevant. A detailed exploration of this parameter region is left for future work. Also, as stated before, we assume λ hs 0. Sizeable values for this quartic inter-sector coupling could further shift the region of thermalisation to smaller values of sin θ. and as a function of m χ for fixed masses m S (right panels). Below the dashed black line, the Higgs portal by itself is insufficient to ever thermalise the dark sector with the SM. In addition to the overall limit (solid black line), we also separately show the regions of parameter space which are excluded due to D underproduction (grey), D overproduction (purple) and/or 4 He underproduction (blue), 4 He overproduction (pink). For m S 0.1m χ the thermal evolution is not fully captured by our calculation and thus the limits are only approximate, as indicated by the hatch pattern.
JHEP03(2020)118
and location of resonances in σ T , we find that they significantly underestimate the numerical value of σ T in the vicinity of anti-resonances (see also [130]). In our analysis, we thus always solve the underlying Schrödinger equation for the full Yukawa potential numerically, including also higher partial waves. We do so by following the treatment in ref. [131], thus also correctly taking into account the indistinguishability of DM particles in the definition of the momentum transfer cross section.
In the cosmological concordance model, DM is successfully described as a collision-less fluid. In fact, astrophysical observations from dwarf galaxy to cluster scales stringently limit how much the properties of the putative DM particles can deviate from this simple picture (for a review, see ref. [132]). Here we adopt as a fiducial maximal value for the allowed effective momentum transfer cross section at a relative DM velocity of v rel χ = 1000 km/s. This corresponds roughly to the constraint inferred from the observation of colliding galaxy clusters [132][133][134], which are highly DMdominated systems and hence often argued to be less prone to modelling uncertainties of the baryonic component. Another advantage is that these observations more directly constrain the DM self-interaction rate, while the widely used translation of individual halo properties -like their core size -to bounds on σ T is subject to a non-negligible intrinsic modelling uncertainty (for a recent discussion, see ref. [135]). 10 We note that astrophysical observables do not depend on σ T alone, but may also depend on the frequency of scatterings [139]. For the case of a light mediator as studied here, this can be substantially different from a contact-like (isotropic) DM scattering, which is usually assumed in N -body simulations (see however refs. [140][141][142] for first studies including angular dependent and frequent scatterings). On the other hand, bounds on the self-interaction rate have been reported that are significantly stronger than the fiducial maximal value(s) of σ T that we adopt in our analysis [138,[143][144][145]. Overall, taking into account the above mentioned uncertainties, we expect that eq. (5.7) leads to realistic constraints on the dark coupling g χ .
Due to the characteristic resonant structure of σ T , the inversion of these limits to constraints on g χ is in general not unique. For given values of m χ and m S , in particular, there is always a maximal value g min χ such that eq. (5.7) is satisfied for all values of g χ < g min χ . In the resonant regime, however, it is possible to hit anti-resonances, and thus to decrease the cross section by increasing the coupling beyond g min χ . In other words, there can be further -sometimes only very narrow -parameter ranges with g χ ∈ (g min χ , g max χ ) that satisfy eq. (5.7). Here, g max χ denotes the maximal value for which the self-interaction constraint can in principle be satisfied, due to the appearance of anti-resonances in the scattering amplitude. Requiring g χ < g max χ is thus the most conservative way of implementing the self-interaction constraint, but it neglects the fact that many values of g χ < g max χ are ac- 10 Observations of dwarf galaxies, and their translation to limits on σT , are prone to much larger uncertainties [136][137][138]. On the other hand, for light mediators the effective self-interaction rate is considerably stronger in these systems than in galaxy clusters. Adopting for example σT /mχ < 100 cm 2 /g for relative DM velocities of v rel χ = 30 km/s, as a relatively conservative fiducial constraint, would lead to stronger constraints than eq. (5.7) only for mS 10 −3 mχ.
JHEP03(2020)118
tually excluded; requiring g χ < g min χ is more aggressive, but in some sense more generic (because it applies even outside the range of couplings where anti-resonances appear). To reflect this situation, we will in the following show results for both sets of constraints independently. We note that numerically it is straight-forward to determine g min χ and g max χ because the anti-resonances are much less pronounced than what the analytic approximation for s-wave scattering [43] would suggest.
Further astrophysical and cosmological bounds
Weakly coupled light particles can be copiously produced in the interior of stars or in the hot core of a supernova (SN) via their interactions with electrons or nucleons. For sufficiently weak couplings these particles escape the celestial body without further interactions and therefore constitute a new energy loss mechanism which is constrained by observations. We take the resulting limits on sin θ from red giants (RG) and horizontal branch stars (HB) from ref. [146]. The bound from SN 1987A which extends to larger masses because of the correspondingly larger core temperature we take from ref. [105], noting that this is an order of magnitude estimate with inherently large uncertainties. The lower boundary of this limit is determined by estimating the additional energy loss due to escaping scalars which would shorten the observed neutrino pulse. For m S < 2m χ the SN limit does not extend to arbitrarily large couplings due to efficient trapping of light scalars inside the SN; for larger scalar masses, on the other hand, there is no upper boundary of the limits because the scalar will decay invisibly to DM particles that escape the SN without interacting. These constraints are shown as grey areas in the sin θ − m S plane in figure 3, where we have used a hatched filling style for the SN bounds (assuming m S < 2m χ ) to stress their intrinsic uncertainty.
We remind that the usually very strong CMB bounds on annihilating light DM can be evaded in this model because the annihilation proceeds via a p-wave and is hence strongly velocity-suppressed (also, there are no remaining light degrees of freedom that would change the expansion rate at these times). While the elastic scattering of DM with SM particles is enhanced for light mediators S, the coupling of S to photons is still not sufficient to prolong kinetic decoupling until times where an appreciable cutoff in the matter power spectrum, and hence a potential imprint in e.g. Ly-α data, would be expected (see ref. [147] for a more detailed discussion).
Results
As motivated in the introduction, we now want to combine the various constraints discussed in the previous sections in the 'direct detection plane', i.e. as bounds on σ SI as a function of the DM mass. For a given value of the scalar mass m S (and fixed m χ ) only the invisible Higgs decay constrains the same combination of parameters (sin θ · g χ ) that enters the expression for σ SI , cf. eq. (3.5). In all other cases, we thus need to combine two types of observations to constrain these parameters individually. As the particle physics constraints discussed in section 4, but also the astrophysical constraints from section 5.4,
JHEP03(2020)118
are essentially insensitive to g χ , a handle on the dark coupling has to be provided by cosmology. Concretely, we will distinguish three versions of cosmological constraints (roughly ordered by decreasingly strong underlying assumptions): • Cosmo 1 ('thermal production'). The present dark matter abundance can be fully explained by the production of χ particles via freeze-out in the early universe, as detailed in section 5.1, which requires dark and visible sector to have been in thermal equilibrium at some point. No further interactions than those specified in eq. (2.1) are assumed. 11 • Cosmo 2 ('generic self-interactions'). No connection between the dark coupling g χ and the DM abundance is assumed, allowing for other DM production mechanisms. 'Generic' constraints on DM self-interactions are adopted, as detailed in section 5.3, i.e. we assume that g χ does not lie close to an anti-resonance in the elastic scattering cross section.
• Cosmo 3 ('conservative self-interactions'). As Cosmo 2, but implementing the most conservative constraints from DM self-interactions; larger values of g χ would thus violate eq. (5.7) even when finely tuned to lie on an anti-resonance.
BBN constraints are tightly coupled to the assumed thermal history, so for those we will always assume thermally produced DM ('Cosmo 1'). Finally, we always adopt a hard 'perturbative unitarity limit' of g χ < √ 4π in case the respective cosmological constraint would be weaker.
In figure 5 we show our results for selected mass ratios m S /m χ < 2 where invisible decays of the scalar are kinematically forbidden. In each case, the left column displays current limits while the right column displays projected limits. Conventional direct detection limits (rescaled from figure 1) are shown as grey areas. Current limits from cosmic-ray upscattering (figure 2) are shown in light blue; for the projected limits we take the sensitivity of DARWIN [79], based on the assumption that the recoil threshold can be lowered to 1 keV. Limits from invisible Higgs decay, cf. eqs. (4.5) and (4.7), are shown in green. In red, we combine the particle physics limits shown in figure 3, while the yellow lines show a combination of the astrophysical limits discussed in section 5.4. 12 The various ways of implementing cosmological limits are indicated with dotted lines ('Cosmo 1'), dashed lines ('Cosmo 2') and solid lines ('Cosmo 3'), respectively. Given the difficulties in accurately computing the thermal evolution of the dark sector for m S 0.1 m χ , see the discussion in 11 For mS mχ the relic density actually depends on the same combination of couplings (sin θ · gχ) as direct detection rates; as noted before, it is not possible to obtain the correct relic density and at the same time satisfy existing bounds on θ in this case. 12 From the shape of these limits, the potentially controversial part that derives from SN bounds is clearly discernible. We note that only in the case of 'Cosmo 1', which fixes gχ by the requirement of obtaining the correct relic density, the range in θ excluded in figure 3 (for a given value of mS) translates to a correspondingly excluded range of σSI. If instead there is only an upper limit on gχ, as in the case of 'Cosmo 2' and 'Cosmo 3', the direct detection cross section σSI ∝ g 2 χ sin 2 θ remains essentially unconstrained by this bound (in other words, while gχ still cannot be chosen so small that sin θ > 1, for a given value of σSI, this only results in a limit too weak to be visible in the figure). JHEP03(2020)118 section 5.1, we do not display 'Cosmo 1' limits in this regime. For the case of BBN limits (shaded blue areas), we also indicate (as in figure 4) the parameter region where additional high-scale interactions would be required to thermalise the dark and visible sectors in the very early universe; BBN limits that rest on this additional assumption are plotted with a hatched filling style. (Note that, compared to figure 4, BBN limits appear to have a stronger dependence on m χ here; this is exclusively because the relic density constraint fixes g χ as a function of m χ .) The figure nicely illustrates the complementarity of the different approaches to test models with light mediators. If DM is thermally produced, in particular, current bounds already reduce the remaining parameter space for sub-GeV DM to a relatively small region of mediator masses above a few MeV, and mass ratios 0.01 m S /m χ 0.1 (see also [148] for a discussion of BBN limits in a similar context). Here it is worth commenting that BBN limits far below the thermalisation line essentially just constrain the assumed highscale temperature ratio between the two sectors; in this sense they simply exclude this additional assumption and are completely independent of the specific model Lagrangian stated in eq. (2.1). On the other hand the robust bounds may significantly extend below the thermalisation line taking into account the irreducible contribution from freeze-in production.
Even if no assumptions about the thermal history and production of DM is made, on the other hand, the combination of the limits displayed in figure 3 with those stemming from DM self-interactions translate to highly competitive constraints on σ SI . For mediator masses close to the DM mass, in particular, those constraints already fully cover the expected reach of upcoming direct detection experiments. Interestingly, we arrive at this conclusion independently of which set of SIDM constraints we implement ('Cosmo 2' or 'Cosmo 3'); let us stress, however, that the limits presented in figure 5 indeed strongly depend on fully solving the Schrödinger equation to obtain the self-interaction cross section σ T in the resonant regime, rather than following standard practice and simply adopting analytic results for s-wave scattering. From the perspective of future direct detection experiments, the most interesting parameter range to be probed -fully orthogonal to what can be tested by particle physics experiments -is the sub-GeV DM range combined with scalar masses significantly lighter than DM (but heavier than about 0.2 MeV, where astrophysical limits start to dominate).
In order to add a slightly different angle to the above discussion, we show in figure 6 the same constraints for selected fixed scalar masses m S instead. This includes kinematical situations with m S > 2m χ where the scalar can decay very efficiently to two DM particles, i.e. through an invisible channel. As discussed in section 4.2, the particle physics constraints hence need to be adapted correspondingly, and we thus only keep those limits shown in figure 3 that are still relevant in this situation (and add that from BaBar [118] for the case of m S = 1 GeV). 13 For invisible decays, furthermore, there is also no upper boundary to the area excluded by energy loss arguments in supernovae (as in figure 3). This implies that for JHEP03(2020)118 Figure 5. Current (left) and projected (right) limits on the elastic scattering cross section with nucleons in the zero-momentum transfer limit, for fixed scalar to DM mass ratios m S /m χ that do not allow invisible decays of S. For astrophysical and particle physics limits combined with cosmological limits, dotted lines assume thermal DM production via freeze-out ('Cosmo 1'), dashed lines instead implement generic DM self-interaction constraints ('Cosmo 2') while solid lines result from tuning g χ such as to resonantly suppress the DM self-scattering rate ('Cosmo 3'). See text for further details.
JHEP03(2020)118
2m χ < m S 0.2 GeV, unlike the situation in figure 5 for visible decays, the combination of SN bounds and SIDM constraints indeed does combine to a very competitive bound on σ SI (though, as discussed in section 5.4, SN limits should be interpreted with care).
While the limits from cosmic-ray upscattered DM now become more visible, it is clear that they are never competitive to other limits in this model. Also limits from invisible Higgs decay, while significantly more stringent, are rarely strong enough to be competitive; this would change only with a dedicated Higgs factory like the ILC. In general, one can say that astrophysical, particle physics and direct detection limits probe the parameter space from rather orthogonal directions. While astrophysical constraints are most relevant for small DM (or, rather, mediator) masses, direct detection experiments place the strongest limits for large DM masses. The m χ -dependence of constraints on σ SI stemming from particle physics, on the other hand, is somewhat weaker. Consequently, particle physics (combined with cosmological input) tends to place the most relevant constraints on the model at intermediate DM masses (for the sub-GeV range that we consider here), and the most promising avenue for direct DM searches appears to lie in lowering the detection threshold, even slightly, in a way that compromises the overall sensitivity as little as possible (this, in other words, will test more of the so far unprobed parameter space than focussing on very low thresholds at the expense of overall sensitivity).
Discussion and conclusions
In this work we have considered the prospects of future direct detection experiments to test uncharted parameter space for light (sub-GeV) dark matter. It is natural in this context to expect additional light particles mediating the interactions between dark matter and the target nuclei in order to achieve a sufficiently large scattering cross section. To alleviate the strong cosmological bounds from the CMB we have concentrated on a scenario in which dark matter couples via a scalar mediator (with coupling g χ ) such that dark matter annihilations proceed via p-wave and are therefore strongly suppressed at the time of the CMB. This allows the dark matter relic abundance to be set via thermal freeze-out, although other production mechanisms are possible, and our bounds also apply to more general cases. We assume that couplings to Standard Model states are induced by the well-known Higgs portal with mixing angle θ.
The DM scattering cross section off nuclei is then proportional to the product of couplings, g 2 χ · sin 2 θ. To map out the available parameter space we evaluate and compile the relevant limits both on sin θ and on g χ from current and near future particle physics experiments, BBN, astrophysics and cosmological considerations. We also show limits on light DM particles upscattered by cosmic rays, which turn out never to be competitive in the model considered here. In our analysis we paid special attention to cosmological constraints which, while requiring certain assumptions, cannot be avoided altogether in a given model. Indeed, they provide quite in general the missing link to translate a variety of constraints on portal models to constraints on the scattering cross section relevant for direct dark matter searches.
JHEP03(2020)118
In our analysis we update and carefully extend previous recent work similar in spirit [45,46,48,49]. Concretely, we re-scale direct detection limits according to the full Q 2 dependence in section 3, and present a genuinely new treatment of cosmic-ray up-scattered DM for such a case -which, as we stress, is applicable also to other scenarios that invoke Q 2 -dependent scattering. We present updated invisible Higgs decay constraints on this specific model (in section 4.1), and add genuinely new estimates of the future LHCb and NA62 sensitivities (see appendix) to our compilation of up-to-date particle physics constraints and projected sensitivities in section 4.2. A significant refinement compared to what is typically done in the literature is further that we consistently implement the cosmological evolution of this scenario in full detail (section 5.1), which we use both for precision calculations of the relic density and a careful evaluation of BBN constraints (going well beyond the standard procedure of simply translating model-independent constraints to the model at hand). We also point out that the self-interaction cross section (section 5.3) has to take into account the (in)distinguishability of external particles and needs to be evaluated beyond the s-wave approximation to avoid the appearance of artificially deep anti-resonances and hence overly weak constraints; correcting for this, we instead find that DM self-interactions generically lead to limits comparable to those resulting from the assumption of DM thermally produced via the freeze-out mechanism.
Overall we find that, almost independently of the dark matter production mechanism, strong bounds on the maximally possible nuclear scattering rate exist for large regions of parameter space. Nevertheless, some regions remain safe from the combination of existing (or expected) constraints from accelerators, astrophysics and cosmology, motivating the development and construction of future direct detection experiments which could explore these regions. This not only requires low thresholds for the recoil energies, but at the same time sensitivities better than what is presently achievable at dark matter masses around 1 GeV. Figure 7. Lifetime of the scalar particle S as a function of its mass, thereby fixing sin θ to the lower bound of the current LHCb sensitivity as shown in figure 3 (blue line). The black dashed lines correspond to lifetimes of 1 ps and 10 ps, thus indicating the borders between the prompt region (τ S < 1 ps), the intermediate region (1 ps < τ S < 10 ps) and the large displacement region (τ S > 10 ps). See text for further details.
for a dataset with collision energy √ s = 7 and 8 TeV and integrated luminosity L 0 = 3 fb −1 . For this analysis, the parameter space of the scalar was divided into (i) a prompt region with the lifetime of the scalar τ S < 1 ps, (ii) an intermediate region with 1 pc < τ S < 10 pc and (iii) a large displacement region for τ S > 10 pc. Background events were expected in the first two regions, while the last region was background free. In figure 7 we show τ S , fixing sin θ to the lower bound of the current sensitivity of the LHCb experiment. We conclude that no background is expected for m S < 3.7 GeV (which is the region of interest to us), while for higher masses we need to consider a non-zero background contribution.
To estimate the sensitivity of a similar analysis in the high-luminosity (HL) phase of the LHC, we assume the total integrated luminosity of LHCb to be L HL = 300 fb −1 and the centre-of-mass energy to be √ s = 13 TeV. The corresponding increase of the number of produced B mesons in the direction of LHCb can be estimated as R = L HL · σ 13 (pp → B + + X) L 0 · σ 8 (pp → B + + X) ≈ 162.2 , (A.1) where σ 13/8 (pp → B + + X) is the production cross section of B + mesons which fly into direction of the LHCb detector for energies 13 and 8 TeV respectively. We estimated these cross sections using FONNL (Fixed Order + Next-to-Leading Logarithms) -a model for calculating the single inclusive heavy quark production cross section, see [149][150][151][152] for details. For the region in which background events are expected, we assume for simplicity that the number of background events also increases by the factor R. We estimate the future sensitivity as For the case of large displacements, while no background events are expected, we need to take into account the probability of the scalar to decay inside the region where displaced JHEP03(2020)118 vertices can be observed, l min ≤ l decay ≤ l max with l min = 3 mm and l max = 0.6 m, [112]. This probability can be written as P decay (θ) = e −l min /l decay (θ) − e −lmax/l decay (θ) , (A. 3) where l decay = cγ S τ S is decay length of the scalar in the lab frame and γ S is the corresponding Lorentz factor. We estimate the average Lorentz factor of the scalar to S be (see appendix C in [114]) where γ S,rest is the Lorentz factor of S in the rest frame of the B-meson. The average energy of the B-mesons in the direction of LHCb we take from FONNL, E B ≈ 80 GeV for both centre-of-mass energies. Taking everything together, we estimate the future sensitivity in this regime as θ 2 future (m S )P decay (θ future (m S )) = 1 R θ 2 current (m S )P decay (θ current (m S )) .
B Estimate of future NA62 sensitivity
In this appendix we briefly describe how we estimate the sensitivity of the NA62 experiment with respect to light scalars produced in K + → π + S. 14 One of the main physics goals of NA62 is the measurement of the rare decay K + → π + νν, allowing for a direct determination of the V td CKM matrix element [7]. The observed final state is a π + plus missing momentum. If the scalar S is sufficiently long-lived to decay outside of the detector it would contribute to the same final state and can therefore be constrained with this search mode. A crucial difference between the expected signal from K + → π + S compared to the SM process K + → π + νν is the distribution of the 'invisible mass', which in the case of decays into S peaks at the mass m S while for the SM process (as well as other SM backgrounds which contribute to this final state) the distribution is rather flat, 15 see e.g. [155]. The number of kaons expected during LHC Run 3 [6] is estimated to be N K 10 13 , which (scaling up the results from [155]) corresponds to about 35 events in the signal region with a rather flat distribution in the missing mass.
To compare this to the expected signal from K + → π + S we have to take into account the corresponding branching ratio as well as the total selection efficiency for this process, which in general will depend on m S . The expected number of events is then N obs S = N K · BR(K + → π + S) · . (B.1) For our analysis we approximate the total efficiency as = 0.3 [156]. The relevant branching ratio is given by (see e.g. [104]) BR(K + → π + S) 1.85 · 10 −3 sin 2 θ 1 − (m S + m π ) 2 m 2 14 For a sensitivity estimate of NA62 to light scalars with different coupling structure see e.g. [153]. 15 Due to large backgrounds from K + → π + π 0 the mass range 130 MeV < mS < 150 MeV should be excluded from the analysis [154].
JHEP03(2020)118
Taking into account that the experimental resolution of the missing mass is about 1/35 of the signal region, we expect about 1 background event from SM processes after all cuts. The 95% CL upper limit on the scalar would then correspond to ∼ 5 events, which is what has been required for our result shown in figure 3.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 17,120.6 | 2020-03-01T00:00:00.000 | [
"Physics"
] |
Recognition and Analysis of Image Patterns Based on Latin Squares by Means of Computational Algebraic Geometry
: With the particular interest of being implemented in cryptography, the recognition and analysis of image patterns based on Latin squares has recently arisen as an efficient new approach for classifying partial Latin squares into isomorphism classes. This paper shows how the use of a Computer Algebra System (CAS) becomes necessary to delve into this aspect. Thus, the recognition and analysis of image patterns based on these combinatorial structures benefits from the use of computational algebraic geometry to determine whether two given partial Latin squares describe the same affine algebraic set. This paper delves into this topic by focusing on the use of a CAS to characterize when two partial Latin squares are either partial transpose or partial isotopic. isotopic
Introduction
An n × n array is said to be a partial Latin square of order n if each one of its cells either is empty or contains an element of a finite set of n symbols so that each symbol occurs at most once in each row and at most once in each column. If there are no empty cells, then the array is called a Latin square. Latin squares play a relevant role in cryptography [1][2][3]. Of particular interest for the aim of this paper, the generation of scramblers in symmetric cryptography by means of encryption-decryption processes having Latin squares as cryptographic keys is remarkable [4][5][6][7][8]. The exponential growth of Latin squares [9][10][11] ensures the robustness of these symmetric-key algorithms against brute force and statistical attacks. In addition, appropriate choices of Latin squares produce effective symmetric-key algorithms with high period growths [12]. In this regard, the distribution of Latin squares into isomorphism classes play a fundamental role. It is so that pseudo-random sequences derived from non-isomorphic Latin squares give rise to certain image patterns [13,14], whose algebraic and geometrical properties enable one to distinguish between fractal and non-fractal Latin squares [15,16].
Distributing (partial) Latin squares into isomorphism classes indeed constitutes a main problem in the theory of (partial) Latin squares. Currently, only the number of isomorphism classes of Latin squares of order n ≤ 11 is known [9][10][11], as well as that of partial Latin squares of order n ≤ 6 [17,18]. To obtain these last values, the computation of reduced Gröbner bases of ideals associated to partial Latin squares has played a relevant role. Such a computation is, however, extremely sensitive to the number of involved variables and the length and degree of the corresponding generators [19][20][21][22]. Thus , although distinct techniques concerning computational algebraic geometry have been implemented since the original work of Bayer [23] for solving the classical problems of counting, enumerating and classifying partial Latin squares [17,18,[24][25][26][27][28][29] and solving related problems such as completing sudokus [30][31][32], their computational cost makes it very difficult to deal with partial Latin squares of high orders.
The study of new invariants concerning partial Latin square isomorphisms has turned out to be an optimal approach to reduce this computational cost [33][34][35]. This paper delves in particular into those invariants that are related to affine algebraic sets associated to the partial Latin squares under consideration. In this regard, let us recall that the affine algebraic set in a multivariate polynomial ring K[{x 1 , . . . , x n }] of a partial Latin square L = (l i,j ) of order n, with set of symbols [n] := {1, . . . , n}, was defined by Falcón, R.M. et al [36] as the set of zeros of the binomial ideal I(L) := x i x j − x l i,j : 1 ≤ i, j, l i,j ≤ n . (1) Isomorphic partial Latin squares give rise to isomorphic affine algebraic sets. Thus, Gröbner bases have played a relevant role for distinguishing, in a computationally fast way, image patterns arisen from non-isomorphic Latin squares. In any case, the study of affine algebraic sets associated to Latin squares is still in the very initial stage. Thus, for instance, their distribution into isomorphism classes is only known for n ≤ 3 [36]. To deal with higher orders, it is necessary to delve into two new ways of classifying partial Latin squares. More specifically, it is necessary to characterize when two partial Latin squares are partial transpose and/or partial isotopic. In both cases, the partial Latin squares under consideration may give rise to the same affine algebraic set.
The paper is organized as follows. Section 2 deals with some preliminary concepts and results on partial Latin squares and computational algebraic geometry that are used throughout the paper. In Section 3, the notion of standard set of image patterns associated to a Latin square is introduced, which may constitute a fast computational way for distinguishing non-isomorphic Latin squares. To this end, the use of a new affine algebraic set associated to each of these image patterns is proposed. Finally, two new ideals are described in Section 4, whose respective affine algebraic sets are uniquely identified with the set of partial Latin squares that are partial transpose of another given partial Latin square, and the set of partial isotopisms between two given partial Latin squares of the same order and weight.
Preliminaries
Let us review some basic concepts and results on partial Latin squares and computational algebraic geometry that are used throughout the paper. We refer the reader to the monographs of [37] and [38] for more details on both topics.
Partial Latin Squares
Let L n be the set of partial Latin squares of order n having the already mentioned set [n] as set of symbols. Every partial Latin square L = (l i,j ) ∈ L n is determined by its set of entries Therefore, the cardinality of this set coincides with the number of non-empty cells within the partial Latin square L. It is termed the weight of L. From here on, let L n;m denote the subset of partial Latin squares in the set L n of weight m.
Let S n denote the symmetric group on the set [n]. Every triple θ = ( f , g, h) ∈ S 3 n preserves the set L n;m . It constitutes an isotopism of partial Latin squares, where the bijections f , g and h correspond, respectively, to a permutation of rows, a permutation of columns and a permutation of symbols of the partial Latin square under consideration. More specifically, the isotopism θ acts on any given partial Latin square L ∈ L n;m by giving rise to its isotopic partial Latin square L θ ∈ L n;m , where then the isotopism θ constitutes an isomorphism. In such a case, the partial Latin squares L and L θ are said to be isomorphic. Throughout this paper, the computation of isotopisms and isomorphisms among partial Latin squares is done by making respective use of the procedures isot and isom of the library pls.lib, available online at http://personales.us.es/raufalgan/LS/pls.lib (accessed on 28 February 2021), for the open Computer Algebra System (CAS) for polynomial computations SINGULAR [39].
Isotopic and isomorphic are equivalence relations among partial Latin squares. The distribution into such classes is known for order n ≤ 11 in the case of dealing with Latin squares [9][10][11] and for order n ≤ 6 in the case of dealing with partial Latin squares [17,18]. Partial transpose are partial isotopic are two other binary relations among partial Latin squares of the same order and weight, whose study is still in teh very initial stage. Although the original definitions were established by Falcón, R.M. et al [36] as equivalence relations, it is not so. In what follows, we particularize both definitions in order to obtain two new equivalence relations among partial Latin squares of the same order and weight. To this end, let us consider two partial Latin squares L 1 and L 2 in L n;m .
We say that L 2 is partial transpose of L 1 if and only if the following two conditions hold.
2.
For each entry (i, j, k) ∈ Ent(L 1 ) \ Ent(L 2 ), we have that (j, i, k) ∈ Ent(L 2 ). Being partial transpose generalizes, therefore, the classical concept of being transpose. Notice that the second condition that we have just described was not explicitly indicated by Falcón, R.M. et al [36]. Nevertheless, as is illustrated in the following example, this condition is mandatory in order to obtain a symmetric relation. both of them in L 4;8 , satisfy the first described condition in order to ensure that L 2 is partial transpose of L 1 . Nevertheless, such a condition is not satisfied for ensuring that L 1 is partial transpose of L 2 , because the entry (2, 4, 3) ∈ Ent(L 1 ) \ Ent(L 2 ) and (4, 2, 3) ∈ Ent(L 2 ). Now, let P ⊆ Ent(L 1 ) ∩ Ent(L 2 ). We say that L 2 is P-partial isotopic to L 1 if there exists an isotopism ( f , g, h) ∈ S 3 n such that In such a case, we say that the triple ( f , g, h) is a P-partial isotopism (a partial isotopism, when there is no place for confusion) from L 1 to L 2 . The just described binary relation of being P-partial isotopic constitutes an equivalence relation among partial Latin squares of the same order and weight, as well as containing the subset P in their respective sets of entries. Further, if two partial Latin squares are isotopic, then they are ∅-partial isotopic. More specifically, every isotopism from L 1 to L 2 is also a partial isotopism between such partial Latin squares.
The subset P ⊆ Ent(L 1 ) ∩ Ent(L 2 ) was not established as an essential part of the original definition of being partial isotopic introduced by Falcón, R.M. et al [36]. Nevertheless, if it is not considered as such, then the resulting binary relation is intransitive. The following example illustrates this fact. In this example, and from now on, to make much clearer the understanding of the subset P associated to any given partial isotopism in S 3 n , we denote L(P) the partial Latin square of order n satisfying that Ent(L(P)) = P. are isotopic (and, hence, ∅-partial isotopic) by means of the isotopism ((34), Id, Id) ∈ S 3 4 , that is, by switching their third and fourth rows. It is readily verified that the Latin square L 2 is P-partial isotopic to To this end, it is enough to consider, for instance, the partial isotopism ((23), Id, Id) ∈ S 3 4 . Nevertheless, as shown in Example 8, no partial isotopism exists from L 1 to L 3 .
The following example illustrates all the previous definitions in case of dealing with partial Latin squares with empty cells. The second condition holds similarly. In addition, one can find a partial isotopism between both partial Latin squares L 1 and L 2 . To this end, it is enough to consider the isotopism θ = ((1423), (1324), Id) ∈ S 3 and the subset P = Ent(L 1 ) ∩ Ent(L 2 ). That is, In particular, Notice that both partial Latin squares L 1 and L 2 are neither transpose nor isotopic of each other.
Let us finish this subsection by focusing on the description of the image patterns based on a given Latin square. To this end, let us recall that every Latin square in the set L n;n 2 constitutes the multiplication table of a quasigroup ([n], ·), where the set [n] is endowed with a binary operation · so that the equations a · x = b and y · a = b have unique solutions for x and y in [n], for all a, b ∈ [n]. Equivalently, the set [n] is endowed with a left-division \ and a right-division /, so that x = a\b in the first equation, and y = b/a in the second one.
Let T = t 1 t 2 . . . t m be a plaintext, with t i ∈ [n], for all i ≤ m. For each positive integer s ≤ n, it is defined [6][7][8] the encrypted string The resulting string can be desencrypted by means of a decryption map D s based on the already mentioned left-division. More specifically, The sequential implementation of the just described encryption may give rise to image patterns with certain fractal properties [13][14][15][16]. More specifically, if S = (s 1 , . . . , s r−1 ) is an (r − 1)-tuple of positive integers such that s i ≤ n, for all i < r, then the r × m image pattern P S,T (L) = (p i,j ) is defined as the r × m array satisfying that, for each positive integer j ≤ m, we have that The image patterns arisen from the set of Latin squares of a given order only depend on the distribution of the latter into isomorphism classes. More specifically, the following result is known. Lemma 1 ([36]). Let L 1 and L 2 be two Latin squares in L n;n 2 that are isomorphic by means of an isomorphism ( f , f , f ) ∈ S 3 n . Let S = (s 1 , . . . , s r−1 ) and f (S) = ( f (s 1 ), . . . , f (s r−1 )) be two (r − 1)-tuples of positive integers, such that s i ≤ n, for all i < r, and let T = t 1 . . . t m and f (T) = f (t 1 ) . . . f (t m ) be two plaintexts, with t i ∈ [n], for all i ≤ m. Then, the r × m image patterns P S,T (L 1 ) = (p i,j ) and P f (S), f (T) (L 2 ) = (p i,j ) coincide up to permutation of symbols. More specifically, p i,j = f (p i,j ), for all positive integers i ≤ r and j ≤ m.
Computational Algebraic Geometry
From here on, let K[X] denote the multivariate polynomial ring over a field K that is defined on a finite set X of n variables. A point P ∈ K n is a zero of a set S ⊆ K[X] if f (P) = 0, for all f ∈ S. The set of all these zeros constitutes an affine algebraic set in K n . It is irreducible if it cannot be decomposed into two nonempty proper affine algebraic sets. The dimension dim(V) of an affine algebraic set V is the maximal number of its irreducible components minus one. Further, two affine algebraic sets V 1 and V 2 in K n are isomorphic if there exists a bijective map φ : V 1 → V 2 such that φ(P) = ( f 1 (P), . . . , f n (P)) and φ −1 (Q) = (g 1 (Q), . . . , g n (Q)), for all (P, , for all i ≤ n. The map φ constitutes an isomorphism from V 1 to V 2 . An isomorphism invariant of affine algebraic sets is any property of the latter that is preserved by isomorphisms.
An ideal of the multivariate polynomial ring K[X] is a subset I ⊆ K[X] such that 0 ∈ I; p + q ∈ I, for all p, q ∈ I; and p · q ∈ I, for all p ∈ I and q ∈ K[X]. It is said to be generated by a set of polynomials {p 1 , . . . , p k } ⊆ K[X] if it is defined as
It is
binomial if all its generators are. Further, it is radical if p ∈ I, for all p ∈ K[X] such that p m ∈ I, for some positive integer m. Finally, it is zero-dimensional if dim(V K (I)) = 0, where V K (I) is the affine algebraic set in K n formed by all the zeros of the polynomials within I. This dimension can be obtained from the reduced Gröbner basis of the ideal I [40][41][42]. Let us recall in this regard that the leading monomial of a polynomial is its largest monomial with respect to a given multiplicative well-ordering whose smallest element is the constant monomial 1. Then, a Gröbner basis of an ideal I is any subset within I whose leading monomials generate the so-called initial ideal, which is generated in turn by all the leading monomials of the non-zero polynomials of I. If the ideal I is zero-dimensional and radical, then the number of monomials that are not contained in its initial ideal coincides with the cardinality of V K (I). Further, a Gröbner basis is reduced if all its polynomials are monic and no monomial of its polynomials is generated by the leading monomials of the rest of polynomials. The reduced Gröbner basis of an ideal is unique. It can always be computed from Buchberger's algorithm [43]. Arisen from this algorithm, one can find the more efficient direct methods described by the algorithms F 4 and F 5 [44,45] and the algorithm slimgb [46].
Throughout this paper, all the computations concerning Gröbner bases are carried out on an Intel Core i7-8750H CPU (6 cores), with a 2.2 GHz processor and 8 GB of RAM, with a maximum running time of less than 1 s. All of them are done by making use of the algorithm slimgb that is implemented in the CAS SINGULAR. As multiplicative wellordering, it has been chosen the degree reverse lexicographical ordering. Finally, all the computations are done on either the field Q of rational numbers or the field C of complex numbers. In the first case, the following result holds. Theorem 1 ([22]). The arithmetic complexity of computing the reduced Gröbner basis of a zero-dimensional ideal I = p 1 , . . . , p m ⊂ Q[{x 1 , . . . , x n }] is bounded above by the value where h i denotes the maximum size of the coefficients of the generator p i , and d i denotes its maximum degree, for all i ≤ m.
The following example focuses on the computation of the affine algebraic set of a partial Latin square, which is described in the Introduction.
Example 4.
Let us consider the partial Latin square L 1 ∈ L 4;8 described in Example 3. To compute the affine algebraic set in the multivariate polynomial ring Q[{x 1 , x 2 , x 3 , x 4 }] of L 1 , we obtain from Definition 1 the binomial ideal I(L 1 ) defined as A reduced Gröbner basis of this binomial ideal is the subset Hence, the ideal I(L 1 ) is zero-dimensional. Its associated affine algebraic set is Being partial transpose and being P-partial isotopic, for some , are two equivalence relations in the set L n;m that give rise to identical affine algebraic sets. Concerning the first of them, the following direct result is known.
Lemma 2 ([36]
). If two partial Latin squares of the same order and weight are partial transpose of each other, then their related affine algebraic sets coincide.
Nevertheless, some assumptions are required for the second equivalence relation. In this regard, let us recall that the binomial ideal I(L) associated to a partial Latin square L ∈ L n determines the following partition of the set [n].
Then, the following result holds.
Example 5. Let us consider again the multivariate polynomial ring
Since the partial Latin squares L 1 and L 2 described in Example 3 are partial transpose of each other, Lemma 2 implies that V Q (I(L 2 )) = V Q (I(L 1 )), which is described in Example 4. Now, let us consider the following three partial Latin squares in L 4;8 .
Proposition 1 also enables us to ensure that V Q (I(L 2 )) = V Q (I(L 1 )), but, here, the reduced Gröbner basis of the binomial ideal I(L 2 ) differs from that one of I(L 1 ). More specifically, the reduced Gröbner basis of I(L 2 ) is the subset Finally, the reduced Gröbner basis of the binomial ideal I(L 2 ) is the subset
Standard Image Patterns Associated to Latin Squares
Lemma 1 establishes the existing relationship among image patterns arisen from Latin squares and the distribution into isomorphism classes of the latter. This section focuses on a particular subset of image patterns, which may enable one to determine, even from a visual way, whether two Latin squares are not isomorphic. In this regard, let m, n, r and s be four positive integers such that s ≤ n. We define the s-standard r × m image pattern associated to a Latin square L ∈ L n;n 2 as P r,m;s (L) := P S,T (L), where S is the constant (r − 1)-tuple (s, . . . , s) and T is the constant plaintext s . . . s of length m. We call it constant if all its entries coincide. In addition, we term the set {P r,m;s : s ∈ [n]} the standard set of r × m image patterns associated to L. From Lemma 1, if the standard sets of r × m image patterns associated to two Latin squares do not coincide up to permutation of symbols, then these Latin squares are not isomorphic. As such, the analysis of standard sets turns out to be of particular interest for distinguishing non-isomorphic Latin squares even from a simple visual way.
To illustrate this fact, let us focus on the standard 90 × 90 image patterns associated to each one of the 35 isomorphism classes in which the set of Latin squares of order n = 4 is distributed. (The case n = 3 was already analyzed by Falcón, R.M. et al [36].) A representative of each one of these classes is described in Figure 1. Their respective standard image patterns are shown in Figure 2. It is formed by four collages in form of 7 × 5 arrays. They were created by means of the commands Colorize and ImageAssemble in WOLFRAM MATHEMATICA [47]. Each standard image pattern is represented as a pixel array so that each symbol is uniquely replaced by a color within a given palette of four colors. Each cell within any of these arrays constitutes the corresponding standard image pattern that is associated to the Latin square described at the same position within Figure 1. The union of the four standard images patterns associated to such a Latin square constitutes its standard set of 90 × 90 image patterns.
These standard sets can be distributed according to the following classification.
Constant standard image patterns.
A simple observation of the monochromatic cells in Figure 2 enables us to determine this type of standard image patterns. Notice that the s-standard image pattern of a Latin square L = (l i,j ) is constant if and only if l s,s = s.
2.
Fractal standard image patterns. From a simple visual inspection, one can observe that some of the cells in Figure 2 have a fractal character. It is the case, for instance, of the 2-standard image pattern associated to the Latin square L 4.1 .
3.
Non-fractal standard image patterns. The remaining cells do not have a clear fractal character. Their spectrum goes from what one may label as a chaotic behavior (see, for instance, the 2-standard image pattern associated to L 4.30 ) to a shadow of fractal behavior (see, for instance, the 2-standard image pattern related to L 4.11 ). In any case, we do not distinguish in this paper the fractal gradation of the image patterns under consideration. 1 2 4 3 3 4 2 1 2 3 1 4 4 1 3 2 1 2 4 3 3 4 2 1 4 1 3 2 2 3 1 4 1 2 4 3 3 4 2 1 4 3 1 2 2 1 3 4 1 3 4 2 2 1 3 4 3 4 2 1 4 2 1 3 1 3 4 2 2 1 3 4 4 2 1 Table 1 shows the values cs i and fs i corresponding, respectively, to the number of constant and fractal standard image patterns within the standard set of 90 × 90 image patterns of the Latin square L 4.i in Figure 1, for every positive integer i ≤ 35. As introduced above, the number of its non-fractal standard image patterns would be, therefore, 4 − cs i − fs i . Notice that the first parameter characterizes the isomorphism classes having L 4.17 and L 4.24 as representatives, which are the only ones containing respectively three and four constant standard image patterns.
In addition, the representative L 4.11 is the only one that is associated to two constant and two non-fractal standard image patterns. The remaining standard sets are not easy to distinguish visually, particularly those ones containing non-fractal standard image patterns. An alternative approach to deal with these cases consists of making use of different techniques concerning computational algebraic geometry [36]. 1 1 3 13 1 0 25 0 0 2 1 3 14 2 1 26 0 0 3 1 0 15 2 2 27 0 0 4 1 1 16 1 0 28 0 4 5 1 3 17 3 0 29 0 0 6 1 3 18 2 2 30 0 0 7 1 3 19 1 1 31 0 4 8 1 3 20 1 0 32 0 4 9 1 3 21 1 3 33 0 4 10 1 0 22 2 2 34 1 3 11 2 0 23 1 3 35 1 3 12 2 1 24 4 0 Let us define the affine algebraic set associated to the s-standard r × m image pattern P r,m;s (L) = (p i,j ) of a Latin square L ∈ L n;n 2 in the multivariate polynomial ring Q[{x 1 , . . . , x n }] as the set of zeros of the binomial ideal From (1) and (2), it is I(P r,m;s (L)) ⊆ I(L) and hence, V Q (I(L)) ⊆ V Q (I(P r,m;s (L))). Example 6. Let us consider the Latin square L 4.12 described in Figure 1 and the multivariate polynomial ring C[{x 1 , x 2 , x 3 , x 4 }]. The reduced Gröbner basis associated to the binomial ideal I(L 4.12 ) is the subset whereas that one associated to the ideal I(P 90,90;4 (L 4.12 )) is the subset Of course, if the standard sets of r × m image patterns associated to two Latin squares of the same order n coincide, up to permutation of symbols, then the multisets formed by the respective cardinalities of each one of the n affine algebraic sets related to their standard image patterns must also coincide. In particular, from Lemma 1, these multisets coincide for any two isomorphic Latin squares. To illustrate these aspects, Table 2 shows all these cardinalities for the standard image patterns described in Figure 2. It is so that there exist ten isomorphism classes of Latin squares of order four that are related to the multiset {2, 2, 2, ∞}, nine classes to {2, 2, 2, 2}, six classes to {3, 3, ∞, ∞}, four classes to {2, 2, ∞, ∞}, two classes to {2, 3, ∞, ∞} and another two classes to {∞, ∞, ∞, ∞}. Moreover, there are two isomorphism classes that are characterized by their respective multisets. Their representatives are the Latin squares L 4.17 and L 4.35 , which are, respectively, associated to the multisets {2, ∞, ∞, ∞} and {5, 5, ∞, ∞}. In addition, notice that the combination of Tables 1 and 2 characterizes the isomorphism class having the Latin square L 4.34 as its representative. Table 2. Cardinalities ( V i,s ) of the affine algebraic set V Q (I(P 90,90;s (L 4.i ))), for all positive integers i ≤ 35 and s ≤ 4.
To facilitate the recognition and analysis of similar standard sets for distinguishing non-isomorphic Latin squares of the same order n, even from a simple visual observation, one may focus on those ones having exactly the same positive number of fractal standard image patterns, as well as the same number of constant standard image patterns and the same multisets of cardinalities of their related affine algebraic sets. Let us illustrate this fact with the case n = 4, by means of the values in Tables 1 and 2 A more interesting case is that one concerning the eleven isomorphism classes whose respective standard sets contain exactly one constant and three fractal standard image patterns. Table 2 partitions them into four disjoint subsets. Two of them have already been characterized by these parameters. They correspond to the Latin squares L 4.34 and L 4. 35 . The other two subsets are the following ones. Figure 2 is not so evident. It is so that all their 2-standard image patterns coincide and a much more detailed observation of their 3-and 4-standard sets is required for ensuring that their standard sets are pairwise distinct.
• A detailed observation is also required for distinguishing visually the standard sets of the Latin squares L 4.12 and L 4.14 , both of them containing exactly two constant and one fractal standard image pattern. More specifically, it may be checked (either visually or by making use of Definition 2) that the second row of the fractal standard image pattern of L 4.12 contains all the four colors or symbols under consideration, whereas that one of L 4.14 only contains two of them.
None of the standard sets of the remaining ten isomorphism classes contains fractal standard image patterns, which makes much more difficult their visual distinction. According to their respective parameters, they can be partitioned into the following two sets. A possible approach to analyze the non-fractal standard image patterns of both subsets is reducing their dimension, which is equivalent to zoom in to the left upper corner of the original standard image patterns. Based on this approach, it is simply verified from the results in Figures 3 and 4 that the standard sets of 3 × 3 image patterns associated to these two subsets are pairwise distinct, even allowing a possible permutation of symbols.
A Computational Algebraic Geometry Approach to Deal with Being Either Partial Transpose or Partial Isotopic
We show in Section 3 how the computation of isomorphism invariants concerning affine algebraic sets based on a set of Latin squares plays a fundamental role in the recognition and analysis of their related image patterns. Apart from these invariants, the existence of certain equivalence relations among partial Latin squares of the same order and weight is also known [36], which give rise to the same or isomorphic affine algebraic sets. It is the case of being partial transpose and being P-partial isotopic, for some subset P ⊂ [n] × [n] × [n] (see Lemma 2 and Proposition 1). Let us finish this paper by showing how computational algebraic geometry is also an interesting approach to deal with both equivalence relations. To this end, let us introduce a pair of ideals within a multivariate polynomial ring, whose respective affine algebraic sets are respectively identified with the set of partial Latin squares that are partial transpose of another given partial Latin square, and the set of partial isotopisms between two partial Latin squares.
Firstly, for each positive integer n, we consider the set of n 3 variables Then, for each positive integer m ≤ n 2 , it is known [18] (see also [27,29] for a pair of first approaches in this regard) that the set L n;m is uniquely identified with the set of zeros of the affine algebraic set of the following ideal in Q[X PT n ].
Let us recall here that the sum of two ideals I and J is the ideal I + J = {i + j : i ∈ I, j ∈ J}. Each addend constitutes a subideal of the resulting ideal. Hence, our ideal I n is the sum of four subideals. The first one implies that any zero of I n is of the form (a 111 , . . . , a nnn ) ∈ {0, 1} n 3 . The remaining subideals imply that this zero is uniquely identified with a partial Latin square L = (l i,j ) ∈ L n;m such that l i,j = k ∈ [n] if and only if a ijk = 1. Now, for each partial Latin square L ∈ L n;m , let us define the following ideal in the multivariate polynomial ring Q[X PT n ].
Lemma 3. The set of partial Latin squares that are partial transpose to a partial Latin square L ∈ L n;m is uniquely identified with the affine algebraic set of the ideal I PT (L).
Proof. Let (a 111 , . . . , a nnn ) ∈ {0, 1} n 3 be a zero of the ideal I PT (L). In particular, it must be a zero of the ideal I n;m and, hence, it is uniquely identified with a partial Latin square L ∈ L n;m such that (i, j, k) ∈ Ent(L ) if and only if a ijk = 1. From the subideal we have that, if (i, j, k) ∈ Ent(L), then either a ijk = 1 or a jik = 1. As a consequence, if (i, j, k) ∈ Ent(L) \ Ent(L ), then (j, i, k) ∈ Ent(L ). Now, let (i, j, k) ∈ Ent(L ) \ Ent(L). In particular, it must be a ijk = 1. If (j, i, k) ∈ Ent(L), then the last subideal describing I PT (L) implies that a ijk = 0, which is a contradiction. Hence, (j, i, k) ∈ Ent(L). Therefore, the partial Latin squares L and L are partial transpose of each other. The reduced Gröbner basis of the ideal I PT (L) ⊂ Q[X PT 3 ] is the subset Hence, the affine algebraic set of the ideal I PT (L) is formed by four points that are uniquely associated to L and the following three partial Latin squares. From Lemma 3, computational algebraic geometry can be used for distributing partial Latin squares of the same order n according to the equivalence relation of being partial transpose. The following result establishes the computational cost that is required to this end in case of being n ≥ 2. (The case n = 1 is trivial.) Theorem 2. Let L ∈ L n , with n ≥ 2. The arithmetic complexity of computing the reduced Gröbner basis of the ideal I PT (L) over the field Q is bounded above by where Thus, the result holds from Theorem 1 and the generators of this ideal, whose coefficients have all of them size one. To see it, Table 3 shows the maximum degree of each one of these generators, together with the number of generators of each type. Then, the result follows because, from Theorem 1, the required arithmetic complexity is bounded above by the maximum value between n 3 3n 3 − 3n 2 + 9n + |Ent(L)| n 3 + 2 n 3 + (α 1 + 2α 2 + 1)(n 3 + 1) and 2 3n 3 − 3n 2 + 9n + |Ent(L)| + α 1 + 2α 2 + 1 3n 3 − 3n 2 + 9n + |Ent(L)| + α 1 + 2α 2 + 1 n 3 . Table 3. Study of the generators of the ideal I PT (L).
Generator Type Maximum Degree Number of Generators
A computational algebraic geometry approach can also be described in the case of dealing with the equivalence relation of being P-partial isotopic, for some subset P ⊂ [n] × [n] × [n]. It follows similarly to the approach concerning the equivalence relation of being isotopic, which was described in (Theorem 13, [18]). To this end, for each positive integer n, let us consider the set of 3n 2 variables X PI n := x ij , y ij , z ij : 1 ≤ i, j ≤ n . Then, for each pair of partial Latin squares L 1 = (l i,j ) and L 2 = (l i,j ) in the set L n;m , let us define the following ideal in the multivariate polynomial ring Q[X PI n ]. Proof. Let us suppose the existence of a zero (a 11 , . . . , a nn , b 11 , . . . , b nn , c 11 , . . . , c nn ) ∈ {0, 1} 3n 2 of the ideal I PI (L 1 , L 2 ). The first three subideals describing this ideal imply that this zero is uniquely related to an isotopism ( f , g, h) ∈ S 3 n such that, for each pair of positive integers i, j ≤ n, the following assertions hold. The fourth subideal implies that this isotopism constitutes a one-to-one map from Ent(L 1 ) \ Ent(L 2 ) to Ent(L 2 ). The fifth one implies that, if the cell (i, j) is empty in L 1 but not in L 2 , then the former cannot be mapped to a non-empty cell in L 2 . Further, the sixth subideal implies that, if the cell (i, j) is empty in both L 1 and L 2 , then it cannot be mapped to a non-empty cell in L 2 . Finally, the last subideal implies that, if the cell (i, j) contains distinct symbols in L 1 and L 2 , then it cannot be mapped to an empty cell in L 2 . Under such assumptions, it is readily verified that the zero under consideration is uniquely identified with a P-partial isotopism from L 1 to L 2 , where P ⊆ Ent(L 1 ) ∩ Ent(L 2 ). L 3 ). Thus, the related affine algebraic set is empty and hence, no partial isotopism exists between L 1 and L 3 . Hence, the affine algebraic set of the ideal I PI (L 1 , L 2 ) is formed by two points that are uniquely associated to the isotopisms ((1423), (1324), (12)) and ((1423), (1324), Id) in S 3 4 . Both of them constitute P-partial isotopisms from L 1 to L 2 , where P = Ent(L 1 ) ∩ Ent(L 2 ).
From Lemma 4, computational algebraic geometry can be used for distributing partial Latin squares according to the equivalence relation of being P-partial isotopic, for some subset P ⊂ [n] × [n] × [n]. The following result establishes the computational cost that is required to this end in case of being n > 2. (The case n = 1 is trivial.) Theorem 3. Let L 1 and L 2 be two partial Latin squares in L n , with n ≥ 2. The arithmetic complexity of computing the reduced Gröbner basis of the ideal I PI (L 1 , L 2 ) over the field Q is bounded above by where Proof. The ideal I PI (L 1 , L 2 ) is zero-dimensional, because V Q (I PI (L)) ⊂ {0, 1} 3n 2 . Then, similarly to the proof of Theorem 2, the result holds from Theorem 1 and the generators of this ideal, whose coefficients have all of them size one. To see it, Table 4 shows the maximum degree of each one of these generators, together with the number of generators of each type. Then, the result holds because, from Theorem 2, the required arithmetic complexity is bounded above by the maximum value between 3n 2 β 1 3n 2 + 3 3n 2 + (3n 2 + β 2 + β 3 + β 4 ) 3n 2 + 2 3n 2 + 6n(3n 2 + 1) . and 3β 1 + 2(3n 2 + β 2 + β 3 + β 4 ) + 6n Table 4. Study of the generators of the ideal I PI (L 1 , L 2 ).
Conclusions and Further Work
In this paper, we show the relevant role that computational algebraic geometry plays in the recognition and analysis of image patterns associated to Latin squares. To this end, we introduce the concepts of standard image pattern and standard set of a given Latin square. Moreover, a new affine algebraic set associated to any such image pattern is described, whose isomorphism invariants can be used for distinguishing different standard sets and hence, for determining in a computationally fast way (even visually) whether two Latin squares are not isomorphic.
The main limitation of the methodology here proposed is the exponential complexity for computing Gröbner bases, which is highly dependent on the number of underlying variables. This number coincides in our case with the order of the Latin square under consideration. Due to it, this limitation is not an inconvenience at all for dealing with the smallest orders for which no results on the distribution of Latin squares into isomorphism classes is known (n ≥ 12). In fact, this computational approach turns out to be an efficient way for dealing with Latin squares of much higher orders. To illustrate this fact, let us consider the Latin square of order 256 that is represented by colors in Figure 5. It was randomly constructed by following Algorithm 1 in [48], which gives rise to random Latin squares with possible implementation in cryptography. Notice in any case that every Latin square generated in this way is isotopic to a diagonally cyclic Latin square [49]. Figure 6 shows the running time required by an Intel Core i7-8750H CPU (6 cores), with a 2.2 GHz processor and 8 GB of RAM for computing both the reduced Gröbner basis of the binomial ideal associated to the s-standard 90 × 90 image pattern of this Latin square described in Figure 5, for every positive integer s ≤ 256, together with the cardinality of its related affine algebraic set. The maximum running time was 3.45 s, which is reached for s = 101. It is remarkable that the methodology here described may be particularized in order to make more efficient the computation of group isomorphisms. Let us recall in this regard that every associative quasigroup constitutes a group. To illustrate this fact, let us consider both the dihedral and the abelian groups of order six, whose respective multiplication tables are the Latin squares Their respective standard sets of 90 × 90 image patterns are shown in the 2 × 6 collage of Figure 7. Both standard sets are formed by a constant and five fractal standard image patterns. It is readily verified from a visual way that the standard set of the dihedral group (top row of the collage) does not coincide with the standard set of the abelian group (bottom row of the collage), even allowing a possible permutation of symbols. In this simple way, we may ensure that these two groups are not isomorphic. A similar conclusion arises from the computation of the reduced Gröbner bases concerning both types of affine algebraic sets associated to the dihedral and the abelian group of order six. From this computation, we have that |V C (I(D 6 ))| = 3 and |V C (I(Z 6 ))| = 7.
Further, the methodology here described can be generalized for other types of arrays non-subjected to the Latin square condition. In this regard, it would be interesting to delve, for instance, into the study of standard sets of image patterns associated to (partial) semigroups or, more generally, to (partial) magmas. Even if they may not be endowed with a left or right division (as quasigroups are), their multiplication tables enable us to define r × m image patterns based on these algebraic structures by making use to this end of the corresponding conditions described in (2) (see [50], for a first approach in this regard in case of dealing with magmas).
These conditions may be taken into account to deal also with arrays related to other types of mathematical structures, not only algebraic ones. To illustrate this aspect, let us focus on the classical problem in graph theory of determining whether two given graphs are isomorphic or not. Every adjacency matrix of a given simple graph of order n is a binary symmetric n × n array with main diagonal of zeros. It may be considered the multiplication table of a finite magma of set of symbols {0, 1, . . . , n − 1} from which one could define r × m image patterns satisfying the corresponding conditions in (2). Then, standard sets of s-standard image patterns, with s ∈ {0, 1 . . . , n − 1}, could be defined similarly to those ones described in Section 3. In this way, the standard set of two isomorphic regular graphs would always coincide, up to permutations of symbols. This fact may therefore be used for distinguishing non-isomorphic regular graphs, even from a visual way. Thus, for instance, the following two arrays constitute respective incidence matrices of the complete graph K 4 and the cycle C 4 . The standard sets of the 90 × 90 image patterns associated to both graphs are shown in the 2 × 4 collage of Figure 8. Notice that the standard set of the complete graph K 4 (top row of the collage) is formed by one constant and three fractal image patterns, whereas that one of the cycle C 4 is formed by one constant, two fractal and one almost constant (except for its first row, the 3-standard image pattern is monochromatic) image patterns. Hence, these two regular graphs are not isomorphic. These examples illustrate the relevance that standard sets of image patterns may have for distributing distinct types of algebraic and combinatorial structures into isomorphism classes. A much more comprehensive study dealing with their recognition and analysis is required in any case. It is established as further work. Similar to the methodology here implemented, computational algebraic geometry may be an interesting approach to this end. Furthermore, notice that this paper has not dealt with the fractal gradation of the image patterns under consideration. A comprehensive analysis of their fractal dimensions is of particular interest in order to improve the efficiency of this computational approach.
This paper also focuses on the possible use of computational algebraic geometry for dealing with the distribution of partial Latin squares according to the equivalence relations of being either partial transpose or P-partial isotopic, for some subset P ⊂ [n] × [n] × [n]. An exhaustive enumeration of these classes is also established as further work. Concerning the distribution into P-partial isotopism classes, it is required to delve into the study of P-partial autotopisms (that is, P-partial isotopisms from a partial Latin square to itself) and make use of the Orbit-Stabilizer Theorem in a similar way to the already studied distribution of partial Latin squares into isotopism classes [18].
Again, the main limitation of the methodology here introduced is the high dependence on the number of variables required by each one of the affine algebraic sets under consideration. To see it, it has been made use of the already mentioned Algorithm 1 described in [48] in order to obtain random Latin squares on which the computational efficiency of using Gröbner bases has been checked for determining both the set of Latin squares that are partial transpose of another given Latin square, and the set of partial autotopisms of a Latin square. Figure 9 shows the running time required by our computer system for computing both the reduced Gröbner basis of the corresponding ideal, together with the cardinality of its related affine algebraic set. Notice that only the relationship of being partial transpose seems to be useful for dealing by itself with the smallest orders for which no results on the distribution of Latin squares into isomorphism classes is known (n ≥ 12). It is not the case of the equivalence relation of being P-partial isotopic, for some subset P ⊂ [n] × [n] × [n], whose exponential growth starts visibly much before, even from order n = 5. It agrees with the fact that this equivalence relation comprehends that one of being isotopic, for which previous studies [18] have already revealed the advantages of using some extra Latin square isomorphism invariant for reducing the computational cost of an analogous algebraic geometry approach. Similar studies concerning this new equivalence relation are, therefore, required and established as further work. In this regard, the joint use of the Latin square isomorphism invariants recently introduced in [34,35] may be of particular interest. It is also interesting to illustrate the computational efficiency of these two approaches in case of dealing with partial Latin squares with empty cells, whose distribution into isomorphism classes is only known [17,18] for order n ≤ 6. Firstly, let us focus on the use of Gröbner bases for determining the set of partial Latin squares that are partial transpose of another given partial Latin square. To this end, a partial Latin square in the set L 10;m was randomly constructed, for each positive integer m ≤ 100, by means of Method (A) described in [34]. The latter consists of adding sequentially a set of feasible random entries to an empty partial Latin square until the desired weight is reached. Figure 10 shows the running time required by our computer system for determining both the reduced Gröbner basis of the corresponding ideals and the cardinalities of their related affine algebraic sets. The maximum running time was 13.99 s, which is reached for m = 50. It is remarkable the slightly decreasing tendency of this running time with respect to the weight of the partial Latin square under consideration. Figure 10. Running time (in seconds) required for computing the cardinality of the set V Q (I PT (L)), for random partial Latin squares L ∈ L 10;m , with 1 ≤ m ≤ 100. Now, to illustrate the computational efficiency of using Gröbner bases for determining the set of P-partial isotopisms between two given partial Latin squares, for some subset P ⊂ [n] × [n] × [n], the mentioned method of adding random entries has been used to construct a pair of random partial Latin squares in the set L 7;m , for each positive integer m ≤ 49. (Recall that n = 7 is the first order for which no result on the distribution into isotopism classes is known.) Figure 11 shows the running time required by our computer system for determining both the reduced Gröbner basis of the corresponding ideals and the cardinalities of their related affine algebraic sets. The maximum running time has been 102.43 s, which is reached for m = 49 (the Latin square case). The fast exponential growth of running time is remarkable for dense partial Latin squares. Partial Latin squares with either only one filled cell of with more or less the same number of empty and filled cells seem also to require more running time. All these cases turned out to be related to a high number of partial isotopisms. In any case, a much more comprehensive computational analysis concerning orders, weights and particular isomorphism classes of partial Latin squares is required for distinguishing potential bottlenecks in the computation of related Gröbner bases. Figure 11. Running time (in seconds) required for computing the cardinality of the set V Q (I PI (L, L)), for random partial Latin squares L ∈ L 7;m , with 1 ≤ m ≤ 49.
Let us finish this section by establishing the following open problems to deal also with as further work on this topic.
Problem 1.
What are the minimum and maximum numbers of partial Latin squares that are partial transpose of a partial Latin square in L n;m ? Problem 2. What are the minimum and the maximum numbers of distinct partial Latin squares for which there is at least one P-partial isotopism to a partial Latin square in L n;m , for some subset P ⊂ [n] × [n] × [n]? Problem 3. What is the maximum cardinality of a subset P ⊂ [n] × [n] × [n] for which a P-partial isotopism exists between two distinct partial Latin squares in L n;m ?
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable. | 12,352.2 | 2021-03-21T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Wigner Function Non-Classicality Induced in a Charge Qubit Interacting with a Dissipative Field Cavity
We explore a superconducting charge qubit interacting with a dissipative microwave cavity field. Wigner distribution and its non-classicality are investigated analytically under the effects of the qubit–cavity interaction, the qubit–cavity detuning, and the dissipation. As the microwave cavity field is initially in an even coherent state, we investigate the non-classicality of the Wigner distributions. Partially and maximally frozen entanglement are produced by the qubit–cavity interaction, depending on detuning and cavity dissipation. It is found that the amplitudes and frequency of the Wigner distribution can be controlled by the phase space parameters, the qubit–cavity interaction and the detuning, as well as by the dissipation. The cavity dissipation reduces the non-classicality; this process can be accelerated by the detuning.
Introduction
The decoherence and dissipation issues taint the dynamics of every quantum system. These effects reduce, distort or destroy the quantum phenomena, [1,2] such as: quantum coherence, squeezing, and quantum correlation. Moreover, decoherence is the most significant characteristic of an open quantum system. This quantum effect destroys the nonclassical correlations. Decoherence affects the entangled states and transforms them to mixed states [3]. Decoherence usually occurs as the system's constituents interact with the environment [4,5]. The effects of decoherence and dissipations on the dynamical features have been investigated in various quantum systems [6][7][8][9]. The interaction between the quantum systems and the environment usually leads to a decoherence/dissipation process, which reduces the quantum phenomena [10].
The decoherence and the dissipation effects might lead to spontaneous symmetry breaking or phase transition phenomena [11][12][13][14][15], which may occur in several dissipative quantum systems [16][17][18][19]. These effects erase the quantum information resources. In general, the decoherence and the dissipation effects can be investigated by various types of master equation [20][21][22][23][24] which can be employed to analyze the quantum dynamics of the systems.
To characterize quantum states and present valuable quantum information about the system states, quasi-probability distributions were introduced [25]. Wigner distribu-tion (WD) is the first quasi-probability distribution that was introduced to determine the quantum corrections [26]. WD is an important tool to explore the non-classicality via its negative values [27][28][29][30][31][32]. There is a link between the WD negativity and the entanglement [33][34][35][36]; however, the negativity of the Wigner distribution is not sufficient to guaranty the non-classicality [37]. The negativity of the generalized Wigner function was used as an entanglement witness for hybrid bipartite states [38]. The non-Gaussinity of the Wigner function could be detected by its representation in the phase space. Based on the link between the WD negativity and the entanglement entropy, the non-gaussian nature and entanglement of spontaneous parametric nondegenerate triple-photon generation were investigated [39,40]. Experimentally, the Wigner function of a single photon is used to demonstrate non-classicality properties specific to non-Gaussian states [41]. It is found that a negative value of the Wigner function is a sufficient condition for non-Gaussinity of two-photon states [42].
Superconducting (SC) qubits or two-level systems of Josephson junctions are promising candidates for realizing quantum computation [43][44][45][46]. Recently, researchers have achieved significant progress in conceiving the quantum regime in these systems. It was reported that these qubits can be strongly coupled to a single-microwave photon [47,48]. Superconducting circuits present several potential applications, such as: realizing Fock states [49], implementing quantum algorithms [50], encoding [51], and realizing entanglement [52].
The decoherence and the dissipation effects on the WD nonclassicality were investigated in [53,54]. These studies were limited [55][56][57]. The WD non-classicality was explored for a cavity QED containing a high optical nonlinear medium and a quantum well [55], for weak dissipation rates. In [56], the effect of intrinsic decoherence on the WD dynamics of a cavity interacting resonantly with two coupled qubits was investigated. Under the phase-cavity-damping effect, the WD nonclassicality of a cavity field interacting with a qubit with a specific value of applied magnetic flux (half of the applied flux quantum) was studied [57].
In this paper, we explore the Wigner distribution non-classicality for a microwave cavity field interacting with a superconducting charge qubit. The consider system is an open quantum system interacting with the environment through cavity dissipation (the system energy is not conserved). The method used in this paper can be used to investigate the dynamics of quantum information resources of the Wigner distribution or a quasiprobability distribution in other qubit-cavity systems.
The paper is organized as follows: In Section 2, we present the physical scheme for a qubit-cavity system with cavity-damping effect. The dynamics and properties of the WD will be investigated in Section 3. Finally, in Section 4, we conclude our results.
Dissipative Qubit-Cavity System
We consider a charged qubit system that is described by a Cooper-pair box, containing two identical Josephson junctions, and placed into a microwave cavity. The general Hamiltonian for this system is given by [48,58,59] where φ = πΦ c Φ 0 and ω represent the frequency cavity field that has the creation operators to whichψ † , and E z denote the qubit charging energy. E J is the coupling-energy Josephson junctions. Φ c represents the applied classical magnetic field and Φ 0 is the applied flux quantum.σ z andσ x are the Pauli matrix operators, which are represented in the basis formed by the excited |e and ground |g states. The constant η has units of the magnetic flux and depends on the geometrical design of the SC cavity.
Here, the Cooper-pair box works as a qubit in the microwave region where (1) the Cooper-pair box is in the middle of the microwave cavity. (2) The microwave cavity field is not too strong, such that all higher orders of πη Φ 0 are neglected except the first order. (3) We use the following operators: Consequently, the Hamiltonian of Equation (2) can be written aŝ The operatorsΛ k (k = x, y, z) satisfy the following properties If the charge qubit-cavity system is interacting with the surrounding environment, different types of decoherence and dissipation affect the qubit-cavity system. To study the effect of the cavity dissipation on the time-dependent density matrix of the system, we consider the master equation, where γ represent dissipation. To solve Equation (5), we used two canonical transformations. We transform the states |e and |g to the states |1 and |0 , respectively, [60] as: where By using the above transforms and the rotating wave approximation (Ĥ andρ(t) changes toĤ χ andR(t), respectively), Equation (5) is rewritten as master equation, withĤ represents the qubit frequency, which shifts the atomic energy levels to ± 1 The operatorsσ s in terms the rotating operatorsχ s are given byσ After that, the second canonical transformation Z(t) = e iĤt R(t)e −iĤt (that changesR(t) to Z(t)) is used with the secular approximation and the dressed-states (DS) method [61,62] for the case of a high−Q cavity. In the DS method, the microwave cavity field operators are rewritten in terms of the eigenstates' complete set of the HamiltonianĤ χ , and the oscillatory terms will be neglected. The eigenstates and eigenvalue of Hamiltonian:Ĥ χ are given by |ϕ ± n = a ± n |1, n ± a ∓ n |0, n + 1 (n = 0, 1, 2, ...), with a ± n = 1 The dynamics of the density matrix Z(t) is given bẏ The off-diagonal elements (m = n) of the matrix Z(t) are given by while the diagonal elements of the Z(t) satisfy the differential equationṡ where Here, we assumed that the SC-qubit initial density matrix is the density matrix |1 1|, while the cavity is considered initially in an even, coherent state, with the photon distribution function p n , Therefore, in the space staes {|1 , |0 }, the density matrix R = [∑ mn R mn ij ], R mn ij are given by [a +2 n x n + a −2 n y n + 2p * n p n a +2 n a +2 n e −γ(2n+1)t cos(2η n t)]|m n|, m = n.
Wigner Distribution (WD)
The phase space Wigner distribution for the quantum stateρ(t) is defined by [25,63,64]: µ = p + iq is the parameter of the intensity of the coherent field |µ, n = e p(ψ + −ψ)+iq(ψ + +ψ) |n . WD is a good indicator of the phase space information and non-classicality of a quantum state, based on its density matrix. For the reduced density matrix of the cavity field, ρ f = ∑ mn ρ f mn , the WD is given by [25,63,64] L m−n n (p 2 + q 2 ) is the associated Laguerre formula. The WD positivity is an indicator of the classicality and the minimization uncertainty, while the WD negativity indicates the nonclassicality [56,57]. In Figures 1-6, the WD W(p + iq) and its partial distributions (as: W(p), W(q) and W(t)) are plotted to display the effects of the qubit-cavity interaction and the dissipation in the resonance and off-resonance cases. Figure 1a, displays the behavior of W(p + iq) when the microwave field cavity is initially in an even coherent state, 1 A [|α + | − α ], in the phase space: p ∈ [−2π, 2π] and q ∈ [0, π]. The WD has symmetrical interference peaks and bottoms around the two original peaks, which their heights and depths represent the positive and negative values of the WD. The interference peaks and bottoms in the behavior of the WD is due to the superposition of the even coherent state. The classicality of the positive parts, and the non-classicality of the negative parts of the WD are clearly distinguishable and are a natural signature of the properties of the initial, even coherent state. To investigate the time evolution of the WD, we illustrate it at different times. Based on the negativity entanglement between the SC-qubit and the coherent cavity field, the WD will be shown at the times of a partial and maximal qubit-cavity entanglement. The simplest definition of the negativity entanglement N(t) is the sum of negative eigenvalues of the partial transposition matrix for the qubit-cavity density matrix [65]. The qubit-cavity system has a maximally entangled state when N(t) = 0.5, and is in a disentangled state for N(t) = 0. Otherwise, the system has a partial entanglement. Figure 1b, shows the dynamics of the negativity entanglement N(t) under the effect of the unitary qubit-cavity interaction (solid curve), the detuning (dashed curve) and the dissipation (dash-dot curve). By starting the qubit-cavity interaction, the negativity grows and oscillates to show the generated partial and maximal entanglement between the charge-qubit and the coherent cavity field. In some time intervals, the negativity stabilizes into maximal entanglement, i.e., the qubit-cavity entanglement, which may be frozen in these intervals (referring to the phenomenon of frozen maximal entanglement). The dashed curve shows that the non-zero detuning leads to the reduction in the amplitudes and minima of the negativity as well as to the increase in the frozen negativity entanglement time windows. Dash-dot curve illustrates the effect of the dissipation that deteriorates the generated qubit-cavity entanglement, which completely vanishes after a particular time. The charge-qubit and the coherent cavity field are then in a disentangled state. In Figure 2, the Wigner distribution W(p + iq) is shown in the region p ∈ [0, 2π] and q ∈ [0, 1.5π] for two different normalized times, λt = π in (a), at which the qubit-cavity state is in a maximally entangled state, and λt = 2.069π in (b), which corresponds to a partial entanglement. We note that the qubit-cavity interaction leads to notable changes in the distribution of the positive and the negative regions of the WD. The distribution amplitudes of the symmetrical interference peaks and bottoms depend on the considered time λt. For the case λt = π, the main interference peaks and bottoms are around p = ±π. While for the case λt = 2.069π, the centers of the main interference peaks and bottoms are at the axis p = 0. Figure 3. The WD at the scaled times λt = π in (a) and at λt = 2.069π in (b) for α = 4, γ = 0 and the off-resonant case δ = 6λ. Figure 3, exhibits the effect of the detuning between the SC-qubit and the coherent cavity field on the WD W(p + iq) for δ = 6λ. For this off-resonance case, the symmetric distribution of the interference peaks and bottoms disappears. The dependence of the peak and bottom distribution (their amplitudes, places, interference and frequency) and on the phase space parameters p and q is affected by the detuning. By comparing the resonance and off-resonance cases, we find that the increase in the detuning leads to the reduction in the amplitudes, interference and frequency of the peaks as well as the bottoms of the Wigner distribution. The effect of the coupling of the surrounding environment is shown in Figure 4 for the same parameter set of Figure 2b, but with non-zero damping values γ. We note that the increase in the dissipation leads to the reduction in the heights and depths of the peaks and bottoms of the symmetric WD. For large value of the dissipation γ = 0.05, the classicality (positive parts) and the non-classicality (negative parts) of the WD are approximately disappeared. We can deduce that the dissipation reduces the positive and the negative regions of the Wigner distribution. Figures 5 and 6 illustrates the effects of the dissipation and the qubit-field detuning on the dynamics of the non-classicality of the W(t) for the fixed point in the (p, q)-phase space, µ max = (p, q) = (0.009296π, −0.06127π), which corresponds to the largest positive value of the initial WD (see Figure 1a). For the resonance case δ = 0 without the dissipation effect γ = 0.0, the Wigner distribution oscillates between its positive and negative values, showing that the qubit-cavity interaction generates classicality and non-classicality information. The Wigner distribution oscillates. It also illustrates collapses and revival phenomena. In the collapse intervals (W(t) = 0), the WD has no classical or quantum information. By comparing the results of the negativity entanglement N(t) and the time evolution of the negativity of W(t), we observe that they have similar dynamical behavior, where: (1) The frozen maximal entanglement intervals N(t) = 0.5 corresponds to the collapse intervals of the WD W(t) = 0. (2) The minima of the oscillatory behaviors of the N(t) and W(t) occur at the same times. The relationship between the negativity entanglement and the negativity of the WD confirms that the WD can be an indicator of the entanglement. Dashed and dash-dot curves show the effect of the dissipation on the dynamics of the WD W(t). The amplitudes of the oscillations are reduced by the enhancement of the dissipation; therefore, the classical and quantum information of the Wigner distribution is completely erased. In Figure 5b, the dynamics of the largest positive value of the WD is shown for the off-resonance case δ = 6λ. We note that the detuning between the charge-qubit and the coherent cavity field enhances the oscillation frequency and the non-classicality of the WD. We also observe collapse intervals of the WD (W(t) = 0). In addition, the detuning accelerates the effect of the dissipation, i.e., it accelerates the erasing of the classical and quantum information of the WD. The non-classicality dynamics of the W(t) for the fixed point in the phase space µ min = 0, which corresponds to the largest negative-value for the initial WD is displayed in Figure 6a. For the resonance case δ = 0, we have the same behavior as the previous case of Figure 5a. While for the off-resonance case δ = 6λ, we observe that the detuning leads to a downshift in the average of the Wigner distribution, from W(t) = 0 to W(t) = −0.2. This means that the detuning increases the non-classicality of the Wigner distribution, and accelerates the erasing of the its classical and quantum information due to the dissipation.
Conclusions
In this contribution, we have analytically analyzed the entanglement and the nonclassicality for a superconducting Cooper-pair box that contains two identical Josephson junctions, interacting with an open microwave cavity field. Our investigation is based on the effects of the qubit-cavity interaction, the resonance/off-resonance case and the coupling to the external environment. When the microwave cavity field is initially in an even coherent state, the link between the negativity entanglement and the non-classicality of the Wigner function is investigated. Without the dissipation effect, the negativity oscillates and presents frozen maximal entanglement phenomenon, which are affected by the detuning and reduced by the dissipation. The dependence of the amplitudes, interference and frequency of the Wigner distribution on the phase space parameters present notable changes due to the qubit-cavity interaction, the detuning and the cavity damping. The amplitudes, interference and frequency of the Wigner distribution crucially depend on the increase in the detuning. The detuning reshapes the non-classicality dynamics. Furthermore, it speeds up the erasure of the classical and quantum correlation of the Wigner distribution. | 3,960.6 | 2021-05-04T00:00:00.000 | [
"Physics"
] |
Transient-axial-chirality controlled asymmetric rhodium-carbene C(sp2)-H functionalization for the synthesis of chiral fluorenes
In catalytic asymmetric reactions, the formation of chiral molecules generally relies on a direct chirality transfer (point or axial chirality) from a chiral catalyst to products in the stereo-determining step. Herein, we disclose a transient-axial-chirality transfer strategy to achieve asymmetric reaction. This method relies on transferring point chirality from the catalyst to a dirhodium carbene intermediate with axial chirality, namely a transient-axial-chirality since this species is an intermediate of the reaction. The transient chirality is then transferred to the final product by C(sp2)-H functionalization reaction with exceptionally high enantioselectivity. We also generalize this strategy for the asymmetric cascade reaction involving dual carbene/alkyne metathesis (CAM), a transition-metal-catalyzed method to access chiral 9-aryl fluorene frameworks in high yields with up to 99% ee. Detailed DFT calculations shed light on the mode of the transient-axial-chirality transfer and the detailed mechanism of the CAM reaction.
General Information
All reactions were performed in 10 ml oven-dried glassware under atmosphere of argon.
Analytical thin-layer chromatography was performed using glass plates pre-coated with 200-300 mesh silica gel impregnated with a fluorescent indicator (254 nm). Flash column chromatography was performed using silica gel (300-400 mesh). 1 H NMR and 13C NMR spectra were recorded in CDCl3 or DMSO-6d on a 400 MHz spectrometer; chemical shifts are reported in ppm with the solvent signals as reference, and coupling constants (J) are given in Hertz. The peak information is described as: br = broad, s = singlet, d = doublet, t = triplet, q = quartet, m = multiplet, comp = composite. Enantioselectivity was determined on HPLC using Chiralpak IA-3 and IB-3 column.
Synthesis of S-6 6 : To a 50-mL oven-dried flask containing a magnetic stirring bar and compound S-5 (2.0 mmol) in THF (5.0 ml), was added 15% NaOH (10 ml). The solution was stirred at room temperature for 5 h. After consumption of the material (monitored by TLC), the mixture was acidified with 1N HCl solution (to PH~3.0), The mixture was extracted with DCM (10 mL X 2) and the combined organic extracts was dried over Na2SO4, and solvent was evaporated in vacuo after filtration to give a pale yellow solid, this solid was directly used for the next step without purification.
To a 50-mL oven-dried flask containing a magnetic stirring bar, the above obtained acid, propargyl alcohol (2.4 mmol), and DMAP (4-dimethylaminopyridine, 24.4 mg, 0.2 mmol) in DCM (10 mL), was added DCC (dicyclohexylcarbodiimide, 0.63 g, 2.4 mmol) in batches at 0 o C, and the reaction mixture was stirred at room temperature overnight. After that, the reaction mixture was filtered through Celtie and rinsed with EtOAc (10 mL), and the filtrates were combined. After evaporating the solvents, the residue was purified by column chromatography on silica gel (Hexanes:EtOAc = 20:1) to provide the corresponding esters S-6 as white solid (> 90% yield).
General Procedure for the Asymmetric C-H Functionalization
give the desired polycyclic products 4. All DFT calculations were performed with the Gaussian 09 software package. 10 For racemic reaction, geometry optimizations of all the minima and transition states involved were carried out using the pure functional PBE. 11,12 The SDD basis set 13
f) Reactivity discussion of substrate 3h:
For the substrate 3h, a higher temperature is need for C-H insertion reaction due to the steric repulsion between the methyl group and lactone ring in the C-H insertion transition state, as suggested by DFT calculations shown below. The C-H insertion step has activation barrier about 5 kcal/mol than that for substrate 3a when we used model catalyst (we speculated that, if real catalyst was used, this energy difference should become lower about 2 to 3 kcal/mol). We hypothesized that the reaction of 3h initially gave higher ee, because this was set up in the first step of the catalytic cycle and there was no difference from substrate 3a. But the product could then undergo racemization at 60 °C. One support for this hypothesis is that product 4c heating at 80 °C for 12 hours underwent racemization from 90% ee to 30% ee.
g) Estimation of the rotation barrier of axial chiral intermediate c-INT3
Considering that the C-H insertion step is the rate-determine step in this tandem reaction, we In the rotation transition state structure for intermediate c-INT3, remarkable distortion of chiral dirhodium catalyst and substrate part can be observed due to the strong repulsion of two aryl rings, which is shown below. | 1,036 | 2020-05-12T00:00:00.000 | [
"Chemistry",
"Biology"
] |
CT-guided 125I brachytherapy on pulmonary metastases after resection of colorectal cancer: A report of six cases
Colorectal cancer (CRC) is one of the most common malignancies in the world and distant metastasis is the main cause of cancer-related mortality. Percutaneous computed tomography (CT) guided radioactive 125I seed implantation (CTRISI) is a minimally invasive technique used to treat pulmonary metastases in CRC patients. In the present study, following colorectal cancer resection, six patients with pulmonary metastases were treated with computed tomography (CT)-guided percutaneous implantation of radioactive 125I seeds. At six months following seed implantation, CT examination was performed and compared with the images captured prior to the treatment. Of the total 13 lesions, four had disappeared, eight were reduced by >50% and one was enlarged, indicating that the local control rate was 92.3% (12/13). Overall, two patients developed intraoperative pneumothorax and one experienced hemoptysis subsequent to the procedure. Following a median follow-up period of 31 months, no local recurrence was observed in 12 of the metastatic lesions. The mean survival time was 32.7 months and the median survival time was 31 months.
Introduction
Colorectal cancer (CRC) is the third most common malignancy in Western countries and is one of the leading causes of cancer-related mortality in China (1,2). In total, approximately 10-25% of patients with CRC develop pulmonary metastases (3). As no effective chemotherapy regimen has been developed for the treatment of pulmonary metastases of colorectal origin, surgery is the only potentially curative treatment option. However, only 2-4% of pulmonary metastases can be treated surgically and others require external beam radiotherapy and chemotherapy (4). Increasing the therapeutic doses of traditional external beam radiotherapy is challenging due to the severe side-effects. Although three-dimensional conformal radiation therapy (3D-CRT) and stereotactic external beam radiotherapy can administer tumoricidal doses, the side-effect of lung tissue damage remains a problem (5).
Percutaneous computed tomography (CT)-guided radioactive 125 I seed implantation (CTRISI) is a minimally invasive modality. This brachytherapy is less time consuming and less traumatic, compared with the aforementioned treatments, and the side-effect of radiation damage is minimal (6). Patients are also more likely to accept this therapy due the minimally invasive nature of the technique. CTRISI has been used for the treatment of non-small cell lung cancer (NSCLC) (7,8). However, there have been few radioactive seed implantations for pulmonary metastases following resection of CRC and the efficiency of CTRISI has not been determined. The present study reports the preliminary results of six patients with pulmonary metastases following resection, who could not tolerate a surgical procedure and therefore, underwent CT-guided 125 I brachytherapy.
Materials and methods
In total, six patients, three males and three females, with an ages range of 68-86 years (mean ± standard deviation, 76.0±7.6 years), with pulmonary metastases following colon cancer resection, were treated with percutaneous CTRISI at the Department of Thoracic Surgery, Second Hospital of Tianjin Medical University (Tianjin, China) between November 2002 and May 2010. Informed consent was obtained from the subjects and the present study was approved by the Ethics Committee of Tianjin Medical University. The patient characteristics are shown in Table I. Of the total 13 metastatic lesions, eight were located in the left lung and five in the right lung. In total, 10 were located in the lung and three were located beneath the hilum of the lung. The average diameter was 2.8±1.5 cm (range, 1-6 cm) and the average volume was 29.5±29.4 cm 3 . A complication of right supraclavicular lymph node metastasis was observed in one case and subsequently received seed implantation.
The CT-guided brachytherapy procedure was carried out as previously described (9). Prior to the procedure, a treatment plan was prepared for each patient using a computerized treatment planning system (TPS; Prowess Panther, Prowess Inc., Concord, CA, USA) based on the CT images of the patients. The TPS generated a dose-volume histogram (DVH) and isodose curves of various percentages, and calculated the position (coordinates) of the brachytherapy applicator, dose and number of implanted seeds (Table II) (Table III).
Results
The brachytherapy catheters and 125 I seeds were satisfactorily placed in all patients. Of the six patients, three developed pneumothorax during the procedure. These patients subsequently received chest-tube drainage as a curative treatment for the pneumothorax, and two to three days following this, the condition was resolved. Hemoptysis (~20 ml) was observed in one patient; this ceased two days following the oral administration of carbazochrome salicylate (5 mg three times a day, for three days). Compared with the CT images captured prior to the procedure, the CT images obtained at the six-month follow-up revealed that four masses had been completely removed by the treatment and eight masses had been reduced in size by >50%. The CT images of one 86-year-old male patient prior to and following the procedure are shown in Figs. 1-5. Overall, only one mass was enlarged, indicating that the local control rate was 92.3% (12/13). None of the patients developed radioactive pneumonia or reduction in peripheral-blood granulocytes. Following a median follow-up period of 31 months (32.7±16.6 months; range, 8-53 months), no local recurrence was observed for the 12 metastatic lesions. Of the two patients with poorly-differentiated adenocarcinoma, one suffered from pulmonary metastases, complicated by right supraclavicular lymph node metastasis six months following radical resection. One of these patients succumbed to the disease eight months following brachytherapy and the other succumbed 29 months following brachytherapy. The four patients with well-differentiated adenocarcinoma succumbed to the disease 49, 53, 33 and 24 months following brachytherapy. The mean survival time was 32.7 months and median survival time was 31 months.
Discussion
The present study demonstrates that percutaneous CTRISI is a feasible and promising, minimally invasive modality for controlling the growth of pulmonary metastases following CRC resection, particularly in the 12 months following surgery. Although one patient experienced hemoptysis and three patients suffered pneumothorax, these side-effects could be controlled, indicating that CTRISI remains a safe treatment method in this patient population.
The target area may tolerate sustained, closer high-dose irradiation through the conformal implantation of seeds into the interior of the tumor, overcoming target volume motion, so that the local control rate can be elevated. Independent or separate entity radiotherapy studies have demonstrated that local control of the tumor body is likely to be markedly intensified following irradiation with a bioeffect dose of 90-100 Gy. Martínez-Monge et al used CT-guided permanent brachytherapy to treat seven patients with early-stage T1N0M0 NSCLC. The median dose was 144 Gy. This study found that one patient developed a focal pneumonitis three months following the treatment, and no patients developed local or regional failure within a 13 month follow-up period (7). In the present study, the mean tolerance dose of PTV (gross tumor volume + 0.5 cm) was 157.3 Gy, with a median dose of 152.4 Gy. The lesions were irradiated with a dose of approximately twice the PD and a local control rate of 92.3% was achieved, which is similar to the effective rate of 93.8% (10) for seed implantation for pulmonary metastases and 87% (11) for 3D conformal external beam radiotherapy, as previously reported in the literature, demonstrating good therapeutic effects. This may be associated with the high-dose irradiation of the small target area as well as with the sensitivity of well-differentiated adenocarcinoma to γ-ray radiation. The seeds can be accurately and evenly implanted into the target area under CT guidance. Patients with the target area located at the tumor center should undergo stereotactic puncture following contrast-enhanced CT scanning of blood vessels. The puncture needlepoint can be close to the heart and great vessels without injury to them. In the present study, D 90 88.4 Gy (>PD) revealed a uniform and reasonable dose distribution.
The 125 I-ray seed ray decays in an exponential manner with distance, which can reduce damage to tissues around the target area. The occurrence rate of radioactive pneumonia has been reported to be 44% (12) when the average therapeutic dose of the 3D conformal radiotherapy is 60 Gy. In the present study, no radioactive pneumonia was observed at PD 80 Gy, suggesting that seed implantation causes less damage as a result of radioactivity than conventional radiotherapy. Puncture-induced pneumothorax and hemoptysis can be observed on CT imaging and can be treated using conventional methods.
In conclusion, CTRISI is a safe and minimally invasive treatment modality for metastases from CRC that may aid in prolonging the survival rate in patients who cannot undergo pulmonary resection for metastases. While these results are promising, future studies including an increased number of cases, are required to gain further information with regard to CTRISI for the treatment of pulmonary metastases following CRC resection. | 2,000.6 | 2014-10-30T00:00:00.000 | [
"Engineering",
"Medicine"
] |
A Dexterity Comparison for 6 DOF Hybrid Robots
. In this paper, new hybrid robots are suggested which divided the task into a position and orientation tasks. The position mechanism controls the position whereas the orientation one manipulates the orientation of the end effector. These robots consist of a translational parallel manipulator and a rotational serial or parallel mechanism. The 3UPU or Tricept parallel manipulator and a three-axis gimbaled system or parallel shoulder manipulator are chosen for translational and rotational movements, respectively. The main goal of this paper is analyzing the development and combination of serial and parallel manipulators in order to increase their features. According to this purpose, serial and parallel mechanisms with three DOF are combined in a way to encompass six DOF space. It is shown hybrid mechanisms with less coupling between their subsystems are capable of increasing robot characteristics.
Introduction
Usually, industrial robots are made in accordance with serial manipulator architecture.Their main advantage is their large workspace with respect to their own volume and occupied floor space, easy forward kinematics and have a wide applicability in industry.Their main disadvantages are the low stiffness inherent to an open kinematic structure, errors are accumulated and amplified from link to link the fact that they have low effective load because of carrying the large weight of manipulators.Parallel manipulators have designed in a way that they have high structural stiffness.Their major drawback is their limited workspace (Campos et al., 2008;Yeshmukhametov et al., 2017).The hybrid inherit higher rigidity and higher loading capacity from serial and larger workspace and higher dexterity from parallel manipulators.The publication (Carbone & Ceccarelli, 2004) proposed a general method for analyzing stiffness of parallel-serial manipulators.This method is applied by Yang et al. (2008) to design and analyze a modular hybrid parallel-serial manipulator for robotised deburring of large jet engine components which consists of a 3-DOF (degree-of-freedom) planar parallel platform and a 3-DOF serial robotic arm.Yun and Li (2010) presents the design and modeling of a hybrid 6-DOF 8-PSS/SPS compliant dual redundant parallel robot with wide-range flexure hinges with high accurate positioning or rough positioning as well as a 6-DOF active vibration isolation and excitation to the payload placed on the moving platform.They also improved the structure and control algorithm optimization for a dual redundant parallel mechanism in order to achieve the feature of larger workspace, higher motion precision and better dynamic characteristics.
Most of the design methods of hybrid robots are based on the decomposition of translatory and rotary motions.Zeng et al. (2011) proposed a 4-DOF hybrid manipulator that allows for the realization of two translatory and two rotary output motions.Rahmani et al. (2014) composed two modules which consist of elementary manipulators with the parallel structure of Stewart Platform and shows that the proposed robots performed with high accuracy.A new reconfigurable parallel robotic manipulator is studied in the study of Coppola et al. (2014) and Unique characteristics are revealed and studied as a case study.It also solved a multi-objective optimization problem to carry out weighted stiffness, dexterity and workspace volume as the performance indices.Tian et al. (2016) presented a hybrid robot which includes two Stewart mechanisms in serial form, known as 2-(6UPS).It also addressed forward kinematic solution of 2-(6UPS) in order to develop a manipulator with additional constraints.The motion of spatial mechanisms with coupling chains (Tian et al., 2016) is two rotations and two translations (2R2T).Rastegarpanah et al. (2018) proposed a hybrid robot for ankle rehabilitation, which takes advantage of two important characteristics of parallel robots: stiffness and workspace.Rastegarpanah et al. (2018) also determined an optimum path based on maximum stiffness in the workspace of a 9 DoF hybrid parallel mechanism, which consists 9 parallel linear actuators and 2 serial moving platforms.
Recent researches refers to the efficient control techniques in order to improve the performances of the hybrid robots.Zhao et al. (2019) presents a novel 3 DOF serial-parallel hybrid leg which has improved the performances and reduced the manufacturing cost of the legs for quadruped robots.He et al. (2019) propose a control scheme for a hybrid manipulator which is used for the capturing mission in space.Liu and Yao (2019) proposed a serial-parallel hybrid worm-like robot based on two 3-RPS PMs with expandable platforms.Linearized error model of a 6-DOF polishing hybrid robot is formulated by Huang et al. (2019).The authors of the publication (Huang et al., 2019) considered all possible geometric source errors at link level.Kinematics and dynamics model of a 6-DOF serial-parallel hybrid humanoid arm is considered by Y. Li et al. (2020).Y. Li et al. (2020) has solved an optimal objective function to minimize the components coupling characteristics.He et al. (2020) presented a redundant hybrid finger mechanism actuated by flexible actuators and considered dexterity, velocity, and stiffness.J. Li et al. (2020) analyzed the correlation laws between optimization parameters and objectives of a 2UPR-RPS-RR hybridstructure robot.
In this paper, typical 3DOF serial and parallel mechanisms are studied in order to develop 6DOF hybrid manipulator.The 3UPU, Tricept, Gimbaled and Shoulder manipulators are reviewed respectively.Finally, we consider different arrangement of these mechanisms and analyze the features of their characteristics.Some quantitative metrics are applied to compare between proposed arrangements.
Optimization criteria
It is necessary to have some quantitative metrics to compare hybrid robots.Dexterity is the most important kinematic metric of the parallel and serial manipulators.The dexterity of a robotic manipulator can be described as its ability to perform small displacements in arbitrary directions as easily as possible in its workspace.It is based on the condition number ( ) of the Jacobian matrix.The condition number of a full rank matrix J can be defined as: (1993).
where J ~ is the scaled Jacobian, and V is a the overall workspace volume.* V is a centralized subregion of the overall workspace volume.In order to analyze the velocity and position of the hybrid robot, it is necessary first to treat them separately and thereafter link the individual steps into an integral procedure for the combined hybrid system.In this section, the structural characteristic of 3UPU, Tricept, Gimbaled and Shoulder manipulators are investigated.Then, we consider different arrangement of these mechanisms.
Translational Manipulator
This section aims to consider two translational parallel manipulators.
3UPU manipulator
The architecture of a 3UPU parallel manipulator is shown in Fig. 1 (a).Joshi and Tsai (2003) solved inverse and forward kinematics of a 3UPU and also explained the conditions which keep the moving platform from changing its orientation.The Jacobian of a 3UPU parallel manipulator is calculated as following: where the velocity vectors of the end effector and actuator joint rates, respectively, are denoted by
, is used to define the position of the moving platform.
A rotation matrix is defined as following.
where 1 and 2 denote two successive rotations of the moving frame and are calculated as follows.
) , ( tan ), ( cos The Jacobian matrix of a 3-DOF position mechanism relates the linear velocity of the moving platform,
T
x , to the vector of actuated joint rates, J is a 3x6 matrix called the Jacobian of the actuated legs.Joshi and Tsai (2003) presents an equation which calculates Complementing Eq. ( 8) with the identity transformation , Consequently where
Rotational Manipulator
This section aims to consider a serial and a parallel rotational manipulator.
Gimbaled mechanism
The three axes gimbal configuration is used in many systems and can be regarded as the archetype for other configurations, such as a roll-pitch-yaw, mirror stabilization or tracking mechanisms.The dynamics and kinematics of the two axes gimbals configuration is analyzed by Ekstrand (2001).In order to achieve the Jacobian of the 3 gimbaled mechanism, four reference about the y -axis.
3.
The frame P is carried into coincidence with the R frame by a positive angle of rotation R about the x -axis.According to the Fig. 2 (a), we have the following transformations: where q p, and r are the angular velocities components in each frame.The Jacobian of the rotational mechanism is calculated by mapping the angular velocities of the pitch gimbal into the fixed frame.The actuator joint rates of a rotating body as the set of Euler angles are: Determine the gimbaled J matrix which relates Euler angles to the angular velocities: The Jacobian of the actuated and passive limbs of the shoulder manipulator can be derived by using the technique in [17].The result is given below: where 1 , and 3 are pitch, roll and yaw angles defined for a moving object in space.
Finally, the Jacobian of the shoulder manipulator was derived.
Optimization
The objective of optimization is to determine the values of the manipulator design variables which derived from a minimum dexterity index.The objective and design variables were scaled so that the optimization was performed with respect to dimensionless parameters.The design variables with units of length were divided by the radius of the base to obtain nondimensional design variables.The design vector for the 3UPU manipulator is The results of the optimization criteria are shown in tables [1].Be in accordance with the main purpose of this paper, we study the various arrangements of combined manipulators.In this case, some of the typical rotational and translational manipulators were studied.The serial Gimbaled mechanism and the parallel shoulder manipulator are examples of rotational mechanism; furthermore, the 3UPU and Tricept manipulators are examples of parallel translational mechanisms.The first developed hybrid manipulator which is studied contains the 3UPU and Gimbaled manipulators.
6.1 3UPU-Gimbaled manipulator This arrangement is similar to two separate robots; because, there is no interaction between these mechanisms.The linear velocity of the center of the frame M is equal to the linear velocities of the center of the roll gimbal.Indeed, the terms of velocity such as Coriolis effect does not appear in the linear velocity of the end-effector as compared to the fixed frame.Thus, the Jacobian matrix of the Hybrid robot can be derived as follows: Eq. 24 illustrates that the Jacobian matrix is diagonal.Consequently, the rotational and translational movements are decoupled from each other.
Tricept-Gimbaled manipulator
It is clear that translational movement of the Tricept manipulator makes the end effector to rotate.The angular velocity of the hybrid robot consists of an angular velocity of the Tricept mechanism and the Gimbaled mechanism.
where Gimbaled system is equal to zero, the linear velocity of the end effector will be produced by the Tricept manipulator as following: To be considered that there is one way interaction between subsystems of this type of hybrid manipulator.The Tricept affect on the angular movement of the end effector; on the other hand, the Gimbaled has no effect on the translational movement.The dexterity index of the Tricept-Gimbaled manipulator is shown in Table 1.
Tricept-Shoulder manipulator
Based on this fact that both of the Tricept and shoulder manipulators make the end effector have translational and rotational motion, analysis of this robot is complex.The mapping between actuator's rate velocities of the shoulder mechanism and angular velocity of the end effector platform, with respect to H , is illustrated in equation ( 27).
The end effector position vector is given by: The equations ( 29) and (30) yields Thus, 11 12 The design variables for the Tricept-Gimbaled hybrid robot is shown in table 1.The dexterity index of the Tricept-Shoulder hybrid robot is calculated in two cases.In the first case, the optimized design variables of the Tricept and shoulder manipulators are used; while, in the second case, the design variables are optimized by minimizing the dexterity index of the hybrid robot.The result shows that the interactions between Tricept and shoulder mechanism increase the dexterity index of the robot.It seems that the interactions make the robot less dexterous.
Stiffness analysis
When a manipulator performs a given task, the end effector exerts force and/or moment onto its environment.The reaction force and/or moment will cause the end-effector to be deflected away from its desired location.Thus, the stiffness of a manipulator has a direct impact on its position accuracy.Let n q q q q be the corresponding vector of joint deflections., as follows: The end-effector force F is related to the end effector displacement, x , by the transpose of the stiffness matrix.
For serial and parallel manipulators, the stiffness matrix can be respectively obtained as follows.
1 , In this section, a formulation is proposed for a stiffness performance index by using the obtained stiffness matrix.A numerical investigation has been carried out on the effects of design parameters and fundamental results are discussed in the paper.The compliance displacement of the hybrid manipulators can be expressed as 11 , , The matrixes and smallest singular values of the Jacobian matrix.The following objective function ( .DI) to be minimized was proposed by Stoughton and Arai unit vector pointing along the ith limb.
is shown in Fig. 1 (b).As shown in Fig. 2, the fixed frame G is attached to the fixed base with the x-axis pointing from V to 1 g and the coordinate frame H is attached to the moving platform with the U-axis pointing from T to 1 h .The position vectors of points frames are introduced.A moving platform of the 3-UPU mechanism M the following way.1.The frame M is carried into coincidence with the Y frame by a positive angle of rotation 3 about the z -axis.2. The frame Y is carried into coincidence with the P frame by a positive angle of rotation 2 Similarly, Y P C is the transformation from P to Y , and P R C is the transformation from R to P .For the angular velocities of frames
E
16) (a) Gimbaled system (b) Shoulder manipulator Fig. 2 Rotational Manipulators5.2 Shoulder mechanismSadjadian and Taghirad (2006) has solved the forward and inverse kinematic of the shoulder mechanism which is shown in the Fig.2 (b).The parameters can be defined as: are the center of the reference frame and end points of the actuators, respectively.iN denotes the moving end points of the actuators, and the position of the moving platform center N is defined by:
denotes the angular velocities of the roll gimbal (with respect to the H frame) and H frame (with respect to the fixed frame) respectively.If the variable g l of the of F with respect to the parallel or serial frames.In fact, matrix ST C maps the end-effector force F into the serial manipulator's base frame; likewise, matrix PT C maps F into the serial manipulator's base frame.Therefore, the stiffness matrix of the hybrid manipulator can be written as for hybrid manipulator.Equation (36) shows that the stiffness of a hybrid manipulator depends on it's position and direction.i , eigenvalus of the stiffness matrix, represents the stiffness of the manipulator in the corresponding eigenvector direction ( i v ).min denotes the minimum eigenvalue while max the maximum eigenvalue, then the minimum stiffness occurs in the min v direction and maximum stiffness occurs in the max v direction.For comparison purpose, the stiffness constant is taken to be 700 N/m for all linear or revolute actuators.The maximum and minimum stiffness values at each point within the workspace of the three manipulators have been computed.The maximum and the minimum stiffness mappings for the three manipulators at the elevation z = 0.7 are shown in Fig.5through 7.
analysis of the optimized 3UPU-Gimbaled hybrid manipulator (a) Maximum stiffness (b) Minimum stiffness Fig. 6 Stiffness analysis of the optimized Tricept-Gimbaled hybrid manipulator (a) Maximum stiffness (b) Minimum stiffness Fig. 7 Stiffness analysis of the optimized Tricept-Shoulder hybrid manipulator | 3,633 | 2020-09-24T00:00:00.000 | [
"Computer Science"
] |
A BRIEF NOTE ON BUS BASED EVACUATION PLANNING
Evacuation route planning is essential for emergency preparedness, especially in regions threatened by hurricanes, earthquake etc. The evacuees who do not travel on their own are gathered at few collection points, where they are brought on buses to take them to safe region in bus based evacuation. Bus based evacuation planning is more essential for developing country like Nepal. It is necessary to reach the destination as early as possible for bus based evacuation. The bus based evacuation planning (BEP) problem is a variant of vehicle routing problem that arises in emergency planning. The problem in this variant is that not all the evacuees can gather at the same time, elderly and handicaps may need special help like wheelchair and may need more pickup time. Another problem is that at the shelter, some people may need special care like medicine, oxygen etc. In this paper we review a mathematical model to minimize the duration of evacuation together with the dynamic programming and the branch and bound as solution procedures. Moreover, a brief report of a case study for Kathmandu has also been given.
INTRODUCTION
Evacuation is a process in which threatened people are shifted from dangerous places to safer places in order to reduce the health and life vulnerability of affected people as quickly as possible.Evacuation planning is the emergency management in which people are transported from danger zone to safe zone as soon as possible in a minimum time.Evacuation planning is essential for emergency preparedness, especially in regions threatened by hurricanes, typhoon, or after earthquake, landslide, flood, fire, bomb blast, industrial accident, terrorist attack etc.In bus based evacuation evacuees who do not travel on their own due to age, sickness, handicapped, children, tourists or who do not have their own car are gathered at few collection points, where they are brought on buses to take them to safe region.Bus based evacuation planning (BEP) is more essential for developing country like Nepal because most of the people in Nepal cannot afford their own car and are bus-based.Route planning for bus evacuation is necessary to reach the destination on shortest time.The problem of bus based evacuation is that all the evacuees cannot gather same time, elderly and handicaps may need special help like wheelchair and may need more pickup time.Another problem is that at the shelter, some people may need special care like medicine, oxygen etc.There are many studies regarding vehicle routing.However, only few studies have been done till date regarding the bus based evacuation planning.Bish (2011) introduced a model for bus based evacuation planning.Pyakurel et al. (2016) performed a case study on transit dependent evacuation planning for Kathmandu valley by using branch and bound algorithm for BEP and tabu search for robust BEP.Goerigk et al. (2013) proposed several branch and bound algorithm for bus based evacuation.Goerigk and Grun (2014) considered robust bus evacuation models in which the number of evacuees is assumed to be not known but it can be estimated.This paper reviews a mathematical model for BEP problem with the dynamic programming and the branch and bound as solution procedures.Moreover, a brief report of a case study for Kathmandu has also been given.This study is especially useful when advance notice of a threat like hurricane, typhoon is available or after land slide, earthquake and flood etc.The objective of this paper is to transport evacuees from the pickup locations to the shelters in the minimal amount of time.We define the duration as the time span between the first bus leaves its yard until the last evacuee is sheltered.We assume that evacuees that do not travel on their own vehicle due to age, sickness, the lack of a private car or any other reasons, are gathered at few collection points (demand nodes), where they are picked up by buses to take them to safe region (shelter).We have applied Bellman's equation and branch and bound by solving an example.The paper has been organized as follows.A mathematical formulation of the BEP problem has been described in Section 2, solution procedures the dynamic programming and the branch and bound and also a report for the case study for Kathmandu have been studied in Section 3. The last section concludes the paper.The number of evacuees at demand node might be greater than the capacity of a bus, so the split delivery is allowed.BEP network is not fully connected because outgoing arcs are used to go only outside from the yards.The yard plays no further role in the evacuation process because buses do not return to the yard due to the bus and driver's safety.Liman (2006) mentioned that hurricane Katrina flooded bus yards along with buses in New Orleans.So yard is not the best place to store the buses during the threat.
Decision variables
The variables used to develop following formulation for BEP are binary variable that equals 1 if trip t for bus m traverses arc , else .number of evacuees from node j assigned to (or, if is shelter, released from) bus m after trip t, the duration of the evacuation.
Formulation for BEP
The Bus Evacuation Problem (BEP) is a vehicle routing problem that arises in emergency planning.
It models the evacuation of a region from a set of collection points to a set of capacitated shelters with the help of buses, minimizing the time needed to bring the last person out of the endangered region (Goerigk et al. 2013).
Following formulation have been introduced to minimize the duration of evacuation for BEP: (1) Minimize Objective function ( 1) is to minimize the duration of evacuation.Constraint (2) requires to be greater than or equal to the maximum cost incured by any bus which is then minimized by the objective function (1).Therefore it is referred as the min-max" objective.Constraint (3) is the flowbalance constraint for the demand node.It ensures that bus travelling to demand node j on trip t leaves node j on trip t+1.Constraint ( 4) is the flow-balance constraint for the shelter.It ensures that the last bus doesn't have to leave the shelter.Constraint (5) allows a bus to make at most one trip at a time.Constraint (6) denotes that the first trip of each bus starts from its yard.Constraint (7) doesn't allow buses to start from yard for later trips.Constraint (8) doesn't allow the last trip of bus to end at the demand node.Constraint (9) dictates that bus can pick up evacuees from node j, if it travels to node j.Constraint (10) is the bus capacity.Constraint (11) is the shelter capacity.Constraint (12) ensures that all the evacuees are picked up from demand node.Constraint (13) ensures that all the evacuees are delivered to the shelter.Constraints ( 14) and ( 15) are logical binary and non-negativity restrictions on the x and b variables respectively.
Solution techniques BEP as a network
The BEP network structure adds complexity to the solution.In addition to the demand nodes to which initially served from each yard, it indicates which shelter each vehicle should use considering shelter capacities and their location to the demand nodes.However, according to Bish (2011) even for the case where the shelters are un-capacitated (have sufficient capacities for all evacuees), sending each vehicle to its closest shelter from its last pick up node is not necessarily optimal.It is shown by the following example: Observation 1: The route may not be optimal even though it passes through the nearest shelter.According to Bish (2011) even without shelter capacity constraints, it is not always optimal for the BEP to allocate each vehicle to the nearest shelter (i.e. the shelter closest to its pick up node).It is clearer by the following example: Example1: For optimal solution a bus might not use the nearest shelter if it is scheduled to serve additional routes, even without capacity constraints.For instance let's consider the following network fragment.There are two pick up nodes (P 1 and P2) and two shelters (S1and S2).A bus has to pick up a full load of evacuees at node P1 and must pick up a full load of evacuees at node P2also.The bus can transport the current load of evacuees to either shelter S1 or S2.In the following figure shelter S1 is closer than S2 from pick up node P1.If the closest shelters are used, the bus will take route P1-S1-P2-S2 which has the cost of 15 (5+7+3), whereas if the bus takes route P1-S2-P2-S2, the cost would be 14 (8+3+3) which is optimal solution.Dynamic programming A Bellman's equation, named after its discoverer, Richard Bellman, also known as a dynamic programming equation, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming.
Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time.Therefore, it requires keeping track of how the decision situation is evolving over time (Wikipedia).
According to Pedregal (2004) in order to complete a whole process and to reach a desired state, a system has to move successively through a number of different steps.Each one of these actions has an associated cost.To reach the desired state from the given initial state we have to determine the optimal global strategy with the least cost.
Variables:
denotes the variable indicating the successive stages in which a decision must be made about where to lead the system denotes the variable describing the state of the system.At each step , we should have denotes the finite set of feasible states when .The cost associated with the passage from to is denoted by The main aim of the present work is to reach the final desired state from the initial state with the least cost by determining the optimal strategy.This is a typical situation of dynamic programming.We know the optimal path starting from and going to for each where .
Let denotes the cost associated with such an optimal strategy ending at .
The problem which is fundamental law or property of dynamic programming through which optimal cost from to can be found out in the most rational way.Proposition 1: Pedregal (2004) has given following proposition to understand the fundamental property of dynamic programming.If the S denotes the optimal cost from to , then we must have S = We have applied Bellman's equation to find the shortest distance for BEP.Example has been given below: Example 2 Let us consider the network fragment with one yard {Y}, three pick up nodes {P1, P2, P3} and two shelter nodes {S1, S2}.
Figure 3: BEP type network (Scenario YP3S2Vv).
In this example we have three stages with associated sets of feasible states.For each of the pickup nodes {P1, P2, P3}, we can find a unique path from yard Y, so that it must be optimal, and S( , P1) = 8, S( , P2) = 5, S( , P3) = 2.
For the each of the shelter nodes {S1, S2}, we can find the optimal cost based on the fundamental property of dynamic programming.Consequently the shortest distance (minimum cost) is 7 and the corresponding route is Y-P3-S2.
Branch and Bound
Brach and bound method is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as general real valued problems.A branch and bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root.The algorithm explores branches of this tree, which represent subsets of the solution set.Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm (Wikipedia).
The algorithm depends on the efficient estimation of the lower and upper bounds of a region / branch of the search space and approaches exhaustive enumeration as the size (n-dimensional volume) of the region tends to zero (Wikipedia).
The method was first proposed by Land and Doig in 1960 for discrete programming, and has become the most commonly used tool for solving NP-hard optimization problems (Land andDoig 1960, Jens 1999).The name "branch and bound" first occurred in the work of Little et al. on the traveling salesman problem (John et al. 1963, Egon & Paolo 1983).
According to Papadimitriou and Steiglitz (2006) the branch and bound method is a way in which we try to construct a proof that a solution is optimal based on successive partitioning of the solution space.The branch refers to the partitioning process and bound refers to lower bounds that are used to construct a proof of optimality without exhaustive search.Like Bell's equation this method is also applied to find out the shortest distance for BEP.
In a general context two things were needed to develop the tree in the branch and bound algorithm for ILP (Integer Linear Programming).1. Branching A set of solutions, which is represented by a node, can be partitioned into mutually exclusive sets.Each subset in the partition is represented by a child of the original node.2. Lower bounding An algorithm is available for calculating a lower bound on the cost of any solution in a given subset.The set active set is used to hold the live nodes at any point; the variable U is used to hold the cost of the best complete solution at any given time (U is an upper bound on the optimal cost).Notice that the branching process needs not to produce only two children of a given node, as in the ILP version, but any finite number.
Algorithm (Papadimitriou & Stieglitz 2006 To determine the optimal solution of above mentioned example 2 by branch and bound, following figure has been used.The number given adjacent to the node is the total distance.Let us consider one yard {Y}, three pick up nodes {P1, P2, P3} and two shelter nodes {S1, S2} in the following tree search.In order to solve the figure 3 by branch and bound we branch by choosing the next arc with which continue the path.The figure 4 shows a snapshot of a search tree that results when we branch from a node with the lowest lower bound at any point.The total distance of the following paths are given below: The nodes other than the lowest one are killed.So the optimal solution is 7 and the corresponding path is Y-P3-S2.Pyakurel et al. (2016) have done a case study on transit dependent evacuation planning for Kathmandu valley.The objective of this study was to formulate a mathematical model of the densely populated metropolitan capital Kathmandu and implement an evacuation plan using available efficient software.They applied two models and solution algorithms to evacuate a core part of the capital city Kathmandu of Nepal, where a large part of population is transit dependent.They applied branch and bound algorithm for bus based evacuation planning (BBEP) and tabu search for robust bus based evacuation planning (RBBEP).They calculated and compared minimum, average and maximum evacuation times obtained by both the algorithms.The study showed that the results obtained by branch and bound algorithms for the BBEP with perfect information are always better than that of tabu search for the RBBEP with uncertainty in the case of minimum, average and maximum evacuation times.In this study they found that the choice of number of sources and sinks have not played significant role in both approaches.This study motivated to conduct a number of researches like multi-depot multi-model evacuation planning with car and bus based evacuation and with contra flow.The authors have also recommended other case studies in Kathmandu valley with increased number of parameters such as buses, sources, sinks and depots.
Conclusion
In bus based evacuation evacuees who do not travel on their own due to age, sickness, handicapped, children, tourists or who do not have their own car are gathered at few collection points, where they are brought on buses to take them to safe region during emergency.Bus based evacuation planning (BBEP) is more essential for developing country like Nepal because most of the people in Nepal cannot afford their own car and are bus-based.BEP is difficult to solve because it is interrelated with construction of routes, assignment of routes to the multiple vehicles and selection of a shelter for each route.There have been only few studies regarding bus based evacuation.We have applied branch and bound and Bellman's equation which is related to dynamic programming with some modification to solve the problem.For further study we can extend above mentioned model by using bus accommodated with wheelchair, considering pickup time and shelters with special facilities.
ACKNOWLEDGMENTS:
The research of Shree Ram Khadka was supported by the European Commission in the framework of Erasmus Mundus and within the project cLINK and Kantipur Engineering College.
Figure 2 :
Figure 2: A network for the un-capacitated shelter example.
Figure4:
Figure4: A shortest path problem and its solution by branch-and -bound. | 3,928.4 | 2016-11-24T00:00:00.000 | [
"Computer Science"
] |
Reconstruction formula for differential systems with a singularity
Our studies concern some aspects of scattering theory of the singular differential systems $ y'-x^{-1}Ay-q(x)y=\rho By, \ x>0 $ with $n\times n$ matrices $A,B, q(x), x\in(0,\infty)$, where $A,B$ are constant and $\rho$ is a spectral parameter. We concentrate on the important special case when $q(\cdot)$ is smooth and $q(0)=0$ and derive a formula that express such $q(\cdot)$ in the form of some special contour integral, where the kernel can be written in terms of the Weyl - type solutions of the considered differential system. Formulas of such a type play an important role in constructive solution of inverse scattering problems: use of such formulas, where the terms in their right-hand sides are previously found from the so-called main equation, provides a final step of the solution procedure. In order to obtain the above-mentioned reconstruction formula we establish first the asymptotical expansions for the Weyl - type solutions as $\rho\to\infty$ with $o\left(\rho^{-1}\right)$ rate remainder estimate.
Introduction
Our studies concern some aspects of scattering theory of the differential systems with n × n matrices A, B, q(x), x ∈ (0, ∞), where A, B are constant and ρ is a spectral parameter. Differential equations with coefficients having non-integrable singularities at the end or inside the interval often appear in various areas of natural sciences and engineering. For n = 2, there exists an extensive literature devoted to different aspects of spectral theory of the radial Dirac operators, see, for instance [1], [2], [3], [4], [5].
Systems of the form (1) with n > 2 and arbitrary complex eigenvalues of the matrix B appear to be considerably more difficult for investigation even in the "regular" case A = 0 [6]. Some difficulties of principal matter also appear due to the presence of the singularity. Whereas the "regular" case A = 0 has been studied fairly completely to date [6], [7], [8], for system (1) with A = 0 there are no similar general results.
In this paper, we consider the important special case when q(·) is smooth and q(0) = 0 and, provided also that the discrete spectrum is empty, derive a formula that express such q(·) in the form of some special contour integral, where the kernel can be written in terms of the Weyl -type solutions of system (1). Formulas of such a type play an important role in constructive solution of inverse scattering problems: use of such formulas, where the terms in their right-hand sides are previously found from the so-called main equation (see, for instance, [9], [10]), provides a final step of the solution procedure. In order to obtain the above-mentioned reconstruction formula we establish first the asymptotical expansions for the Weyl -type solutions as ρ → ∞ with o (ρ −1 ) rate remainder estimate.
Preliminary remarks
Consider first the following unperturbed system: and its particular case corresponding to the value ρ = 1 of the spectral parameter but to complex (in general) values of x. Assumption 1. Matrix A is off-diagonal. The eigenvalues {µ j } n j=1 of the matrix A are distinct and such that µ j − µ k / ∈ Z for j = k, moreover, Reµ 1 < Reµ 2 < · · · < Reµ n , Reµ k = 0, k = 1, n.
• the symbol V (m) , where V is n × n matrix, denotes the operator acting in ∧ m C n so that for any vectors u 1 , . . . , u m the following identity holds: • if h ∈ ∧ n C n then |h| is a number such that h = |h|e 1 ∧ e 2 ∧ · · · ∧ e n ; • for h ∈ ∧ m C n we set:
Asymptotics of the Weyl -type solutions
Let S ⊂ C \ Σ be an open sector with vertex at the origin. For arbitrary ρ ∈ S and k ∈ {1, . . . , n} we define the k-th Weyl -type solution Ψ k (x, ρ) as a solution of (1) normalized with the asymptotic conditions: If q(·) is off-diagonal matrix function summable on the semi-axis (0, ∞) then for arbitrary given ρ ∈ S k-th Weyl -type solution exists and is unique provided that the characteristic function: are certain tensor-valued functions (fundamental tensors) defined as solutions of certain Volterra integral equations, see [14], [16] for details.
For arbitrary fixed arguments x, ρ (where ∆ k (ρ) = 0) the value Ψ k = Ψ k (x, ρ) is the unique solution of the following linear system: This fact and also some properties of the Weyl -type solutions were established in works [14], [17], in particular, the following asymtotics for ρ → ∞ was obtained: For our purposes we need more detailed asymptocis that can be obtained provided that the potential q(·) is smooth enough and vanishes as x → 0.
We denote by P(S) the set of functions F (ρ), ρ ∈ S admitting the representation: Here the set Λ (depending on F (·) ∈ P(S)) is such that Re(λρ) < 0 for all λ ∈ Λ, ρ ∈ S. We note that the set of scalar functions belonging to P(S) is an algebra with respect to pointwise multiplication.
Suppose that all the functions Then for each fixed x > 0 and ρ → ∞, ρ ∈ S the following asymptotics holds: where Γ(x) is some diagonal matrix, E(x, ·) ∈ P(S).
For the Weyl -type solutions of the unperturbed system we have the asymptotics (following directly from their definition):Ψ whereΨ 0k (x, ρ) := exp(−ρxR k )Ψ 0k (x, ρ). Here and below we use the same symbol E(·, ·) for different functions such that E(x, ·) ∈ P(S) for each fixed x. We rewrite relations (5) in the form of the following linear system with respect to valuẽ Ψ k =Ψ k (x, ρ) of the functionΨ k (x, ρ) := exp(−ρxR k )Ψ k (x, ρ): By making the substitution:Ψ we obtain:F The obtained relations we transform into the following system of linear algebraic equations: with respect to coefficients {γ jk } of the expansion: Coefficients {m ij }, {u i } can be calculated as follows: Using (7), (8) and taking into account that: we obtain the following asymptotics for the coefficients of SLAE (10) as ρ → ∞: and for i = 1, k − 1.
Proceeding in a similar way we obtain: where δ i,k is a Kroeneker delta, Using the obtained asymptotics we obtain from (10) the auxiliary estimate γ ik (x, ρ) = O (ρ −1 ).
Then, using in (10) the substitution γ ik (x, ρ) = ρ −1γ ik (x, ρ) (where, as it was shown above, γ ik (x, ρ) = O(1)) we obtain for i = k, n: In view of (12), (15) this yields: Similarly, for i < k we have: Using (13) the obtained relation can be transformed as follows: Now, using in the right hand side of the obtained formula (13) for m ij (x, ρ) and (18) forγ jk (x, ρ) with j = k, n we conclude that formulas (18) are true for i < k as well.
In our further calculations we use particular form of the coefficients f k,α (x) and g k,α,β (x) given by Theorem 1 [16].
For i = k, n from (18), (16), (12) we get: Theorem 1 [16] yields: Recall that any arbitrary linear operator V acting in C n can be expanded onto the wedge algebra ∧C n so that the identity remains true for any set of vectors h 1 , . . . , h m , m ≤ n; moreover, for any h ∈ ∧ n C n one has V h = |V |h (here |V | denotes determinant of matrix of the operator V in the standard coordinate basis {e 1 , . . . , e n }). In what follows the symbol f denotes the above mentioned expansion of the operator corresponding to the transmutation matrix f. We should note also that the relation f is true for any n × n matrix V . Taking this into account we obtain: For the particular multi-index α = α * (k − 1) ∪ i arising at (19) and arbitrary n × n matrix V we have: Substituting the obtained relations into (19) we arrive at: Proceeding in a similar way in the case i < k, using (13), (17) we obtain: Theorem 1 [16] yields: Repeating the arguments above we obtain: In particular, one gets: If β = α ′ \ k, α = α * (k − 1) \ i, i < k, then for arbitrary n × n matrix V we have: Substituting the obtained relations into (21) we arrive at: From (22), (20), (18) we obtain: In terms of the matrix γ = (γ ik ) i,k=1,n this is equivalent to: where the matrix Γ(x) is diagonal. Finally, using (11) in the formΨ(x, ρ) = fγ(x, ρ) we obtain the required relation.
Taking into account that F + (x, ζ) − F − (x, ζ) = ζ[B,P (x, ζ)] we obtain: On the other hand, we can proceed in a similar way applying the Cauchy formula to the function P (x, ρ) − I. Thus we obtain: P (x, ρ) − I = 1 2πi Substituting this to the definition of the function F (x, ρ) we arrive at the representation: F (x, ρ) = q(x) + lim Compare it with (25) we obtain the desired relation. | 2,247 | 2020-12-12T00:00:00.000 | [
"Mathematics"
] |
Vanadium Characterization in BTO : V Sillenite Crystals
Visible and Infrared Optical Absorption and Electron Paramagnetic Resonance (EPR) techniques have been used to characterize the intrinsic defects in sillenite type crystals: nominally pure Bi12TiO20 (BTO) and doped with vanadium (BTO:V). Optical quality crystals, with the composition Bi12.04±0.08Ti0.76±0.07V0.16±0.02O20, have been grown. Results obtained by these different techniques have shown unambiguously the 5+ valence state of the vanadium ion in BTO:V crystals. In pure BTO samples, the EPR and optical spectra show strong evidence of the presence of the intrinsic defect BiM + ho, which consists of a hole h, mainly located on the oxygen neighbors of the tetrahedrally coordinated Bi3+ ion. After doping with vanadium, results have shown that the characteristic bands, associated to this hole defect center, disappear, suggesting its transformation in single Bi. Anisotropy of the EPR spectra , at 20 K, is related to Fe impurities.
Introduction
Bismuth oxide compounds, with a Bi 12 MO 20 chemical composition (where M = Ge, Si, or Ti), crystallize on the space group I23 (sillenite structure).They exhibit a number of interesting properties, including: piezoelectric, electrooptical, elasto-optical, optical activity and photoconductive properties.Specially interesting is the combination of the electro-optical and the photoconductivity properties, from which results the so-called photorefractive effect, consisting of a reversible light-induced change in refractive index 1 .Due to these properties, sillenite crystals are useful for many advanced and promising applications, such as a reversible recording medium for real-time holography or for image processing applications 2,3 .
Bismuth titanium oxide crystal, Bi 12 TiO 20 (BTO), have some practical advantages relative to its isomorphous Bi 12 SiO 20 (BSO) and Bi 12 GeO 20 (BGO), including lower optical activity, larger electro-optic coefficient and higher sensitivity to red light 3 .Depending on a good knowledge of the defects in these materials, a detailed understanding of the photorefractive behavior and, consequently, the optimization of its corresponding response, can result 2,3 .In-formation about the occurring microscopic process in the crystals and, eventually, materials with better properties, can be obtained by adding some impurities in the crystal composition.
Recently, few articles have been published about the optical properties of BTO:V crystals, with different amounts of vanadium impurity [4][5][6] .In particular, Volkov et al. 6 have investigated the oxidation state of vanadium in BTO crystals, using optical absorption spectra, measured at room temperature, from 22000 to 1200 cm -1 and Electron Paramagnetic Resonance (EPR) measured in the range of 77 K to 300 K.They have concluded that the vanadium is present in 5+ valence state.However, Kool and Glasbeek 7 have reported that the EPR spectra of V 4+ in SrTiO 3 disappear above 35 K, therefore EPR measurements, below 35 K, are required to identify the oxidation state of vanadium ions in this material.Furthermore, it is well known from literature 8,9 that the absorption bands near 750-800 cm 1 , in sillenite type crystals, are characteristic of the (VO 4 ) -3 group.However, BTO crystals show low light transmission values for wave numbers lower than 1200 cm 1 , making necessary to employ more effective methods, in addition to the standard IR absorption tech-niques, to characterize the (VO 4 ) -3 absorption bands in BTO:V samples.
In this work, the growth process of an optical quality vanadium doped BTO crystal (BTO:V) and a more detailed study of the vanadium oxidation state, is reported.For optical characterization of the crystals, visible and infrared absorption were used.To investigate the ground state of the paramagnetic impurities, and the structure of the paramagnetic defects, which are responsible for the observed absorption bands, EPR measurements in the temperature range 20 -300 K, were performed.
Crystal Growth
The crystals growth experiments were made by pulling technique, using a resistive heating furnace, equipped with a 808 Eurotherm microprocessor-based digital temperature controller unit, attached to a Pt-Pt10%Rh thermocouple 10 .The temperature fluctuations were typically lower than 0.2 °C, as measured near the crucible.An axial temperature gradient above the melt, about 30 °C/cm, was measured with a platinum thermocouple attached to the seed holder.High purity platinum cylindrical crucibles, with 35 x 35 mm approximated dimensions, were used.
Because bismuth titanium oxide melts incongruently, single crystals of BTO:V have been grown from high temperature nonstoichiometric solutions, with excess of Bi 2 O 3 solvent.The starting batch melt was prepared by thoroughly mixing of appropriate amounts of bismuth oxide (Johnson Matthey, 99.9995%), titanium oxide (Johnson Matthey, 99.995%) and vanadium oxide (Merck, extra pure), followed by its melting, at temperatures in the range of 900 °C to 950 °C, during periods of time varying from 12 to 24 h.All runs have been carried out in air.BTO seeds oriented along the [001] direction, held in a pure platinum seed holder, have been used to initiate the crystal growth.Pulling rates in the range of 0.1 to 0.2 mm/h, and rotation rate of 30 rpm were used.After growth, the crystals were annealed at 750 °C, in an appropriate furnace in order to reduce thermal stresses.
Starting compositions of 10Bi 2 O 3 :(1-x)TiO 2 :x/2V 2 O 5 , where x = 0.10, 0.13, 0.15 and 0.25, were used.For x = 0.25 no single crystal was obtained, but a polycrystalline material was withdrawn from the crucible.For another values of x single crystals were obtained.All the grown crystals presented in their structure many defects such as inclusions, cracks and high mechanical fragility, only the crystal with x = 0.10 presented optical quality.Intense stress induced birefringence can be seen in these samples by analysis in an optical microscope, equipped with crossed polarizers.The probable origin of these defects is the constitutional supercooling which arises because of the high segregation coefficient of vanadium in this system.This is corroborated by the cellular structure that can be seen at the end face of the crystals.The composition of the grown crystal with x = 0.10 in the melt, was measured by Wavelength Disperse Spectroscopy (WDS), in a digital scanning electron microscope.Its chemical formula can be written as Bi 12.04±0.08Ti 0.76±0.07V 0.16±0.02O 20 , where the oxygen content was obtained by stoichiometric calculations.From these results, the effective segregation coefficient for vanadium in BTO was calculated as k eff = 1.6 ± 0.2.Pictures of asgrown BTO:V crystals are shown in Fig. 1.The first two crystals in the left side were grown from x = 0.10 in the melt, and the others from x = 0.15.Structural defects can be seen in the two crystals at right side.
Experimental Techniques
All measurements in BTO:V were performed with samples obtained from crystals grown of the melt composition x = 0.10.The nominally pure BTO samples were obtained from the single crystals grown as described before.The samples used for optical absorption were cut and carefully polished, with thickness ranging from 160 to 800 µm.Samples for photoacustic measurements were used as a very fine powder obtained from the crystals.For EPR measurements, X-ray oriented samples, with cross section of 2 x 2 mm, were used.
All optical measurements were performed at room temperature.Optical absorption in the visible region was measured with a Cary 17 spectrophotometer.At the infrared region a Nicolet spectrophotometer (model 850) was used to measure the optical transmition in the high energy range, where the samples have a high transmission coefficient.At the lowest energy range, below 1000 cm -1 , photoacustic technique was used.
For EPR measurements, a home built CW spectrometer, operating at X-band, with field modulation frequency of 85 kHz, was used.For allowing comparison among samples spectra, intensities were measured by normalization with the spectrum of a standard ruby sample (Al 2 O 3 :Cr 3+ ), 88 Carvalho et al.
. Some as-grown BTO:V crystals.The first two crystals in the left side were grown from composition x = 0.10 in the melt, and the others from x = 0.15.Structural defects can be seen in the two crystals at right side.
placed inside the microwave cavity and kept at room temperature.A very precise Gaussmeter (Sentec, model 1101) and a Frequency Counter (Hewlett Packard, model 5352B) were used, respectively, for accurate measurements of the magnetic field and microwave frequency values.With this facility, and because of the high stability of the magnet power supply and microwave frequency, the g-values can be measured with an absolute accuracy of ±0.0001.A "Helitran" type gas-flow temperature controller have provided temperature control set with ±1 K precision, within the range of 20 to 300 K.
Optical absorption spectra
In the visible region there is only one perceptive change in the absorption spectra of BTO:V, in comparison to that of the nominally pure BTO, which is the broad absorption shoulder, at the photon energy range of 2.3 to 3.0 eV, typical for crystals of the sillenite type 11 , but absent in the spectra of the vanadium doped samples.This absorption is associated with the intrinsic defect (Bi M 3+ + ho + ), due to an improper occupation of the tetrahedrally coordinated M site, of an Bi 3+ coupled with a hole, h + , in the surrounding oxygen tetrahedron 11 .It is known that the absorption shoulder is absent in crystals with 3+ (e.g.Ga and Al) or 5+ (e.g.P) doping ion valences 11,12 .This has been explained, by assuming a transformation of the absorption center (Bi M 3+ + ho + ) to BiM 5+ , with the first type of dopants and, to BiM 3+ with the second type of dopants 12,13 .According to this model, it is expected that the vanadium ion in BTO:V samples, studied here, are to be most probably in the oxidation state 3+ or 5+.Measurements of the FT-IR transmission spectra of BTO and BTO:V, in the range 400 -6000 cm -1 , have shown only one perceptive change: the appearance of a doublet with maximum absorption at 1528 and 1559 cm -1 .This is in accordance with Volkov et al. 6 which have related this effect to two-phonon transitions, which are associated to the (VO 4 ) -3 group in the M site of the crystal structure.As above-mentioned, BTO crystals have very low light transmission values for wave numbers lower than 1200 cm -1 .To get a better definition of this spectral region, the IR spectra were obtained using photoacustic techniques.The photoacustic spectra, between 400 to 1000 cm -1 , are shown in Fig. 2. A clear absorption band centered at 767 cm -1 , observed only in the doped sample BTO:V, is a characteristic band of the (VO 4 ) -3 group in sillenite type structures 8,9 .This fact is an indication that the oxidation state of vanadium, in this material, is 5+.The remainder spectral region showed no difference between BTO and BTO:V samples.
EPR spectra
EPR measurements have been performed in the temperature range of 20 K to 300 K.The results obtained for BTO and BTO:V samples, at 20 K, are shown in Fig. 3.At room temperature, the EPR spectra of the nominally pure BTO samples have shown only one intense and highly isotropic lorentzian line, with g = 2.0061 and peak to peak width of about 95 gauss.In contrast with what has been reported before 6,14 , no other bands, arising from paramagnetic impurities ions such as iron, chromium or manganese, were observed at room temperature.The well defined room temperature EPR band, observed in here, is not expected to come from those ions; it is most likely to come from an effective spin 1/2, such as an electron or a hole center, without hyperfine interactions.The fact that the measured g-value is slightly greater than the free electron value (2.0023) is an indication that this spectrum is due to a hole center.A similar spectrum was reported by Baquedano et al. 14 for BSO crystal.Their results, and the optically detected magnetic resonance investigations conducted by Reyher et al. 15 , have showed that the paramagnetic center responsible for this spectrum, in nominally pure sillenite crystals, is due to the intrinsic defect (Bi M 3+ + ho + ).Accentuated changes in the EPR spectra of pure BTO samples, at 20 K, relative to the room temperature data, can be observed in Figs.3a and 3b.In addition to a reduction in the line width (from 95 to 50 gauss), the line becomes anisotropic.If the magnetic field is applied parallel to the [100] direction, resolved structures can be clearly seen, showing satellites on each side of the main absorption line.The intensities of the satellite lines decrease with increasing temperature, but remain resolved up to 150 K. Above 150 K their intensities are very low and the resulting line is isotropic.Similar spectra have been observed for BSO 16 and BGO 17 single crystals, doped with iron.The associated defect is attributed to Fe 3+ ions, located in tetrahedral Figure 2. Photoacustic spectra between 400 to 1000 cm -1 of BTO and BTO:V samples.The absorption band centered at 767 cm -1 is characteristic of the (VO4) -3 group in sillenite type crystals and the remainder are related to Bi-O modes.
positions at the Si or Ge sites.Due to the high symmetry at the iron site, the resulting triplet structure repeats when the magnetic field is rotated 90° in the [100] plane, as were observed in this study.According to this argument, the spectrum of the pure BTO sample, at 20 K, can be interpreted as a superposition of the anisotropic spectrum of Fe 3+ impurities, located at high symmetry sites, with an isotropic line produced by the presence of the hole centers.EPR spectra of the vanadium doped sample are shown in Figs.3c and 3d, where it can be seen no evidence of the hyperfine splitting, usually consisting of 8 absorption peaks (nuclear spin I = 7/2), characteristic of the paramagnetic vanadium spectrum.Based on this fact, it could be concluded that the vanadium in BTO:V crystals is in a diamagnetic state, so its valence state must be 5+.This in accordance with previous works 8,9 .Another interesting feature in the spectrum of the BTO:V sample, when compared to pure BTO, is a great reduction in line intensity, as it can be seen in Figs.3c and 3d.After doping, it is apparent the complete disappearance of the absorption band associated to the hole center.The paramagnetic center, responsible for the weak and almost isotropic absorption band observed in BTO:V has not been identified.With increasing temperature, the BTO:V spectrum gets weaker and, within the sensitivity of the equipment, it can be hardly seen at room temperature.Furthermore, the line shape observed in BTO:V sample is complex, having a relatively narrow band on top of a broad background signal.The disappearance of the isotropic absorption line, caused by vanadium doping, can be interpreted by the same mechanism which is responsible for the absorption shoulder disappearance in the visible spectrum: the partial transformation of the paramagnetic center (Bi M 3+ + ho + ) in BiM 3+ or BiM 5+ .The anisotropic lines observed in the pure sample, at 20 K, which are assigned to Fe 3+ impurities, do not appear in the spectrum of the doped samples.However, an isotropic EPR line, with g ≈ 4.3 (not shown in Figs.3c and 3d) was observed in the BTO:V spectrum at 20 K, indicating that the Fe 3+ impurity is, in this case, located at a low symmetry site.Iron has a lower segregation coefficient, compared to vanadium so, it is reasonable to expect that the high symmetry sites will be occupied by vanadium ions.The traditional explanation for the occurrence of the g ≈ 4.3 line suggests that it is due to Fe 3+ ions with rather large crystal field splittings [18][19][20] .In a weak magnetic field and for the extreme case of rhombic symmetry, where the ratio between the zero-field splitting parameters is E/D = 1/3, an intense, isotropic and sharp absorption line is expected with g = 30/7 = 4.28.Small departures from the extreme rhombic condition will manifest themselves in three distinct g-values with average of about 4.3.This may lead to a broadening or can be seen in the structure often observed on the g = 4.3 feature.
Conclusions
An important defect in sillenite crystals, such as BTO, has been identified by Optical Absorption and EPR results, and it consists of an Bi 3+ ion associated with a hole.The disappearance of this defect, after doping with vanadium, observed by optical and magnetic techniques, can be explained by previous models.Results, in here, allowed to conclude, unequivocally, that the valence state of vanadium ions in BTO crystals is 5+.
Figure 3 .
Figure 3. EPR spectra of BTO and BTO:V samples.Measurements were taken at 20 K, with the magnetic field oriented parallel to [110] and [100] directions.G-values were measured at the zero crossing point of the lines.Accuracy on the g-values is limited by baseline position error, and estimated to be ±0.0005. | 3,875.2 | 1999-04-01T00:00:00.000 | [
"Materials Science"
] |
Energy-dispersive X-ray diffraction system with a spiral-array structure for security inspection
Energy-dispersive X-ray diffraction (EDXRD) is a promising technique for detecting drugs and explosives in security inspections. In this study, we proposed an EDXRD structure with a spiral-array of detectors that can be used for the detection of thick objects. The detectors are configured to share the same diffraction angle, and the detection area of the system is multiplied along the optical axis. Based on the spiral-array structure, an experimental system with 5 CdTe detectors was established. Experimental results demonstrate that the accurate data can be acquired at different positions within the 250-mm detection area, and the data measured by 5 detectors have a good consistency. This work may provide a new and commercial method for the detection of thick luggage in the field of security inspection.Energy-dispersive X-ray diffraction (EDXRD) is a promising technique for detecting drugs and explosives in security inspections. In this study, we proposed an EDXRD structure with a spiral-array of detectors that can be used for the detection of thick objects. The detectors are configured to share the same diffraction angle, and the detection area of the system is multiplied along the optical axis. Based on the spiral-array structure, an experimental system with 5 CdTe detectors was established. Experimental results demonstrate that the accurate data can be acquired at different positions within the 250-mm detection area, and the data measured by 5 detectors have a good consistency. This work may provide a new and commercial method for the detection of thick luggage in the field of security inspection.
I. INTRODUCTION
In recent years, the smuggling of explosives, drugs, and other contraband has become a major threat to modern social security. There is a growing need to provide rapid, nondestructive material characterization of objects hidden inside baggage and parcels. 1 The main method of security inspection, based on X-ray transmission imaging, suffers from a relative difficulty in detecting low-Z eff materials, such as explosives, drugs, and commonly used organics, because of their similar densities and atomic numbers. 2,3 Energy dispersive X-ray diffraction (EDXRD) is a promising technique for material characterization, as it produces a diffraction pattern that provides a unique fingerprint of the diffracting crystal. 4 As drugs, explosives, and most contraband are crystalline, the diffraction technique is suitable for material identification in security inspections. According to Bragg's law, the interplanar spacing d of the crystal can be calculated from the wavelength λ of the radiation and the diffraction angle θ. If a sufficient subset of the spacing d can be derived, then material characterization becomes possible. 5 EDXRD uses a polychromatic source to generate X-rays with a wide range of λ and keeps the detector at a fixed angle to receive the photons at a specific diffraction angle. In this method, energy resolution of the diffraction spectrum (equivalent to the angular resolution of the system) is the most important feature for the identification of materials. To improve the energy resolution, the angular spread should be decreased. Thus, narrow collimators are used to provide the collimation of incident and diffracted beams. However, the collimation reduces the flux of the beam 6 and limits the detection area of the system. To improve the efficiency of the detection, several organizations have developed diverse array-detector EDXRD systems. For the prototype DILAX system from UCL, 20 cadmium zinc telluride (CZT) crystal semiconductor detectors are organized into a sector array, with the detection areas of the individual detectors arrayed sequentially along the Y-axis (set the X-, Y-, and Z-axes to represent the length, width, and thickness of the object, respectively). 7 This arrangement gives the DILAX system the capability to cover objects up to 200 mm wide. The diffraction image of the detected object can be obtained with a linear scan movement along the X-axis. This system can detect objects with a thickness less than 60 mm, such as laptops and handbags. For the thicker objects, such as luggage, GE Security proposed the XDI system with a 2D pixelated energy-resolving detector. The pixelated detector can analyze the voxels lying on the YZ planar of the object simultaneously. 8,9 This system has high detection efficiency and can detect objects of ARTICLE scitation.org/journal/adv large thicknesses with the object movement along the X-axis. However, the 2D pixelated detector is difficult to manufacture and very expensive. For the detection of thick objects, an economical detection method is using the transmission imaging to determine the region of interest (ROI) on the XY plane of the suspicious luggage, and then using EDXRD method to cover the detection along the Z-axis of the material in the ROI for further inspection. Therefore, it has much potential to develop the array-detector system with commercial off the shelf (COTS) and low-cost detectors that can cover a large-thickness detection area.
In this paper, we propose an EDXRD system with a spiralarray structure to solve the problem of thick object detection in the field of security inspection. Experimental results show that the effective diffraction profiles of materials can be obtained at any positions within the 250-mm detection area. The energy resolution of the detector is 1.5 keV at 122 keV. The energy resolution of the spectra measured by the spiral-array system is about 2.5 keV at 24.9 keV (equivalent to the system angular resolution of 0.5 ○ ), and the spectra measured by the 5 detectors have a good consistency.
II. THEORY OF EDXRD
When coherently scattered photons interfere within a material, X-ray diffraction occurs, and a series of peaks is produced, according to Bragg's law, where λ is the X-ray wavelength, d is the interplanar spacing in the material, θ is the diffraction angle, and n is a positive integer specifying the order of diffraction. Since d depends on the structure of the material, the diffraction profile has a characteristic pattern, and identification of the material becomes possible when a sufficient subset of interplanar spacing d can be obtained. In the EDXRD method, a polychromatic X-ray source is used, the scattering angle is fixed by a diffraction cell, and the diffraction profile is measured using an energy-resolving detector. 10 Thus, Bragg's law can be converted to the following equationin terms of the X-ray energy: where h is Planck's constant, c is the velocity of light, and E is the energy of the incident X-rays. In an EDXRD spectrum, the combination of all the peak positions and intensities provides a unique spectroscopic "fingerprint," from which the material can be identified.
III. DESIGN OF THE EDXRD SYSTEM
A. Single-detector system For security applications, the single inspection of the system with a larger detection area enables inspection of more materials, which makes the system more efficient. To achieve a large detection area, the EDXRD system usually employs array detectors. The detection area of the whole system is composed of the detection area of each detector. Thus, the size of the detection area of a single detector is the major factor that affects the size of the detection area of the whole EDXRD system.
An experimental setup of a single-detector EDXRD system is shown in Fig. 1. The X-ray beam emitted from a source is incident upon the sample through the primary collimators. The scattered photons interfere within the sample, and the diffracted beams are radiated. An energy-resolving detector acquires the diffracted photons through the secondary collimators. The overlapping region between the incident beam (green area) and the accepting area of the detector (red area) is the effective detection area of the EDXRD system (yellow area). The approximate size of the detection area can be calculated using the geometrical parameters of the system, where LD is the length of the detection area along the Z-axis and WD is the width along the Y-axis. W P1 and W P2 are the aperture widths of the primary collimators P 1 and P 2 ; W S1 and W S2 are the aperture widths of the secondary collimators S 1 and S 2 ; L P1 , L P2 , L S1 , and L S2 are the distances between P 1 and P 2 , P 2 and the diffraction center, the diffraction center and S 1 , and S 1 and S 2 ; and θ is the nominal diffraction angle. The width of the detector crystal W det is commonly larger than the aperture widths of the secondary collimators. Thus, we used a set of Soller-slit collimators as the secondary collimators to utilize the detector fully. As shown in Fig. 2, the length of the effective detection area L det can be approximately increased to For a single-detector EDXRD system, the length of the detection area L det is generally around 60 mm (when the width of the detector crystal W det is 5 mm and the diffraction angle is 5 ○ ). Using this system, thin objects such as laptops and handbags can be detected easily. However, the complete examination of thicker objects (such as luggage) requires multiple scans at different thicknesses. For security applications, high throughput is a major requirement and the scanning time must be minimized. Therefore, there is a need for a system capable of detecting large-thickness objects.
B. Array-detectors system
In this paper, we describe an EDXRD system to examine large-thickness objects. This system employs a set of detectors arranged in a spiral array, each detector sharing the same diffraction angle θ. As shown in Fig. 3, by placing the detectors at different distances from the optical axis, the corresponding effective detection areas are located at different positions along the optical axis. As the number of detectors is increased, the detection area of the whole system increases, enabling the array to achieve large-thickness detection.
Because the mechanical size of a conventional detector is larger than the detector areas (the size of the crystal), setting the detectors at different distances from the optical axis in a straight line would cause the gaps in the detection area. To use as many detectors as possible, the detectors are arranged in a spiral pattern around the optical axis in 3D space (i.e., around the Z-axis), while the distance between an individual detector and the optical axis is increased gradually, as shown in Fig. 4(a). If the total number of detectors is N, the angle between the working planes of two adjacent detectors is α = 360 ○ /N. Assuming that the perpendicular distance between the first detector and the optical axis is L 1 and that the distance along the optical axis between the centers of the detection areas of adjacent detectors is D (D should be slightly less than L det , since there is hardly any signal from the material at the edge of the detection area), the perpendicular distance between the i-th detector and the optical axis is Let us establish a coordinate system in the plane in which the individual detectors are located and take the intersection of the optical axis and that plane as the origin of coordinates. In this system, the coordinates of the first detector's position in the XY plane are (L 1 , 0) and the coordinates of the i-th detector are ). The length of the whole system's detection area is N * D.
Because the EDXRD system with spiral-array detectors uses multiple independent detectors to detect the diffraction spectra, the consistency of the spectra measured by these detectors is a key factor in the system. The consistency of the spectra depends on the energy resolution and collecting efficiency of each detector. Using the simulation model established in the previous study, 11 we can calculate the energy resolution and collecting efficiency of each detector separately and obtain the simulated diffraction spectra of materials.
C. Experimental setup
We conducted our experiments using a tungsten-target X-ray tube (Varian, NDI-225-22). The diameter of the focal spot of the source was 5.5 mm, and the radiation coverage was 40 ○ . The source was operated at a voltage of 80 kV, generating a polychromatic beam with energies in the range from 0 to 80 keV. The source current was 25 mA. As shown in Figs. 3 and 4, primary collimation of the incident beam was provided by the two collimators P 1 and P 2 , with P 1 (having a 1-mm diameter aperture) placed next to the source collimator and P 2 (with a 1.3-mm diameter aperture) placed at a distance of 355 mm from P 1 . The collimated X-ray beam was directed onto the sample, which was placed at a distance of 165 mm from P 2 .
We employed 5 detectors to acquire the diffracted photons (Amptek, AXR/PA-230), and we arranged the detector configuration as shown in Fig. 4. Each detector consisted of a 5 × 5 × 1-mm 3 CdTe crystal, a preamplifier, and a housing. The typical energy resolution was better than 1.5 keV full width at half maximum (FWHM) at 122 keV ( 57 Co). Each detector was mounted on a two-stage thermoelectric cooler, and it can operate in a conventional environment without an external cooler. We used five sets of Soller-slit collimators to collimate the diffracted beams. The Soller-slit collimators contained 10 internal tungsten-steel plates, 100 mm long and 0.5 mm thick, with 0.5-mm spacing between them. Each set of Soller-slit collimators was closely connected to the corresponding detector, and both were mounted on a base plate. We placed each detector and its Soller-slit collimators inside a lead enclosure to reduce background noise. As shown in Fig. 5, we designed a pentagonal prism according to the distances between the detectors and the optical axis. The distance between the orthocenter and the bottom edge of the prism .20 mm, respectively). Each detector was connected to the prism by a wedge, which fixed the diffraction angle of the EDXRD system. In this setup, the diffraction angle was 5 ○ . Thus, the length of each detector's detection area L det was about 57 mm, and the length of the system's detection area was about 250 mm. The center of the prism was placed along the optical axis. The distance between the bottom of the sample and the front of each Soller-slit collimator was 800 mm.
IV. RESULTS AND DISCUSSION
A. Performance of the EDXRD system In order to verify the performance of the EDXRD system, we measured a drug simulant placed at different positions in the effective detection area. We chose paracetamol (C 8 H 9 NO 2 ) as the simulant for its complex crystalline structure, with several main peaks in the energy range from 20 to 50 keV, which is similar to many drugs. The paracetamol is in the form of pills, about 20 g in weight and packed in a plastic bag. We measured the diffraction profiles at different positions along the Z-axis (20, 70, 120, 170, and 220 mm from the front edge of the detection area). The integration time was 30 s. Figure 6 shows the spectra measured by each corresponding detector at different positions (black line), and the simulated spectra corresponding to the positions (red line). We applied an FFT smoothing filter to each measured profile and filtered out the portion with a relative intensity less than 0.1 to reduce the background noise. The intensity of each profile was normalized to the intensity of the highest peak.
As shown in Fig. 6, the spectra measured at different positions have good consistency in the positions of the diffraction peaks. The full width at half maximum (FWHM) of the two main peaks (at 24.9 keV and 37.8 keV) of each diffraction profile is shown in Table I. Assuming that the energy resolution of the diffraction profiles is ΔE/E, E is the energy corresponding to the position of the diffraction peak, and ΔE is the FWHM of the peak. The average energy resolution of the diffraction spectrum measured at 20 mm is 0.085, and the average energy resolution of the simulated spectrum is 0.070. The average energy resolution of the spectra measured at 70 mm, 120 mm, 170 mm, and 220 mm is 0.108, 0.089, 0.108, and 0.103, respectively. The average energy resolution of the corresponding simulated spectra is 0.070, 0.073, 0.075, and 0.079. The farther from the front edge of the detection area, the lower the energy resolution of the simulated profiles, but the change is small. For the measured profiles, the energy resolution is lower than the value of the simulated profiles. The average energy resolution of the 5 detectors is 0.099, and the variance is 0.93, which means that the energy resolution of the EDXRD system is relatively stable and the spectra measured by the 5 detectors have relatively good agreement. The difference in the diffraction spectrum is mainly affected by the incomplete integral, the incoherent scattering photons, and the background noise. Overall, the EDXRD system with the spiralarray detectors can effectively obtain the diffraction spectrum of the material within the 250 mm detection area.
B. Detection of concealed object
For security applications, the detection of concealed objects is very important for an EDXRD system. The general EDXRD equipment requires multiple scans along the thickness of the baggage to obtain the diffraction spectrum of the material hidden in the baggage. Instead, the EDXRD system with spiral-array detectors only needs a single point detection on the ROI of the baggage. A 10 cm thick suitcase with drugs and common powders inside was used to verify the ability of the EDXRD system for concealed objects' detection. A layer of clothes was put in the suitcase to increase the background complexity. The detected samples were (a) flour, (b) methcathinone, (c) red-bean powder, and (d) milk powder. The ROI was manually set for the sample detection, and the detection time was 30 s. (In practical applications, the ROI can be determined by transmission imaging technique.) The diffraction profiles of the samples in the suitcase are shown in Fig. 7. The diffraction spectrum of the methcathinone, which has distinct pattern of peaks, is significantly different from the others. The EDXRD system can obtain the
ARTICLE
scitation.org/journal/adv diffraction spectra effectively with the occlusion and scattering of the background. For the 10 cm thick suitcase, the EDXRD system with spiral-array detectors can cover the detection of the entire materials within the ROI, which improves the efficiency of the inspection.
V. SUMMARY
In this study, we proposed an EDXRD system with a novel spiral-array structure, which was capable of inspecting thick luggage. The CdTe semiconductor detectors were selected in this system, which were COTS and more low-cost than the pixelated detector. The detection area of the system was multiplied by arranging the detection area of each detector sequentially along the optical axis based on the spiral-array structure. The experimental system which can cover the 250-mm detection area along the Z-axis was established with 5 detectors. We performed experiments using paracetamol to simulate an illicit drug, and the results show that the system can obtain the diffraction spectra effectively and accurately anywhere within the detection area. The energy resolution of the system was about 0.099, and the spectra measured by the 5 detectors had a good consistency. Experimental results with concealed samples showed that the system can detect objects hidden inside luggage effectively. In future work, the transmission imaging technique can be combined to determine the ROI of the suspicious baggage and the detection method is promising and commercial in the field of security inspection. | 4,517.8 | 2019-12-01T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Chemical-technological research and radiocarbon AMS dating of wall painting fragments from the ruins of the XIIth-XIIIth centuries AD church from archaeological excavations in the city of Smolensk, Russia
In 2012, the ruins of a temple of the old Russian period were found during archaeological research in the medieval historical territory of Smolensk. The archaeological complex consists of the ruins of an ancient temple, built in the middle of the XIIth century AD, and adjacent to it from the South-West of the territory, which housed the remains of the market XI-turn XIII-XIV centuries AD and necropolis XIII–XVI centuries AD. Chronologically diverse use of the investigated territory up to the XVIth century AD was determined by the nearby Church. Approximately 1000 fragments of wall paintings, 5 fragments window glass and 4 glazed floor tiles were found near the ruins of the Church building. For the first time fragments of wall paintings medieval of Old Russian temple were dated by the AMS radiocarbon dating and went through chemical-technological research (analysis of the plaster foundation, the definition of used pigments) by X-RAY diffractometry (XRD) and scanning electron microscopy (SEM/EDS). Optical microscopy also was used for visual observations of the samples of the wall painting. According to the results of the radiocarbon analysis, the fragments of the wall paintings were divided into two chronological groups. The earlier belongs to the last quarter of the XIIth–the first quarter of the XIIIth century AD. Samples of the wall paintings from the second group are dated back to the third quarter of the XIIIth century AD. A narrow range of Accelerator Mass Spectrometry (AMS) radiocarbon dating of fragments of the murals, obtained from carbonates due to the presence of high content of C14 isotope in carbon of the plaster, is simultaneous in age to the moment of creation of the plaster base. As a result of chemical and technological researches of the fragments of the wall paintings it was established that the plaster basis of the fragments of the wall painting consists of two layers. The plaster base contains organic binders. Chemical and technological analysis of pigments presents the following results: (1) the basis of the blue paint layer is ultramarine (mineral) and anatase (mineral); (2) the basis of the green paint layer is celadonite (mineral); (3) the basis of the brown paint layer is ochre (clay); (4) black particles in the colorful mixture of brown is an organic wood coal pigment.
Introduction
Any remains of the temples, constructed in the period before the Mongol invasion to the lands of Rus' in 1237 AD-1240 AD, are considered rare archaeological findings. However, none of the previously known archaeological complexes associated with ancient Russian temples was examined using a complex of natural scientific research methods, including the parallel implementation of chemical and technological analysis of samples of wall paintings (the study of plaster base; determination of pigments used in paintings) and radiocarbon dating of carbon-containing mural fragments and masonry elements of the temple by atomic mass spectrometry.
In 2012 AD, 61 years after the last similar finding, the remains of a previously unknown old Russian church were unexpectedly discovered during archaeological excavations in the city of Smolensk at Krasnoflotskaya Street 1-3. The reconstructed area of this four-column single-domed temple with galleries is about 250-300 m 2 . In the context of the historical topography, it was located on the site of the medieval territory named « Pyatnitsky End » , on the right bank of the Pyatnitsky creek, which ran along the bottom of a ravine with the depth of 7-10 m. The territory on which the church was found adjoins the outer side of the tracing line of the destroyed section of the fortification wall of the Smolensk Kremlin, built in 1595 AD-1602 AD, not far from the place where the now-defunct Pyatnitskaya tower was previously located (Fig. 1), and is located on the left bank of the Dnieper river at a distance of about 150 m from the riverbank.
In 2012 specialists of the research team of the Capital Archaeological Bureau (« CAB ») carried out a preliminary clearing of the fragments of the found temple, after which these fragments were studied by a team of architectural archaeologists of the Institute of Archaeology of the Russian Academy of Sciences. At this time, the research group « CAB » conducted archaeological excavations on the territory adjacent to the temple. The study area near the temple (excavation 1) consisted of "Introduction" and "Results and discussions" sections with the total area of 205 m 2 , measuring 10 × 10 and 10 × 10.5 m (Fig. 2), divided into squares of 2 × 2 m. Lithological deposits in the excavation with the capacity of up to 3.2 m were composed of sandy loam of brown and grey colour shades. The preserved cultural layer containing artefacts and buried objects of the Xth-XVIth centuries AD: remains of buildings of a commercial space of the XIth-XIIIth century AD in the shape of 95 pillar holes and a necropolis of the XIIIth-XVIth centuries AD, which included 91 burials (Fig. 2). The cultural layer was disassembled according to nominal layers (2-9) with the thickness of 0.20 m each with spatial instrumental organization of objects and findings [1,2].
The most significant site going through the eastern wall of the investigated area (excavation 1, line of squares 10-15-20-25) was structure 4, with the area of 6.4 m 2 (Fig. 2). It was the south-western part of the galleries of the old Russian church, built chronologically later than the church itself, and dated back to the XIIIth century AD. The foundation of the galleries, cleared on the area of 6.4 m 2 and consisting of elongated boulders with smoothed edges with the longitudinal size of 11-23 cm, was cut into the ground from the level of layer 6 (− 108 to − 117 cm from the nominal zero) to the depth of 0.5 m to the levels, corresponding to layers 7-8.
A large number of objects related to the arrangement and decoration of this church was found in layers 6-8 of excavation 1: fragments of a wall painting, window glass, a part of a clay ceramic voice speaker (a clay jar built into the masonry of the walls, reversed into the inner part of the building. Voice speakers were used to reduce the load on the walls of buildings and to improve the acoustic properties of the premises), details of lead frames from windows and fragments of glazed tiles. In 2013 after the completion of the work of architectural archaeologists, the research team of « CAB » carried out the conservation of the discovered remains of the church, during which the samples of coal and wood from the masonry and structures of the Old Russian temple were selected for radiocarbon AMC dating. Wood from structures of the church was collected for dendrological analysis too. Pieces of charcoal selected for research were part of the masonry mortar of the temple for which they were (2), fragments of glazed tiles (3,4), detail of lead frame from window (5) from the church specially prepared. Fragments of the wooden boards from the temple taken for the research were severely charred, which indicates a fire in the temple, which destroyed its wooden structures.
Parts of stained glass, lead clips, glazed tiles ( Fig. 3), oak board from the temple (Fig. 4) and fragments of wall paintings (Fig. 5) found during excavation 1, allow us to get a partial picture of its original arrangement. Wooden structures inside the church, according to the results of the dendrological analysis, were made of oak, floors and, perhaps, the elements of walls were decorated with glazed tiles of yellow-brown and dark red colors, small pieces of glass in lead clips about 10 cm long were used in the construction of the windows and the main color tones the wall paintings were dark blue and green.
Sample research methodology
The wall painting fragments were examined using radiocarbon AMC dating, XRD, SEM/EDS. Optical microscopy also was used for visual observations of the samples of murals. Several wall painting samples were used for radiocarbon AMS Dating. We had a possibility to study another 12 fragments of the wall paintings ( Fig. 6) with an average size of about 5 cm 2 and 1.8-2.3 cm in thickness, representing the remains of non-reconstructed wall paintings from the background of decorative compositions and remains of clothes of the characters once depicted.
Radiocarbon AMS dating
To determine the radiocarbon age of the wall painting fragments, radiocarbon AMC dating of 4 samples of mural fragments, was carried out in the Center for Applied Isotope Studies (CAIS) at the University of Georgia, USA. The samples of carbonates and coal from the collected fragments of the mural were analyzed by the AMS technique according to the established method, normally used for fine art objects [3]. The samples were treated with 5% HCl at 80 °C for 1 h, then washed with deionized water through the fiberglass filter, and rinsed with diluted NaOH to remove possible contamination by humic acids. Samples were then treated with diluted HCl again, washed with deionized water, and dried at 60 °C. For AMS analysis, the cleaned samples were combusted at 900 °C in evacuated, sealed ampoules in the presence of CuO. The resulting carbon dioxide was cryogenically purified from the other reaction products and cata-lytically converted to graphite using the method of Vogel et al. (1984) [4]. Graphite 14C/13C ratios were measured using the CAIS 0.5 MeV accelerator mass spectrometer. The sample ratios were compared to the ratio measured from the oxalic acid I standard (NBS SRM 4990).
Afterwards, the results of radiocarbon AMC dating of fragments of the wall paintings were compared with the results of radiocarbon dating by a similar method of samples taken from carbon-containing elements of the architectural remains of the temple-lime mortar masonry and burnt oak boards.
X-RAY diffractometry
In order to determine the composition and structure of the plaster base of the murals and pigments of blue, green, brown colors which were used in the creation of the murals of the church, samples no 1-12 ( Fig. 6) were The fragments of the wall paintings from the church which were studied. The red points indicate the analysis areas that were considered in the study of the composition of the paint layer for XRD and SEM/EDS methods analyzed. The studies were done on an ARL X'TRA X-ray diffractometer with Cu Kα radiation (copper anode) and a 35 kV accelerating voltage; beam current was 40 mA; the angle range was 3-80°; at a 0.02° angle step. The weight of each of the samples was 10 mg. The compounds were identified using the PDF-2 database of the International Center for Diffraction Data (ICDD). When studying sample no 1 the diffractogram of the original fragment of the wall painting was taken, the ink layer on the surface was intact. From the surfaces of the wall painting fragments (samples no 1-12), a sample of the paint layer was mechanically taken out, which partially captured the trowel layer of the mural plaster base, which included kaolinite. The resulting sample was crushed to powder with a particle size of not more than 20 microns. The following sample of the plaster base with the size of 2 × 2 mm was taken from the internal parts of these fragment of the mural (sample no 1), which had not been exposed to the air: for this purpose, the wall painting sample was mechanically cleared from surface contaminants and dust.
Scanning electron microscopy
For the purpose of elemental analysis of the plaster base of sample no 1 and the qualitative comparison of the composition of the paint mixture of samples no 1, 3-12 ( Fig. 6) these samples were analysed for on a Quanta 3D 200i scanning electron-ion microscope manufactured by FEI (Holland). The study was conducted in low vacuum mode in water vapor to avoid problems with electric charging of non-conductive samples, using reference samples in accordance with the algorithm proposed by Pukhov and Kurbatov [5]. When performing SEM/EDS analysis, the quantitative calculation of the element composition can be performed correctly under the following conditions: the sample has a homogeneous element composition within the scanning area of the electronic probe; the sample surface does not have roughnesses exceeding 30-300 nm in size, depending on the accelerating voltage used, i.e. the sample surface must be polished when analyzing at sufficiently large area.
Qualitative analysis of plaster base for sample no 1 was carried out in seven research areas (Fig. 10a). Sample preparation for elemental analysis consisted of sawing sample no 1 with a diamond wheel until it formed an even cut. No special preparation of samples to determine the chemical composition of pigments located on the surface of samples no 1, 3-12 was carried out. Due to subtlety and fragility of the colourful layers on the studied samples of the wall painting, it was impossible to polish them in the process of sample preparation. Therefore, the SEM/EDS analysis was held on the natural uneven surface of the paint layer, which prevented us from obtaining the data for the appropriate quantitative calculation of the elemental composition of the studied paint layers on fragments of the wall painting.
Optical microscopy
Wall painting samples no 1, 7-9 ( Fig. 6) were examined under a stereomicroscope LEICA MZ 125 (Germany) in a simple reflected and transmitted polarized light at a magnification of 40 times and were photographed from different angles using a KEYENCE VH-Z100UR (Microscope Multi Scan; Japan) optical microscope. The 3D magnification of the sample no 1 is demonstrated at the axes of the image. The structure of the plaster base of the wall painting fragments was studied too, as well as the pigments.
Radiocarbon AMS dating
The radiocarbon dates of the mural fragments from the excavations in Smolensk contained in the plaster base, made from carbonates and coal, turned out to be about 30-160 years younger than the dates obtained for the carbon-containing samples from the masonry church solution, which has a calibrated 2σ radiocarbon age around the middle of the XII century AD. (UGAMS-15774 P70 900 ± 20; UGAMS-15775 P85 940 ± 20) (Fig. 7). According to the results of radiocarbon AMC analysis, the wall painting fragments were divided into two chronological groups.
The earliest of them (Fig. 8, Table 1) is dated back to the last quarter of the XII-the first quarter of the XIII century AD (UGAMS-16215 no 6/2 880 ± 20; UGAMS-116216 no 4/8 880 ± 20; UGAMS-16217 no 1/10 890 ± 20). In general, it coincides in age with the radiocarbon date of oak wood coal, found inside the temple (UGAMS-15776 P75 960 ± 20, 2σ 1056 AD (2.8%) 1076 AD, 1154 AD (92.6%) 1224 AD) (Figs. 4, 7).The radiocarbon age of the second group of wall paintings (Fig. 8, Table 1) belongs to the third quarter of the XIII century AD (UGAMS-112563 no 12/1 740 ± 20). A normal distribution of radiocarbon AMS dates was recorded: samples of materials used in the construction of the church studied in a similar way prove to be older than similarly studied samples of the wall painting fragments.
The same extremely minimal chronological error of ± 20 years was obtained for all AMC radiocarbon dates, which in fact is very unusual. This result is undoubtedly the consequence of the high quality of the used dating material, selected to minimize the influence of environmental factors, when all the pieces were taken from the inside of the fragments of the wall painting. We believe that the similar narrow value of the chronological error for all dated samples of wall painting (no 6/2, no 4/8, no 1/10, no 12/1) is explained by their highly uniform carbon saturation, dating back to the creation of the solution that served as the basis for the wall painting-for example, atmospheric carbon that got into this solution in the process of obtaining slaked lime.
Results of X-RAY diffractometry
A qualitative analysis of the fragment of the wall paintings (sample no 1) obtained using a diffractometer showed that its plaster base mainly consists of (CaCO 3 ) in the form of calcite and aragonite, which are known to have the same chemical composition, but at the same time possess different crystal lattice [6]. Also a small amount of quartz (SiO 2 ) was detected (Fig. 9a). The upper plaster layer of the mural plaster base consists of kaolinite (Al 2 (Si 2 O 5 )(OH) 4 ). In samples no 4, 5 with blue colorful layer, ultramarine (Na 4 Al 6 Si 8 O 23 S 4 ) (Fig. 9b), in samples no 1, 10-12 ultramarine (Na 4 Al 6 Si 8 O 23 S 4 ) and anatase peak (TiO 2 ) were detected (Fig. 9c). In the samples no 2, 6-9 of the mural, in the zone of brown colorful layer there is a high content of calcite (CaCO 3 ) and quartz (SiO 2 ), as well as traces of some amounts of kaolinite (Al 2 (Si 2 O 5 )(OH) 4 ) (Fig. 9d). In sample no 3 of the wall painting, the zone of green colorful layer was explored. When conducting XRD analysis, glauconite, celadonite and chromceladonite have similar spectral characteristics. XRD analysis of sample no 3 gave the following result: (K(Mg,Fe,Al) 2 (Si,Al) 4 O 10 (OH) 2 ). That is, based on the results of the XRD analysis of the green pigment, it is celadonite (Fig. 9e). Glauconite and celadonite are distinguished by the intensity of green color provided by Fe; there is no clear boundary between these substances from the mica category. As it is known, celadonite has a quality to retain color and keep it unchanged under the influence of air and light, and it was often used as a pigment mainly in wall paintings from Roman period [7][8][9] until XIV centuries in wall paintings in churches in Europe [10].The green lands as the pigments, mainly consisting of celadonite and glauconite, appear in Byzantine wall painting culture [11] and fragments of wall painting from Smolensk are undoubtedly related to them. Samples no 1-12 in the paint layer also contain trace contents of (SiO 2 ) and (CaCO 3 ), which were probably captured in samples of those paint layers from the plaster base during their selection (Fig. 9a-e).
Results of elemental analysis using scanning electron microscope
Qualitative analysis of the samples of the wall paintings using a scanning electron microscope confirms the data obtained using X-ray diffractometry. Elemental analysis demonstrates the presence of Al, Si, S, Na in the composition of samples no 4-5 which ensures the presence of ultramarine (Na 4 Al 6 Si 8 O 23 S 4 ) (Fig. 11a), as for the series of samples no 1, 10-12, in addition to the abovementioned elements, some Ti was also recorded, which indicates the presence of anatase (TiO 2 ) content in them (Fig. 11b). For samples no 2, 6-9, Al and Si were identified, which indicate kaolinite in the shape of aluminosilicates (Al 2 (Si 2 O 5 )(OH) 4 ) which give brown color (Fig. 11c). In sample no 3, Mg, K, Si, Fe, Al were recorded, which corresponds content of celadonite or glauconite in the green pigment (Fig. 11d), but by XRD analysis it was determined that the green pigment is celadonite. Results of chemical elemental analysis for the plaster base of sample no 1 (Fig. 10a) demonstrate its typologically similar (Table 2), although heterogeneous composition with a high content of Ca and C as the main components, the fact if which indicates the presence of CaCO 3 in the sample. In addition, an extremely high carbon content of 8-28 Wt % may possibly indicate its organic origin, i.e. it appeared in lime solution after adding organic binders, or could have got into the lime from the atmosphere when it was extinguished. Manganese present in sample no 1 probably indicates the presence of clay materials in its composition in which it forms microcrystalline aggregations. (Table 2) [12]. The presence of Na, Al, and Si together with Mn allows one to presumably consider the concretion, studied in zone 1 of sample no 1, as manganese illite, belonging to the group of mica minerals of clay deposits (Figs. 10b, 11e, Table 2). The possible high content in carbon of C14 isotope, simultaneous in age to the moment of creation of the plaster base, undoubtedly, was higly important to the accuracy of its radiocarbon AMC dating. This carbon, located in the lime layer of the mural, cannot be associated with the lime for the following reason: the results of radiocarbon AMC dating dated the age of the wall painting fragments back to the XIIth to XIIIth centuries. It is known that radiocarbon dating occurs on carbon isotope C14, which has a half-life of 5.700 ± 30 years (see Nubase-2016 database). The limestone formation period occurred approximately 300,000,000-150,000,000 years ago. Consequently we can state that no carbon suitable for the radiocarbon dating method could have been contained in plaster lime. However, the plaster is easily dated by the radiocarbon method and therefore contains a large amount of carbon introduced at the time of creation of the wall painting. Therefore the introduced carbon cannot have been a structural part of lime.
Results of visual observations using optical microscopes KEYENCE VH-Z100UR and LEICA MZ 125
The qualitative analysis of sample no 1 examined on a diffractometer and a scanning electron microscope, established the presence of ultramarine and anatase. The two-layer structure of the paint layer on sample no 1 was captured by 3D photograph, indicating the scale of the thickness of the paint layers using optical microscope KEYENCE VH-Z100UR (Fig. 12). On this 3D photo the dark underlayer under the blue pigment consists of anatase. The blue-colored ink layer unevenly lying on top of the dark anatase underlayer, is ultramarine.
Optical microscopy method, by a KEYENCE VH-Z100UR and a LEICA MZ 125 optical microscopes, identified coal to form part of the paint layer in samples no 7, 8, 9. The visual criterion for determining coal in this case was the fact that the structure of black particles corresponded to the structure of wood materials. Results of the research by SEM/EDS and XRD methods did not detect a single pigment of black colour. It is known that organic pigments cannot be determined by SEM-EDS and XRD methods. Nevertheless, it is clearly defined when examining samples with the optical microscope in sample no 7-9. Consequently we can assume that the black pigment has organic nature, and identify it as coal, which particles have a fibrous composition, characteristic of the structure of wood.
The results of sample studies of plaster base sections from fragment no 1 using LEICA MZ 125 microscope revealed that the plaster base of the temple murals was two-layered. The lower, main layer of plaster consisted of unevenly mixed lime and clay-limestone rocks (Fig. 13) with a thickness of about 1.20 cm, with remains of straw filler 0.15 cm long and 0.02 cm thick. The second upper layer of plaster with a thickness of 0.15-0.20 cm was
Conclusions
Consideration of the received radiocarbon AMC datings, allows us to outline the chronology of the temple construction in the ancient Russian period. According to the results of radiocarbon AMC dating, the construction of the temple took place around the middle-in the third quarter of the XIIth century AD. The wall paintings in the temple were created in the last quarter of XIIth-the first quarter of the XIIIth century AD. The likely renewal or addition to the wall paintings of the temple, determined by the presence of fragments of the wall paintings of the group with a later radiocarbon age, occurred in the third quarter of the XIIIth century AD, which may have been related to the extension of the galleries of the church building.
The results of determining the composition of colourful mixtures by X-Ray diffraction, electron microscopy (SEM/EDS) and optical microscopy revealed that certain minerals (ultramarine, anatase, celadonite), clay (ochre), and organic wood coal pigment were used as pigments for creation of the murals under study. XRD method of studying of the green samples showed that the base of the green colourful layer is formed by celadonite. Anatase was first recorded as a pigment in ancient Russian wall paintings. Previously, anatase was considered only as a material used to produce titanium whites, which became widespread after the second half of the XIXth century. However, the presence of anatase in the palette of ancient artists was recorded in archaeological objects from different eras and civilizations. This mineral was found among pigments on the territories of Roman villas of the IIth century AD in England [13], as well as on polychrome decor of XVIth century Chinese porcelain originating from Portuguese ships sunk near the Cape of Good Hope [14].
The presence of blue colour ultramarine on the colourful layer of the studied fragments of the wall paintings is one of the earliest examples of this mineral being used in the paintings of ancient Russian temples as it was the most expensive pigment of the Middle Ages [15].
The creations of blue colour were also recorded in the study of XIVth century wall paintings from Patriarchate of Peć Monastery in Serbia, but there it was produced from cheaper pigments-coal and azurite [16]. This is explained by the fact that the clients who ordered those murals were not members of royal family. Expensive ultramarine is present only in the XIIIth century wall paintings of Serbian monasteries Žiča and Mileševa, pained by the order of kings of Serbia who had the financial ability to purchase them [17,18]. In ancient Russian wall paintings of the XIth-XIIIth centuries AD, apart from this finding in Smolensk, pigment ultramarine is present only in Novgorod the Great churches, the customers of which were also princes: it is present on the murals of St. George's Monastery of St. George (built in 1119 AD), in the drum of St. Sophia Cathedral (built in 1045 AD-1050 AD), in fragments of XIIth century AD murals from the altar part of the Nikolo-Dvorishchensky cathedral (built in 1113 AD-1136 AD) [19]. From these historical analogies it can be concluded that the creation of the church of Smolensk was ordered by a prince.
The two-layer structure of the plaster base is confirmed by XRD method and by optical microscopy.
The two-layer structure of the original plaster of the wall paintings foundation also exists in the Church of St. John the Theologian (built in 1173 AD), located 750 meters away from the explored territory. A large amount of carbon, up to 28 Wt%, recorded during elemental analysis on the electron scanning microscope proves the likely presence of an organic binder in the plaster base of the murals. | 6,256.6 | 2020-05-11T00:00:00.000 | [
"Chemistry",
"History"
] |
Reinforcing Health Data Sharing through Data Democratization
In this paper, we propose a health data sharing infrastructure which aims to empower a democratic health data sharing ecosystem. Our project, named Health Democratization (HD), aims at enabling seamless mobility of health data across trust boundaries through addressing structural and functional challenges of its underlying infrastructure with the core concept of data democratization. A programmatic design of an HD platform was elaborated, followed by an introduction of one of our critical designs—a “reverse onus” mechanism that aims to incentivize creditable data accessing behaviors. This scheme shows a promising prospect of enabling a democratic health data-sharing platform.
Introduction
Sharing health data creates value for clinical care, trials, and case studies, as well as an improved knowledge base [1-3] for healthcare researchers and healthcare organizations. Furthermore, it is crucial for advancing health ecosystems [4]. Health data also have immense commercial value [5] for other parties such as the pharmaceutical industry, data analytics providers, insurers, data markets, or business intelligence. It is also relevant for patients who want to control and share their data (e.g., the service digi.me, the crowd-sourced project wiki.p2pfoundation.net/Category:Health, etc.) in their interests, e.g., monitoring of health status, independent health data analysis, or experience sharing in a patient community.
The huge value associated with health data can lead to data misuse, for example, targeted use of ransomware, participation in the black market [6][7][8], and other cybercrimes. The conventional health data infrastructure was not designed for anticipating value-driven data mobility and the associated cyber threats. There is a structural deficiency in the conventional infrastructure on which patch-like remedies only add to the complexity of the challenge.
Obstacles for health data sharing are data silos, lack of appropriate tooling and lack of needed trust. Rather than reinforcing the infrastructure from a traditional view of vulnerability identification, protection, detection and response, in this project, we address the health data infrastructure's structural and functional deficiency to facilitate data mobility across trust boundaries through the concept of data democratization and a corresponding set of theories and technologies to implement the concept. Data democratization is a process of making data accessible to everybody and easying the understanding of that data for expediting decision making and supporting the business process [9,10]. Data democratization requires a strong governance for data and process management as well as a related culture, education, training, and tooling to enable this process irrespective of the actors' domain of expertise and technical know-how. Different contexts and objectives of actors as well as trust antecedents of the actors' environments establish trust boundaries to be overcome by harmonization/mapping of the related policies [11]. A policy is set of legal, political, organizational, functional, and technical obligations for communication and cooperation, defining the intended behavior of a system [4].
Our work aims at defining, architecting, implementing and evaluating a democratic health data infrastructure that is expected to incentivize all parties, including individuals, to prove, negotiate, and configure their rights and duties associated with health data. The conflicts of interest among different parties can be reconciled through a set of automated mechanisms so that data can be mobilized across trust boundaries.
A burgeoning health data sharing scenario could be more integrated and multifaceted than it used to be. Different parties could have different concerns towards sharing health data, e.g., privacy leakage, technical complexity of interoperability and security, lack of incentives, lack of resources and tools, and high cost of multilateral negotiation. The conventional health data infrastructure is insufficient in coping with data protection in an era of data mobility, e.g., accountability across trust boundaries. Moreover, the plan of future health data infrastructure usually only insufficiently considers fundamental logics and rationales (e.g., risk and incentive modeling, rights negotiation, cognitive modeling, etc.) of data mobility besides technical and regulatory compliance. The complexity attributed to a multitude of social and technical factors makes it difficult to make informed decisions to minimize the risk of a data breach while facilitating data mobility.
We are dedicated to architecting and constructing a data transaction model by strikingly practicing the concept of Data Democratization (or say, democratic data sharing). Formally, this indicates two core ideas, which will be followed throughout the design of our HD platform:
•
All stakeholders are treated identically without discrimination. The platform and any constructed protocol would not take into account any player's distinguishing attributes (e.g., size, market volume, profitability, proprietary technology and knowledge, dominance in administrative power or market influence, information sources, etc.) and therefore each player can be treated equally in our platform; • The promotion of fairness as a complementary. Based on the first fundamental, this is also essential when facing the inequivalence reality among each party, especially quite often seen between the individual data subject (DS) and the so-called "digital oligarchs".
State-of-the-art research and ethical and legal efforts have paid extensive attention to the first idea. However, we argue that fairness promotion is also critical concerning data democratization, due to the extremely unequal reality that exists between the individuals and the colossus entity.
For properly responding to the aforementioned challenges and solution weaknesses, we also consider architectural standards as well as related security and privacy specifications from the International Standards Organization (ISO), the European Committee for Standardization (CEN) and Health Level Seven International (HL7). In that context, we have to first mention ISO 23903:2021 [12], the interoperability and integration reference architecture model and framework, but also ISO 22600:2014 [13]. Privilege management and access control, Part 1-3, ISO 21298:2017 [14]. Functional and structural roles, but also the HL7 Privacy and Security Logical Data Model, Release 1, June 2021 [15], all using the ISO 23903 models and principles.
In this paper, we concentrate on a high-level architecture that will meet our design intention. We first provide a brief overview on related works in Section 2. In Section 3, we distinguish different types of stakeholders that would be concerned in our platform based on the differences in motivations, privacy tendencies, functionalities, etc. The definition and description of each derived role are presented in Section 3.1. To improve the universal understanding, in Section 3.2, we provide a mapping between the roles defined in our platform and the roles presented in the European General Data Protection Regulation (GDPR). We deeply consider the hierarchical perspective of our platform and propose, in Section 4, a conceptual architecture to achieve our goal of data democratization. Section 5 illustrates democratic-promoting designs. One of our essential design primitives is a tokenbased approach to fairness promoting mechanism which focuses on facilitating a "reverse onus" during negotiation between two stakeholders with great disparity. We conclude our work in Section 6.
Related Works
The national eHealth infrastructure (e.g., local health authorities' Electronic Medical Record (EMR) systems, north Norway telemedicine infrastructure, and Norsk Helsenett) [16] in Norway has been built since the middle 1990s and is in its design intended for an organization's internal use, which emphasizes localized data retention and confidentiality. The "one citizen-one journal" plan was proposed in 2012 with the laws regarding medical records and health registers, updated in 2015 to facilitate data mobility. The national pilots Helseplattformen and Helseanalyseplattformen [17] were launched in recent years to technically implement connectivity and coordination in data sharing. On the EU level, the effort has so far mainly focused on the technical (e.g., the epSOS project), policy [18][19][20] and legal [21] interoperability towards the EU eHealth strategy 2020 [22]. As regulations are evolving and national laws always differ, the current health data infrastructure builds segregated silos, differing in purposes, data sharing methods, regulatory compliance practice, and the users' roles.
The trend towards preventive and personalized healthcare implies that health data can be collected from non-conventional health data sources, such as patients' devices, living environments [23], and the healthcare industry [24][25][26][27]. These patient-generated health data (PGHD) have frequently not yet been integrated into the national health data infrastructure. We also note that there, so far, exists a trend towards a patient-to-patient [28] crowdsourced information-sharing community, where the generated new health knowledge may be regarded as PGHD too. However, such platforms are usually plagued by insufficient consideration of privacy.
The data breach caused by health IT outsourcing from Helse Sør-Øst [29] in Norway 2017 received massive public attention. Important concerns have been: the lack of risk management for decision making, lack of diligence from local health authorities regarding data protection of outsourced IT operation, and the lack of technical control (effective rights management in this case). The local municipalities may have an even worse situation [30] due to the fact that they have not adopted the national health data infrastructure. In addition to managerial and technical challenges, we may find it hard to consolidate an unambiguous set of "standard" rules and policies patching up all loopholes or fuzzy zones in laws [31], forcing all organizations and states to unanimous consensus [32]. The complexity attributed to legal, ethical, economic, managerial, interoperability, and technical factors makes security policy and decision making a great burden for all parties dealing with health data mobility, which can be seen from cases such as health record selling [33], data sharing with the government [34] bypassing patients, or patient safety endangered by health data access control in emergency [35,36]. An advanced solution for meeting those challenges is the deployment of ISO 23903 with the ontological representation of policies including ethical ones.
Classification of Stakeholders and Matching with Roles Defined in GDPR
The prior task for our work is to distinguish discrepant stakeholders with significant behavior characteristics and interest relationships. We first classify our HD platformrelevant stakeholders into seven types. Then, we present a sample of matching between these types of roles and the roles defined in GDPR.
Stakeholders Classification
The HD platform will "circulate" among diverse stakeholders. Some stakeholders participating in the Health Democratization (HD) project intend to get the health data to enable their service provision, while some others have the right of disposal of the health data. Other stakeholders may tend to provide data processing/storage/analyzing facilities. As follows, we classify these stakeholders into sveen different types according to their contexts, objectives and functions, ruled by related policies. A policy is set of legal, political, organizational, functional, and technical obligations for communication and cooperation, defining the intended behavior of a system [4].
•
Computing resource manager (CRM) The service provider assists each actor in managing computing, storage, and communication resources in facilitating data sharing with other actors.
The CRM service is provided through general computing infrastructure layers to support data sharing activities on the logic and operation layers, and is neither intended nor supposed to have any interest in the semantics layer (e.g., the content or utility of the data).
The actor is supposed to fully represent the interest of the stakeholders it serves. Depending on trust models and other factors, one CRM may serve one single or multiple actors. In the latter case, the CRM may have a conflict of interest when it comes to security and privacy aspects.
•
Data consumer (DC) The actor can access data directly, query a database, or receive data from DS, DG, or DSP to exploit the value of the shared data. It is a destination with which the data are shared.
•
Data generator (DG) The actor directly generates data from a DS or converts sensed signals into formatted data, through biomedical sensing, human recording/reporting, social media, human observation, questionnaire, interview, and other technical or non-technical means. A typical DG can be for instance a health or medical sensor, personal mobile device, speech-to-text generator, online questionnaire, a human being, etc.
•
Data manager (DM) The service provider assists each actor in processing, managing, and exchanging the data with other players.
The DM processes, manages, and exchanges data up to the operation layers, and is neither intended nor supposed to have any interest in the semantics layer (e.g., the content or utility of the data). At the syntax level (data structure, data models, database structure, dataset structure, etc.), operations such as formatting, encoding, decoding, transforming, indexing, pseudonymizing, access controlling, encrypting, decrypting, differential privacyenhancing, content-dependent encryption/decryption, machine learning, data analysis, etc., are included At the binary level (file structure, file management system, etc.), operations such as storing, copying, appending, deleting, encoding, decoding, transmitting, logging, encrypting, decrypting, file format conversion, etc., are included.
DM is supposed to fully represent the interest of the actor/customer served. Depending on applied trust models and other factors, one DM may serve one single or multiple actors. In the latter case, the DM may have a conflict of interest when it comes to security and privacy aspects due to possible trust boundaries.
•
Dataset provider (DSP) The DSP creates and maintains-under the consent given by DS, and possibly the agreement with DG-one or several both syntactically and semantically structured datasets sourced from DS or/and DG, and shares the data with other stakeholders for a datasemantics-dependent purpose consented (in advance or real-time) by DS and harmonized by other involved parties with their rights and obligations. It is a possible source of data provided for sharing. It can also be a destination of shared data.
The DSP differs from DM, as DSP has an interest concerning the content or utility of the data for sharing, while DM does not.
A typical DSP can be: (1) an end dataset provider (e.g., hospital, an Electronic Health Record (EHR) operator, a research institute, etc.); or (2) a proxy dataset provider (e.g., a data portal, a data cache service provider, etc.).
•
Data rights manager (DRM) The DRM assists each actor in managing his/her rights in relation to other actors, i.e., proving, negotiating, and recording the terms and conditions describing the rights and obligations regarding the data to be shared.
The DRM processes data up to the logic layers and is intended or supposed to have an interest in the data semantics (e.g., the content or utility of the data). This can include activities such as risk and benefit analysis, ethic and socioeconomic constraints, multi-party policy reconciliation, computational strategizing and negotiation, rights and obligations updating and recording, etc.
The DRM is supposed to fully represent the interest of the actor/customer served. Depending on trust models and other factors, one DRM may serve one single or multiple players. In the latter case, the DRM may have a conflict of interest when it comes to security and privacy aspects.
• Data analysis service provider (DASP) The actor provides data analysis as a service to DS, DG, DC, or DSP.
Relation with Roles Defined in GDPRSubsection
The relation between the participants or stakeholders (DS, DG, DC, DSP, DASP, DRM, DM, and CRM), defined in the Health Democratization (HD) project, and the three roles ("data subject", "data controller", and "data processor"), defined in GDPR, is understood in the following way.
The Data Controller is defined in GDPR as the party which determines the purposes and means by which personal data are processed. An organization can be a Joint Data Controller when, together with one or more organizations, it jointly determines 'why' and 'how' personal data should be processed. Such a joint controller relation must result in an agreement defining the respective responsibilities. The Data Processor processes personal data only on behalf of the controller.
The three roles in GDRP were defined as a legal status to clarify rights and obligations. The participants in HD are defined in a way taking into account their functional roles in data sharing as well as their interest and rights in the shared data. Thereby, they are defined to facilitate understanding the various data sharing types, models, and scenarios through their independency and dependency relation among each other in function and interest in a specific data sharing transaction.
The data subject defined in HD is equivalent to that in GDPR. A virtual example for illustrating the relation described above is given as follows: • Example: A general practitioner (GP) can provide a value-added service for his/her patients who have their own Personal Health Record (PHR) system which is technically provided and maintained by a PHR service provider who builds their service on infrastructure provided by the public cloud from Amazon. The GP can specify what data are needed for a health monitoring process for purpose of a specific longitudinal study to personalize the care plan for a specific patient. The GP sets up the longitudinal study plan, collects data from a PHR which has the data sourced from different independent wearable sensor data vault used by the patient, outsources part of the collected data to a data analytics service provider for data analysis purpose, accumulate the data, and finally design a new care plan for the patient. To provide legitimate, auditable, and efficient service information and contract management, the GP uses a contract management App to communicate with the patient for negotiating the rights, obligations, prices, and other issues concerning the offering of the service.
We have developed the following mapping between the aforementioned HD project stakeholder types (GP, patient, PHR service provider, sensor service provider, data analytics service provider) and the GDPR roles, listed in Table 1. Table 1. A sample mapping between stakeholders defined above and the GDPR-defined roles.
Party Participant Defined in HD Role Defined in GDPR
Patient data subject data subject GP data consumer joint data controller PHR portal managed by the patient data manager data processor PHR service provider dataset provider joint data controller, data processor Amazon cloud computing resource manager data processor Sensor service provider data generator data processor Data analytics service provider data analytics service provider data processor Contract management App data rights manager data processor
Conceptual Layered Architecture of HD Platform
To meet the principle of data democratization and the promised capabilities, we gazed deep into the platform from a hierarchical perspective. The HD platform enables developing and managing the democratic negotiation procedures during the healthcare data business, for use in, and exchange of, clinical and individual health information between the potential DS/DM and the potential DC.
For each principle and the potentially promissing scenario, the executive process could be considered as a correlation between the data sharing participants and an affair-related data sharing function at different executive levels, defining the business system's behavior. The objective must be to adjust the system's behavior in its structure and function according to the multiple applicable policies from legal, procedural, contextual up to ethical policies and principles including individual policies of the stakeholders involved.
Guided by ISO 23903, which standardized the model and framework of an interoperability and integration reference architecture, but also by ISO 22600 and ISO 21298 (all those standards have also been approved as CEN standards and and re-used in the HL7 security and privacy specifications), as well as the eHealth standardization in the Nordic countries [37] concerning the interoperability [38,39], our HD platform represents the proposed the stakeholders' classification and related functions. The data sharing function ranges from the incipient data provenance to the rights and obligation tracking according to the agreement. We stratify our platform into four conceptual layers, named "Computing Infrastructure Layer", "Data Sharing Operation Layer", "Data Sharing Logic" and "Healthcare Business Layer". Each layer is eligible for interoperability with its adjacent layers. Our Architecture also obtains references from the peer work on diverse eHealth networking and healthcare data sharing solutions [40][41][42]. Figure 1 presents our conceptual architecture in detail. The architectural threedimensional model describes the data sharing hierarchical structure, the data sharing participants, and the data sharing functions for achieving the business objectives. It outlines a thorough view of related implications of democratic data sharing with our platform. Each square implicates a potential relevance at the practical level. The main systematic-level functions required in our platform include: • Data provenance: providing backward traceability of medical devices, the personal device in the homecare environment, etc., and the health data sourced from these devices to be audited in a trusted way regarding rights and operation status; • Risk Assessment: enabling each data subject to have different risk acceptance tolerance and incentive degrees when they are entitled to rights and benefits from data; • Computational negotiation: negotiating agents can operate and negotiate decisions. The requirements will be developed in compliance with the GDPR, healthcare regulations, and other relevant policies. When processing and exchanging personal data between the agents, the design of the infrastructure will address such key requirements of the GDPR as data protection by design and by default, accountability, pseudonymization, right of access, and right to be informed, to rectify, to erasure, and to be forgotten; • Multi-lateral security policing: enabling individuals to be able to share and control access to health data without having to place extensive trust in entities, and institutions must also be able to share data responsibly for research, innovation, and quality assurance across institutional boundaries. The aforementioned design could be merged into a democratic design. Figure 2 shows a function-level relational architecture between the defined roles and the functions.
We illustrated several proven enabling technologies which could be used as a mature solution in the counterpart functions, such as the blockchain-based data provenance A dynamic data sharing transaction could consist of the following steps: • A data provenance process that clarifies among the concerned players the history of the parties with their rights regarding the data to be shared; • If a default (pre-defined) right and obligation setting is not unanimously agreed upon by the involved players, a knowledge-driven negotiation process must be performed where each player takes into account different factors such as ethical and legal contexts, risk assessment of data breach/privacy breach, benefit from data sharing, etc., based on risk models, and AI-based inference. As business systems are frequently highly dynamic regarding their objectives, context, processes, etc., a dynamic policy management and mapping in consistency with legal and ethical requirements and principles is inevitable; • The computational negotiation mechanism takes as inputs the risk assessment result from individual players as well as the multi-party security policy logical representation and reconciliation solution, and generates a new recommendation to all the involved parties for achieving an agreement. This process could iterate in several rounds; • The outcome of the computational negotiation determines the data sharing protocol and the security and privacy-enhancing technical methods for data sharing (e.g., homomorphic encryption, secure multi-party computation, differential privacy methods, federated machine learning, etc.); • The new configuration of rights of the involved players is recorded using blockchain technology, and the execution of data sharing is encoded into a smart contract which could trigger the automated data sharing now or in the future.
The aforementioned design could be merged into a democratic design. Figure 2 shows a function-level relational architecture between the defined roles and the functions.
Democratization-Promoting Primitive Design
Numerous state-of-the-art proven technologies and solutions could be utilized for realizing our HD platform. However, a gap still exists between the current solutions and the data democratization vision [43,44]. Our vital task is to design promising technical and procedural solutions that can promote democratic data sharing. For integrating different specifications and solutions, that way enabling comprehensive interoperability, it is inevitable to harmonize the different representation styles and languages by properly re-engineering them on the basis of ISO 23903:2021. Thereby, the axes of the ISO 23903 Reference Model correspond to those in Figure 1 as follows: the ISO 23903 Domain dimension summarizes both the Data Sharing Participants and the Data Sharing Function; the ISO 23903 Development Process dimension is represented by the process-related components; while the ISO 23903 Granularity dimension representing the composition/decomposition of the systems elements is completely missing.
In this section, we will introduce one of the critical data-democratization-promoting designs. We illustrated several proven enabling technologies which could be used as a mature solution in the counterpart functions, such as the blockchain-based data provenance mechanism, the conventional privacy-enhancing methods, and crypto-based solutions such as the operable contract enforcement. Our primitive design series for enabling data democratization, such as the risk assessment and multi-lateral security policing, focus on the the negotiating part, which ensures that the backward traceable health data could be traded or shared under an equipotent situation.
In addition, the green part shown in Figure 2 represents one of our innovations, which moves forward to a more democratic vision in the principle of fairness promotion. The next section will provide a brief overview on this part.
Democratization-Promoting Primitive Design
Numerous state-of-the-art proven technologies and solutions could be utilized for realizing our HD platform. However, a gap still exists between the current solutions and the data democratization vision [43,44]. Our vital task is to design promising technical and procedural solutions that can promote democratic data sharing. For integrating different specifications and solutions, that way enabling comprehensive interoperability, it is inevitable to harmonize the different representation styles and languages by properly re-engineering them on the basis of ISO 23903:2021. Thereby, the axes of the ISO 23903 Reference Model correspond to those in Figure 1 as follows: the ISO 23903 Domain dimension summarizes both the Data Sharing Participants and the Data Sharing Function; the ISO 23903 Development Process dimension is represented by the process-related components; while the ISO 23903 Granularity dimension representing the composition/decomposition of the systems elements is completely missing.
In this section, we will introduce one of the critical data-democratization-promoting designs.
Token-Economy-Powered Incentive Mechanism for Promoting Reverse Onus
In our HD platform, each DC may have the right to claim how much privacy they need to perform a certain healthcare service, whereas the DM may lack the knowledge to assess the validity of the claim. Our HD platform seeks to provide an incentive mechanism to help improve the privacy level of health data. This also could play a role when considering the principle of "data minimization" defined in GDPR.
Considering a vulnerable DS (and his/her DM) with insufficient knowledge to engage in a beneficiary negotiation with the data user, this mechanism will assist this negotiation for achieving a more reasonable scheme or contract from a privacy perspective. It will also cover the execution of the agreement-based contract, especially when the real-world scenario goes beyond the contract's coverage, by a token-economy-based mechanism and a virtual credit system. The incentive component is expected to restrain the "grey gap" of privacy leakage.
The objectives of the incentive demo include: (1) to provide a "reverse onus" mechanism between data collector and data manager; (2) to promote the faithful execution of the contract; and (3) to inhibit potential incompliant/illegal data user.
In our mechanism shown in Figure 2, the data usage approving helps build the privacy-enhancing consistency between data collector and data manager. After the mutual agreement was achieved and the contract was built, the private credit system will monitor the execution of the protocol to stimulate the data collector to follow the privacy terms, build a token currency system on encouraging privacy-friendly behaviors, and generate the virtual credit of privacy integrity of data collector based on its history log. This credit will be further used to consult the future negotiations.
Data Usage Approval
The negotiation procedure is protected by requiring the data collector to apply form (appFm) on the usage of the health data, including: The appFm will always be approved by our incentive component. Here, we only assess the privacy-leakage risk and register the appFm in the token currency credit system.
Token Economy Rules
The incentive component organized all the data transmission into a "purchase" behavior in the token-economy system. Here, the component will use a token named "healthcoin", inherited from our previous work [45], to build a token balance and transaction system. The rules of this token system are as follows: Rule 1 (coin creation): For each time the data collector registers the appFm, the system will create some healthcoin and transfer them to the data collector's balance. The amount of the health coin is determined by the details of appFm and the credit level of the data collector. By default, for each time the data piece is requested, there will be 1 $ healthcoin generated and transferred to the data collector; Rule 2 (health data purchase): Each time a data transmission in the platform happens, the token system will consider it as a purchase behavior of the data collector by using its healthcoin. By default, the price of the health data will be 1 $ as long as it is following the appFm claimed by the data collector. The component will always satisfy the purchase if the data collector can afford the price. In combination with Rule 1, it is clear that an honest data collector will always work well in our system; Rule 3 (credit score): The incentive component will set a credit score for each data collector, denoted as α∈[0, 1], where 1 means data collector has the highest credit score. The credit score will be adjusted based on the simple idea that the more balance of healthcoins it has, the more dangerous the data collector will be, since the credit score will always be satisfied when the data collector has enough healthcoins to obtain whatever data he wants; Rule 4 (credit-based coin creation): Based on Rule 1, for each appFm, the healthcoin DC will gain is (amount × α) $, where the amount is calculated by Rule 1; Rule 5 (discount): DC can claim a discount by reducing the data requirement (e.g., precision, amount, frequency) to get a discount, the discount strategy is simply following the ratio of data distortion.
Behavior Analysis
An honest and stable DC can be adapted into this system very well, because it always has a low balance, i.e., a high credit score, and earned enough healthcoin for his claimed appFm. When a greedy data collector performs:
1.
Excessive data transmission, the balance cannot be enough for him to afford the rest of the data, and hence go against his plan of appFm; 2.
Hoarding the healthcoin (e.g., by utilizing Rule 5 to save healthcoin on purpose) to perform potential privacy data transmission. However, when the balance becomes high, the gain from the new appFm will decrease, and the balance will be exhausted soon since the payments it gets barely cover his expenses.
When an embarrassed DC cannot afford a regular data transmission claimed by himself, he can choose to use the discount to make up for the loss, regain his credit and normal balance in the future.
Incentive Mechanism
Based on the aforementioned token-based mechanism, it will be a choice put in front of DC, which is either to break the balance, be bankrupt, but collect some more health data and then receive profit from it; or to honestly behave as a normal stakeholder, with no gain from extra health data, but also without loss from being bankrupt and leading to harm to the profit from the contract.
The incentive mechanism's job is to maintain the platform always in a configuration status, which encourage DC to always being the honest part rather than harming DS and DM. Here we use a policy toolkit to configure the global parameters to incentive honest behaviors.
1.
The credit parameter This is the aforementioned parameter to decide how acutely the credit score will decrease with the increase of the healthcoin balance. The incentive mechanism could use this parameter to adjust the balance in the system. For example, when DC finds not enough income and decides to ask for more data, the incentive mechanism can turn up this parameter to achieve a more severe balance reaction;
2.
Gain/loss ratio θ This is a parameter that reflects the gain (from the privacy stealing) and loss (from the regular business). Both gain and loss are inherited from outside information, e.g., the domain expert advice, or the market analysis. Notice that this is not the gain/loss of the virtual healthcoin, but the real-world profit;
3.
Discount ratio δ When the data collector finds it acceptable to discount the data transmission every time to achieve the profit, the incentive mechanism will use discount ratio δ to ensure the discount is no longer cost-effective. For example, obtaining a 50% decrease in data precision with only a 10% discount.
Conclusions
In this paper, we raised the concept of data democratization, which will reinforce health data-sharing concerning privacy enhancement and benefits insurance. Based on current standards, an overall conceptual layered architecture was proposed which aims to enable such a vision. We illustrated the key components that lead to a democratic data-sharing scenario regarding data provenance, risk assessment, multi-lateral security policing, and computational negotiation. Some proven technologies are also illustrated to cope with the corresponding functions. The output of our HD platform is an executable and auditable contract, democratically signed between well-defined stakeholders. The contract could also be a configuration instruction for conventional privacy-enhancing technologies (e.g., differential privacy) and the crypto-based solutions (e.g., ABE access policies).
We further introduced an advanced concept of data democratization, which emphasized fairness promotion in the HD platform. A token-economy-powered incentive mechanism for promoting "Reverse Onus" on data usage was proposed. This mechanism rebalances inequitable situations among the stakeholders.
Future work will keep on implementing and integrating the proposed conceptual designs. Several landing case studies will be put into consideration to improve the practicability of our work. In that context, we have to harmonize our approach by correctly and comprehensively deploying ISO 23903:2021. ISO 23903:2021 provides a model and framework for a system-theoretical, architecture-centric, ontology-based and policy-driven approach to formally and correctly represent any living or non-living system including its evolution/development [4]. Policies considered ranged from legal, procedural and contextual up to ethical policies and principles. Details will be presented in our paper to pHealth 2022. | 7,796.6 | 2022-08-26T00:00:00.000 | [
"Medicine",
"Computer Science",
"Political Science"
] |
Image Reconstruction Requirements for Short-Range Inductive Sensors Used in Single-Coil MIT
MIT (magnetic induction tomography) image reconstruction from data acquired with a single, small inductive sensor has unique requirements not found in other imaging modalities. During the course of scanning over a target, measured inductive loss decreases rapidly with distance from the target boundary. Since inductive loss exists even at infinite separation due to losses internal to the sensor, all other measurements made in the vicinity of the target require subtraction of the infinite-separation loss. This is accomplished naturally by treating infinite-separation loss as an unknown. Furthermore, since contributions to inductive loss decline with greater depth into a conductive target, regularization penalties must be decreased with depth. A pair of squared L2 penalty norms are combined to form a 2-term Sobolev norm, including a zero-order penalty that penalizes solution departures from a default solution and a first-order penalty that promotes smoothness. While constraining the solution to be non-negative and bounded from above, the algorithm is used to perform image reconstruction on scan data obtained over a 4.3 cm thick phantom consisting of bone-like features embedded in agarose gel, with the latter having a nominal conductivity of 1.4 S/m.
Introduction
Magnetic induction tomography (MIT), as applied to the determination of an electrical conductivity distribution within low-conductivity biological targets, is ordinarily accomplished with a system consisting of two or more coils, each commonly using circular loop geometry [1].Choice of coil diameter largely depends upon the intended application, with larger diameters providing improved sensitivity to deeply buried features, while smaller diameter coils offer improved resolution for shallow features.Typically, coil diameter ranges from ∼5 cm up to ∼40 cm, but no larger than nominal target dimensions [1][2][3].Achieving both interior sensitivity and adequate resolution remains a challenge according to Klein et al. [2], which considers both circular and noncircular coil geometry.
An alternative approach to multicoil MIT, as recently demonstrated [4], uses a single, multiloop type coil to provide both excitation and sensing.Image reconstruction in this case is made possible by an analytic formula that connects a 3D electrical conductivity distribution with measured inductive loss via a Fredholm integral equation of the first kind.Primary limitations of the Fredholm integral include a restriction to short circular loop coil geometry, electrical conductivities beneath ∼200 S/m, and targets having near-uniform relative permittivity and permeability.Though uniform relative permittivity is usually not present, even in phantoms, the conductivity limitation is certainly acceptable for biological materials where conductivity is typically less than ∼2.0 S/m [5][6][7][8].Thus far, single-coil MIT scans have been demonstrated only for coil diameters less than or equal to ∼5 cm, where interest has focused on shallow features, such as the lumbar spine and near-surface pressure ulcers.
An important distinction between multicoil and single-coil MIT electronics lies in the manner of excitation and signal detection.While multicoil MIT methods rely upon detection of the phase difference between excitation and response fields [9], the single-coil MIT system discussed herein relies upon inductive loss detection in an RLC tank circuit.Either strategy encounters increasingly serious problems as electrical conductivity becomes small-either the induced field becomes too small to detect a phase difference, or inductive loss becomes small when compared with background noise.An advantage of the inductive loss measurement is that loss depends linearly upon conductivity as long as the Fredholm integral remains valid.
Instrument sensitivity to interior features declines with depth into a target for either single-or multicoil MIT [2,10,11].With single-coil MIT, this is well visualized from the Fredholm integral where the kernel may decline by more than 10-fold at a distance of one coil radius away from the coil plane, though actual decay is dependent upon radial location.Because of declining sensitivity with depth, care must be taken in methods used for regularization when attempting to invert the ill-conditioned Fredholm integral.Choice of penalty types, with built-in depth-dependence, can determine whether or not image reconstruction is able to resolve both near-surface and interior features.
Data collected from single-coil MIT scans have thus far been processed with Tikhonovregularized image reconstruction methods involving a single L2 type penalty term.These have either suppressed rapid spatial variations in electrical conductivity through the solution gradient norm, or departures in electrical conductivity from a precomputed default solution.This work combines both into a single depth-dependent penalty, more generally regarded as a 2-term Sobolev norm.
Regardless of regularization choices implemented during image reconstruction, several other features are indispensible for successful single-coil MIT.Firstly, we must recognize that inductive sensor response diminishes quickly with sensor-target distance.Since all inductive sensor measurements must be made relative to sensor response at infinite separation between target and sensor coil, it is essential to locate this asymptote.Here, we illustrate an image reconstruction method that treats asymptote location as an unknown, alongside conductivity, so that all other measurements properly reflect target-coil interaction relative to infinite separation.
Secondly, since no induction coil is ideal, data preprocessing should remove tank circuit losses associated with coil-target parasitic capacitance.Though usually small, this can make a difference in image quality, as recently shown [12].Current work uses a Texas Instruments LDC1101 chip to measure tank circuit losses for an RLC resonant circuit that includes the sensor coil [13,14], which allows a distinction between inductive and capacitive losses.
Finally, image reconstruction for single-coil MIT should use any available a priori information about the target, such as an expected range for conductivity.For biological samples, electrical conductivity can be reliably expected to fall into a range from 0.0 to 2.0 S/m.Thus, the methods reported here use Lagrange multiplier methods to constrain the solution between known bounds.Any other a priori information should also be used, such as the known spatial boundaries of the target.Finite element methods used here leverage that information.
The next section provides details of the image reconstruction method used for singlecoil MIT that incorporates the features discussed above.Since the goal is to showcase these features for a relatively small coil, they are illustrated in the context of quadratically regularized least squares, which is likely known to the reader.Following the discussion of the algorithm, image reconstruction is performed on experimental data collected from a single-coil scan over a phantom using a 3D robotic gantry.Phantom construction involves the placement of "bone-like" inclusions throughout an agarose matrix of dimensions 30 × 30 × 4.5 cm-the last dimension giving specimen depth, which is about twice the coil radius chosen for scanning in this work.
Dual-Penalty Regularized Least Squares
Image reconstruction of data obtained from a single-coil scan is based upon an analytical formula linking 3D electrical conductivity distribution σ c within a target to predicted inductive loss in the sensor coil [4].This formula fully accounts for skin effects provided that conductivity is much smaller than ∼ √ 2π/(µνa 2 ) (µ is magnetic permeability, ν is frequency, and a is coil radius).
Integration in Equation ( 1) is entirely in the coil frame, with origin located at the coil center and XY plane coplanar with the coil plane-a "c" subscript indicates the coil frame.The argument of toroid function Q 1/2 is given by Radial distance ρ is subscripted when locating coil loop j or k.Equation (1) has shown very close agreement with experiment in several studies [13], and forms the basis for image reconstruction from single-coil scan data.It predicts the expected inductive contribution to loss in a tank circuit, which is dependent on the position and orientation of the sensor coil.To make this clearer, Equation (1) can be written as a convolution integral relating coil position ⃗ c and rotation matrix Ω to measured loss [4]: The integration in Equation ( 3) is now fully from the perspective of the lab frame, with a subscript "l" attached to conductivity as a reminder.ΩT rotates a vector in the lab frame back to the coil frame, while vector ⃗ c is from lab origin to coil center and⃗ r locates a field point in the lab frame.Because of cylindrical symmetry about the coil axis, Ω is the identity matrix if coil and XY planes are parallel.Parallel configurations are common with many scan methods.
The kernel K(⃗ r c ) in Equation ( 3) is given as the sum over the loop radii of the set of very short concentric solenoids connected in series: Kernel K(⃗ r c ) directly assesses sampling and assigns the extent of importance given to each location within a target-it is zero along the coil axis.Example kernel plots have been shown elsewhere [4], which demonstrate the loss of sensitivity to regions farther from the coil.After inductive loss is repeatedly measured in the vicinity of a conductive target, the measurement set is used to solve for conductivity by inversion of Equation (1) or Equation (3).
Image reconstruction begins by discretizing the integral of Equation (3).Here, electrical conductivity is expressed as a linear combination of basis functions over a finite element mesh consisting of N nodes, that spans just the target: After introducing this expansion into Equation (3), we have a loss prediction for each coil location during a scan: The remaining integral is over known functions and is evaluated for each element to build model matrix T. The toroid function is evaluated here using the algorithm of Fettis [15], which is especially effective in the difficult argument range between 1.0 and ∼1.1.
Inductive loss data, together with Equation ( 6), can be written as a matrix equation, with predicted loss Z(⃗ c i ) providing an approximation of the measured loss " ⃗ b": Ordinarily, the number of unknowns contained in the vector ⃗ σ exceeds the number of available equations, determined by the number of measurements.Thus, the system of Equation ( 7) is underdetermined.If measurements are nearly redundant, then the number of meaningful equations becomes smaller.
Even with good sampling, strategies are still needed to reduce the size of the solution space.Given that electrical conductivity is strictly positive, solution non-negativity is imposed.Another requirement that further shrinks the solution space is to enforce an upper conductivity bound.For example, the electrical conductivity in biological tissues is known to be less than ∼2.0 S/m [5][6][7][8].With phantoms, the upper bound may be precisely known.Here, Lagrange multipliers and "active set" methods are used to keep the solution between zero and some upper bound.
Though bound constraints are imposed, there still is no unique solution for Equation (7).Thus, Tikhonov regularization is employed by minimizing the sum of the error norm and two weighted L2 norms, further limiting the solution space.Multipenalty regularization has previously been used in 1D applications, yielding results superior to single-penalty regularization schemes [16].Here, one L2 norm penalizes departures from an overall conductivity average (the default solution-see Equation (12) of [4]) and a second penalizes solutions with large spatial gradients: Anticipating a finite element mesh with conductivity unknown at N nodes, minimization problem (8) is subjected to the constraints: Matrix G is the conductivity gradient matrix, consisting of a first-order differential operator, which has a structure dependent upon the type of finite element mesh and associated basis functions.This first-order penalty norm remains unchanged if a constant is added to the unknown conductivity.Thus, the objective function in (8) may be written as The penalty term involving diagonal matrix D penalizes solutions that deviate from the default solution, ⃗ β.Inasmuch as D applies a scalar to the conductivity displacement, this penalty is a zero-order contribution to the overall penalty.Higher-order penalty terms could be developed and summed together, but are not considered here.
Through either D or G, differing penalties may be imposed on specific components of the conductivity vector or gradient.For example, solution components at deeper target locations may be less penalized.Given the inherently weaker coil EM field at increased depth, a reduced penalty is usually necessary to improve solution sensitivity to interior regions.
The two penalty terms may now be combined into a single L2 normed penalty.First, the quantity to minimize, from expression (10), is rewritten as This allows the two penalty terms to be combined as The new matrix H has an increased number of rows compared to either G or D. Letting E represent the number of elements in a 3D mesh, the number of rows in G equals 3 × E, while D has N rows.To more clearly see the structure of matrix G, we apply the gradient operator to the conductivity in Equation ( 5) to yield for element e: Equation ( 13) can be written in matrix form: Entries within G consist of the vectors ∇φ j , obtained from the basis functions, with the matrix entry location determined by the element and node-numbering scheme used.Any particular element within the matrix G can be written as The size of G is 3E × N, while indices i, k, and m are given by i = 3(e − 1) Element number is given by e, while m = 1, 2, 3 correspond to x, y, z Cartesian coordinates.Index j is determined by the way the mesh elements and nodes are numbered.Thus, j(k, e) is determined according to how the particular local node number k, attached to element e, maps to a global node number j.The number of nodes attached to an individual element e is l(e).Prismatic elements are chosen here.
A simple approach to verifying the structure of matrix (12) in coding is to apply the matrix to a trial conductivity vector for a virtual target with a prescribed linear variation in conductivity across the mesh.The result should equal the assigned slope in each of the X, Y, and Z directions.If this procedure is used when a global constant (or element-wise constant) is added to the trial function, the results of Equation ( 11) are unchanged, so only solutions containing spatial changes in conductivity are penalized via matrix G.
Commonly, G has more rows than columns, so that its rank might be expected to equal N.But because Equation ( 15) is unaffected by the addition of a constant to conductivity, G is column rank deficient-specifically, column rank equals N − 1.To restore the column rank of G to N, an additional, independent row vector could be appended to G.For example, all entries of this last row could be set equal to zero, except for entry N, which could be set exactly to 1.0.The consequence of this choice is that solutions exhibiting high conductivity at this particular node are penalized.Alternatively, working with both penalty types simultaneously via the matrix H achieves the same effect so that the column rank of matrix H has the desired value of N. Thus, G requires no modification.
Both matrices T and H must be further modified to accommodate an "instrument offset", a feature that is particularly unique to single-coil, scanning MIT.This is due to the rapid decay of inductive loss as the coil sensor is moved farther from the target, gradually approaching some asymptotic value.Rather than try to specify or measure this asymptotic value, which is difficult, it is treated as an unknown.Therefore, the vector of unknowns is modified to The first entry is the instrument offset, or sensor reading asymptotically approached at infinite distance.To accommodate the offset in minimization problem (12), a new first column of 1 ′ s must be added to T, producing T0 , while H is modified to include a new first row and first column, consisting of 0 ′ s, except for the (1, 1) entry which is identically set equal to 1.0.The balance of the new H0 is the previously assigned H, which has full column rank N. Since all sensor readings need to be relative to the asymptotic sensor reading, determination of the sensor asymptote is necessary, even if small.
Matrix H is further processed via QR decomposition.This leads to modification of the last term in problem ( 12): Result (18) follows since matrix Q is orthonormal, while vector ⃗ β 0 has zero as its first entry.Therefore, minimization problem ( 12) is now written as min 1 2 To The subscript "0" is present to indicate that the problem now includes the unknown offset, or asymptote, which may also be constrained to be non-negative.Thus, R0 has the same row and column modifications as H0 , while R is unchanged.
Conductivity Constraints
Further progress requires that bounds are imposed on the solution, which is accomplished through building an objective function from (19) that adds in Lagrange multiplier terms: The constrained set of K unknowns {σ a k } is called the active set, denoted by superscript a, and is initially found by minimizing without constraints and noting which unknowns are out of bounds.If a multiplier associated with a lower (upper) bound is found to be negative (positive) in subsequent iterations, the corresponding unknown remains in the active set; otherwise, it is released.During any iteration, if an unknown is found to be out of bounds, it is then added to the active set in a subsequent iteration.This strategy pertains to conductivity unknowns and optionally the unknown "instrument offset".Note that this algorithm simultaneously manages all constraints in any iterative step, rather than the one-at-a-time approach used elsewhere [17].Iteration ceases when the active set no longer changes its membership, accomplished in fewer than ∼9 iterations for results reported here.
Before putting (20) into standard form, the displacement in conductivity relative to its target average, or default solution, is used rather than the conductivity itself: The first component of ⃗ χ 0 is the instrument offset, since the first component of ⃗ β 0 is zero.Minimization problem (20) becomes The displacement in measured loss is given as Just as ⃗ χ 0 is the displacement in conductivity, ⃗ δ l is the associated displacement in the measured inductive loss, relative to the loss that would be expected if all material had a uniform conductivity given from ⃗ β 0 .Minimization problem (22) now has the new bounding constraints: Minimization problem ( 22) is next placed into standard form [18] by making the substitution: The inverse of matrix R0 now preprocesses the model matrix T0 , and, together with the new unknown ⃗ y, predicts the measured loss displacement contained in ⃗ δ l .Quadratic optimization problem (22) may be minimized by first decomposing the product, T0 R−1 0 , using singular value decomposition: The measured loss displacement vector ⃗ δ l is also processed, using the transpose of Ũ to create a modified loss displacement vector ⃗ b ′ , Defining a new unknown vector, ⃗ z, according to we end up with a new, but simpler constrained quadratic optimization problem: From minimization problem (29), the relevant objective function can be written out in full as As noted before, N is the number of mesh nodes, while M is the total number of inductive loss measurements available.Subscripts associated with unknowns in the Lagrange multiplier sum are meant to connect the indices of the two numbering schemes-one that tracks an unknown's index number and a new index to track particular members of the current active set.If rank of the decomposed matrix in (26) equals M, then the second term in Equation ( 30) is absent.
To find the optimal solution, Equation (30) is minimized by setting each ∂L/∂z j to zero, to give New composite matrix Γ is defined as Also, setting ∂L/∂λ k = 0 for each "k" in the active set gives an additional set of relations for members of the active set: Depending on the constraint applied to an active set member, σ a bq is either the lower bound (=0) or the upper bound for member q in the active set.Combining Equations ( 31) and (33) to eliminate z j gives a relatively small set of linear equations for the Lagrange multipliers: Matrix P is defined by Matrix P(q, k) is symmetric under interchange of indices q and k.Recall that l(q) and l(k) are intended to map a particular constrained variable q (or k) to its index "l" among all variables.The numerator of Equation (35) forms the dot product of row l(q) with row l(k), with each term adjusted by the jth denominator.
After solving for the Lagrange multipliers in Equation (34), Equation (31) is computed again, generally yielding a new active set, which calls for solving (34) again.The process is repeated until the active set is stable.All reconstructions reported here converge in fewer than ∼9 iterations.
Depth-Dependent Penalties
There are three levels of control over the penalties applied during image reconstruction.First, the global penalty parameter τ is sequentially set to progressively smaller values, chosen from the set of singular values obtained from the decomposition of Equation ( 26), though any value may be assigned.A reduction in τ reduces the penalty of each type, but is only reduced to the point where the computed error (first term of Equation ( 19)) becomes equal to the measured noise floor of the measurement system.Noise floor is obtained from a placebo scan without any phantom present on the 3D gantry stage, ∼0.9 mΩ.As τ approaches infinity, the solution not only approaches the average conductivity value, but also becomes smooth.As τ approaches zero, error falls below the known noise level in violation of the discrepancy principle so that spurious solutions are produced due to overfitting.Hence, determining the noise floor is important.
Secondly, specifying the ratio α/τ sets the relative magnitude of the two penalties.Setting this ratio to one permits each penalty to have nearly equal importance.If α/τ is set to far less than one, solution smoothness alone is emphasized without regard to the default solution.On the other hand, ratios much greater than one suppress departures from the default (average) solution without regard for smoothness.
Finally, either of the two penalties can be reduced with greater depth into the target, producing benefits similar to efforts used in diffuse optical tomography to restore interior sensitivity [19].The rationale for penalty adjustment, in general, is to compensate for the much higher kernel values found nearer to coil windings.Locations where the kernel is larger are more strongly favored under image reconstruction.Penalty reduction is an effort to reduce, if not eliminate, the "unfair" emphasis given to locations where kernel values are persistently large.There are many choices available for adjusting a penalty to make it depth-dependent, though only one choice is presented here.Control of depthdependence is feasible through either diagonal matrix D, or gradient matrix G or both.A straightforward way to reduce the penalty for locations at greater depth is to use the kernel itself, as given by Equation (4).As an example, components of the diagonal matrix D can be modified according to a scaled kernel: The radial distance ρ is usually chosen as the value that produces the maximum kernel value along the ρ axis for fixed z (see [4] for different coil types), while z s is chosen as some nominal average coil-target-boundary separation distance or possibly the closest approach distance.If sampling is confined to a single plane above the target, then the distance to that plane could be chosen as z s .Here, z s was set to 2.0 mm for all calculations.The impact of other values for z s was not explored.Parameter η is commonly <1, with smaller values lessening the role of depth-dependency for a penalty.Penalties could also be altered for locations falling outside of the scanning region in order to improve chances for target structure resolution in undersampled locations.A choice of lateral penalty dependence depends on whether the intention is to force remote locations toward a target average (increased penalty) or to increase sensitivity (decreased penalty) to structure outside the scanning region.As discussed in later sections, some phantom locations will fall outside of the X or Y scanning space, but no lateral alteration of penalty is used for this work.The expected consequence is that conductivity will tend toward the default solution outside the scanning region.
Phantom Construction and Properties
A single, low-conductivity phantom was constructed and scanned to provide a test for the dual-penalty image reconstruction algorithm.Scans were accomplished on a repurposed 3D printer, as discussed in the next section.The phantom was built up inside a plastic tray having internal dimensions of 29.8 × 29.8 cm square and 4.5 cm deep.An assortment of very-low-conductivity features was prepared, each consisting of an epoxywood-flour composite.Sufficient wood flour was added to a marine epoxy to yield a thick, but pourable, consistency.Wood flour also helped to promote an increase in both relative permittivity and conductivity.The composite was poured into forms of various sizes and shapes and was allowed to cure.After it was fully cured, electrical conductivity at ∼10 MHz was measured to be ∼0.1 S/m while relative permittivity was ∼8.
Four of the low-conductivity features were square blocks of dimension 4.0 × 4.0 cm (height) and 3.0 cm thick.Then, 2.2 cm diameter holes were drilled through two of these, in the horizontal thickness direction, while 1.9 cm holes were drilled through the remaining two blocks.Another set of four epoxy-based "fat" rectangular blocks had dimensions of 2.1 × 2.1 × 10.0 cm, while four additional "thin" rectangular blocks had dimensions of 1.5 × 1.5 × 10.0 cm.
The set of 4 square blocks were positioned parallel to each other, with holes coaxially aligned parallel to the X-axis of the tray and gantry, with 1.2 cm of spacing between adjacent pairs.The four fat rectangular blocks were positioned on one side of the row of centrally located square blocks, parallel to each other and the tray Y-axis.The thin rectangular blocks were also positioned parallel to each other and the tray Y-axis, but on the opposite side of the row of square blocks.Spacing between fat rectangular blocks was ∼1.5 to 2.0 cm while that between the thin rectangular blocks was ∼2.0 to 2.5 cm.The layout is shown in Figure 1, with the tray already partially filled with agarose gel to ensure that the eight rectangular blocks were elevated up to the vertical midpoint of the tray.The square blocks, however, touched the tray bottom and extended to a height of ∼4.0 cm.They were slightly buried by ∼0.2 cm of agarose.After placing the rectangular blocks into position, and allowing them to rest on the lower layer of previously cured agarose, additional doped agarose was poured into the tray to fully submerge all low-conductivity features.All agarose was doped with sufficient sodium chloride [20] to give a conductivity of ∼1.4 S/m at room temperature when cured.Sufficient agarose was poured into the tray so that the middle blocks were covered by ∼2 mm of gel, giving a total gel height of ∼4.3 cm, filling all holes.An edge view of the completely filled phantom, prior to gel solidification in the upper portion of the phantom, is shown in Figure 2. The key features of this phantom that challenge image reconstruction include the dimensions and locations of blocks, the gaps between blocks, and the holes through the central square blocks.Positioning the rectangular blocks out beyond the scanning region provides an additional challenge to image reconstruction.
3D Scanning Gantry
Recent single-coil scans were accomplished manually, all while optically tracking coil position with an IR camera [21].Though able to track the X, Y, and Z positions of the coil center to within ±0.25 mm each, together with coil orientation, the random nature of manual coil repositioning led to considerable sampling redundancy.Thus, a more methodical way of coil positioning is needed, but without sacrificing positioning accuracy.Hence, a 3D scanning gantry was used here that not only provides full control of sampling locations, but further improves upon coil positioning accuracy.
A discarded Creality Ender 3D printer was acquired and repurposed for the single-coil scanning measurements needed for image reconstruction.The print head was removed and modified to allow for mounting of the enclosure and its attached sensing coil.A custom stepper motor controller was built, and associated software was written to control the movement of each stepper motor mounted on the printer while simultaneously measuring inductive loss [13] at ∼8.85 MHz.
Stepper motors controlling movement along X and Y axes were configured to permit 0.0125 mm steps, while vertical steps were set to 0.01 mm.A 3D lattice of points was provided from a text file to direct the coil to desired locations where inductive loss in the sensing coil was measured [12,13] before moving on to the next location.The sensing coil consisted of four circular, parallel PCB traces, connected in series.Loop radii were 25.0 mm and coaxially spaced 0.3 mm apart to form a very short solenoid.Each trace was 0.5 mm wide, prepared from 2 oz.Cu, yielding a coil inductance of ∼2.35 µH for the complete coil.
Two 31 × 31 cm × 4 cm thick EVA (ethylene vinyl-acetate co-polymer) foam slabs were stacked on the gantry stage, with the phantom then placed on top of the upper EVA slab.The purpose of the EVA is to provide some isolation from the metallic components comprising the gantry stage.The entire setup, with phantom in position, is shown in Figure 3.
A wide variety of scanning lattices is feasible with the setup shown in Figure 3, though only one is considered here.The primary constraints are the gantry support rods of the structure, which limits scanning along the X and Y borders.Referring to Figure 3, coordinate (0, 0, 0) mm is located at the rear left lower corner of the tray, while (298, 298, 0) mm is located at the front right lower corner of the tray.The particular lattice of sampling points featured in this work consists of 7 interleaved horizons, starting at Z = 45 mm and ending at Z = 57 mm.All horizons use a grid spacing of 13.5 mm.Odd numbered horizons run from (60.0, 60.0) mm to (249.0, 249.0) mm, while even numbered horizons run from (53.25, 53.25) mm to (242.25, 242.25) mm.All together, there are 1575 sampling locations across the lattice.An admittance measurement is acquired at (152.4,152.4,165) mm to establish a reference value, from which all loss values are computed [12,13].Loss values are also corrected for a small amount of tank circuit loss due to parasitic capacitance [12].Total scan time for this lattice is ∼10 min, though motors are easily programmed for faster scans.
To facilitate image reconstruction, the space occupied by the phantom was discretized into a finite element mesh consisting of six equally thick layers of prismatic elements spanning a total height of 4.3 cm.The 2D triangular mesh extruded to create the six layers is shown in Figure 4.The region delineated by the interior, bold rectangle contains all 12 low conductivity features previously described.The dashed red line indicates the maximal lateral extent of sampling, so that the coil center is never placed outside this dashed line.
Image reconstruction requires a stopping condition, where the global regularization parameter τ must not be reduced any further.Here, the Morozov discrepancy principle is used [22], which states that the computed error must not be reduced below a known measurement error.A measurement error is determined for the gantry system used here by performing "placebo" scans that include just the empty tray.For the lattice just described, a loss error of 0.9 mΩ was obtained.In all reconstructed images reported in the next section, regularization parameter τ was reduced only to the point that error-the normed difference between predicted and measured loss values-was ∼1.0 mΩ.
Image Reconstruction Results
To illustrate the need for depth-dependent penalties, a sequence of "sagittal slices" (Y = 15 cm plane) is presented that does not use depth-dependence.The first sets the ratio α/τ = 10, so that the zeroth-order penalty term dominates over the first-order penalty.Figure 5 illustrates the Y = 15 cm sagittal slice obtained under this condition.Though the four low-conductivity blocks are showing at the correct position and width (∼3 cm), they do not show correctly over their complete height of ∼4 cm.The origin of this problem can be traced to the rapid decay of the kernel with depth, indicating a decreased sensitivity to material at greater depth.Of course, the coil cannot enter the phantom interior to compensate for such lost sensitivity.Fewer problems may be expected with the lateral resolution of features since the coil sensor can be moved in lateral directions with only modest restrictions.However, because of the limitations of lateral sensor movement on the 3D gantry used here, similar sensitivity problems are also expected that will hinder our ability to fully resolve the lateral extent of phantom features.
During the course of image reconstruction, the average electrical conductivity is computed as 0.94 S/m (via Equation ( 12) of [4]).Viewing the bottom of the image in Figure 5, conductivity is very close to this value.Even though the penalty is applied equally along the Z-axis, reconstruction suggests that the penalty is too high near Z = 0, preventing any significant departure from the average near the lower boundary.The features that should be visible in the lowest portions of the image have collapsed to the average value.Apparently, the uniformly applied penalty is too large for these features to appear beneath ∼2.0 cm, though instrumentation noise likely also contributes to a lack of sensitivity to structure at depths exceeding ∼2 cm [4].Examination of Figure 6 shows a similar behavior when the first-order penalty is strongly favored, by 10×.In this case, the lower portions of the square blocks emerge, but with excessive smoothing.Though conductivity is found to be well below the average value, the blocks are blurred together, giving a value of ∼0.35 S/m.This suggests that, again, the penalty imposed on solutions is excessive at greater depths into the phantom, limiting sensitivity to specific features.When each penalty type is weighted equally, but still without depth-dependence, the sagittal slice shown in Figure 7 is obtained.In the absence of depth-dependent penalties, this is perhaps the best that can be achieved with this particular phantom, scanning lattice, and coil.In Figure 7, the blocks can be resolved up to a depth of nearly ∼2.5 cm.Note that this is the radius of the coil used for each of these three images.If coil radius is, indeed, a limiting factor for depth resolution when penalties are depth-independent, then a larger coil may give superior results.Figures 5-7 suggest that depth-dependent penalties may be helpful.Reducing the 0-order penalty at greater depths would allow conductivity to more easily deviate from its average value, while reducing the first-order penalty term at greater depths would prevent excessive smoothing of features as they appear.To reduce the complexity of managing two depth-dependent parameters, we first consider the simpler task that adjusts parameter η 0 (of Equation ( 36)) for the 0-order penalty, but in combination with the ratio α/τ.Increasing α/τ as the 0-order penalty is decreased at depth should limit smoothing as features appear.
Setting η 0 of Equation (36) to 0.2, the reduction of the 0-order penalty term, with depth, is shown in Figure 8.The decrease shown in Figure 8 allows electrical conductivity to more easily deviate from the average during image reconstruction.To prevent excessive smoothing at greater depths when features emerge, as in Figure 6, α/τ is increased.Focusing again on the Y = 15 cm sagittal slice, and comparing with Figures 5-7, Figure 9 shows the effects of using a depth-dependent zero-order penalty when α/τ = 10, the same as used in Figure 5. Setting this ratio to the same value as before helps to more clearly reveal the impact from assigning depth-dependence to the zero-order penalty.Making a direct comparison of Figure 9 with Figure 5 clearly shows that depth-dependence greatly improves the ability of the algorithm to reveal the square inclusions over their full depth.Though the square blocks are now plainly in view, we note that their separation is not as distinct near the bottom of the phantom as at the top.This can be more clearly demonstrated by building line plots across the image of Figure 9 exactly at Y = 15.0 cm and Z = 1.0 cm or 3.5 cm-shown in Figure 10.Note that the three gaps between blocks are very clearly discernible when Z = 3.5 cm, but are only modestly discernible at Z = 1.0 cm.Ideally, three identical square peaks should appear, having width ∼1.2 cm and heights reaching up to ∼1.4 S/m.Nevertheless, image reconstruction with depth-dependent penalties is clearly advantageous, improving sensitivity to features at greater depth.To gain some sense of the impact of small changes in the 0-order penalty's depthdependence, another reconstruction was computed with η 0 reduced to 0.15, keeping α/τ = 10.The same sagittal slice showed no discernible change compared with Figure 9.In an effort to improve the image further, depth-dependence is also added to the firstorder penalty.Although α/τ = 10, the first-order penalty still plays a significant role for locations Z <∼3.0 cm where the zeroth-order penalty has been reduced as much as 10× (see Figure 8).In fact, the two penalties are actually comparable in the lower half of the phantom.To explore the effect of adding depth-dependence to the first-order penalty, η 1 is set = 0.10 while η 0 remains = 0.15.The result is shown in Figure 11, and may be compared with Figure 9.The motivation for reducing the first-order penalty with depth is the appearance of excessive smoothing on the lower portions of the square feature at X = 21.75 cm in Figure 9.The sagittal slice for this case, as shown in Figure 11, indicates some further improvement in resolution, which is made clearer by building another line plot in the manner of Figure 10, which is shown in Figure 12.Note that the bottom portions of the two far right blocks are now somewhat more distinct, and are more filled out compared with the Figure 9 image.Figures 9 and 11 illustrate the benefits of reducing the size of the penalties with depth into the phantom for a sagittal slice.The spacing between reconstructed blocks is ∼1.0 cm while reconstructed block thickness is ∼3.0 cm-these results are close to actual physical dimensions.Importantly, the blocks are distinct over the full phantom depth of 4.3 cm, in spite of a coil radius that is only 2.5 cm.Similar results are found when smaller α/τ ratios are explored while adjusting depth-dependence on both zeroth-and first-order penalties, though no noticeable improvement in image fidelity was observed.
Figure 13 gives the X = 13.0 cm transverse slice for the same reconstruction as in Figure 11.The central block is resolved at the correct location, with the correct dimensions over its entire depth, and appears essentially square.However, the (1.9 cm diameter) hole is missing.Though the left-side rectangular block clearly appears to be larger than the right side block, as it should, neither block emerges over its full length.Truncation is likely due to insufficient sampling toward the edges of the phantom.Furthermore, vertical dimensions of both blocks are somewhat exaggerated due to smoothing.A mid-plane horizontal slice taken at Z = 2.15 cm is shown in Figure 14.This image also shows the failure of reconstruction to reveal the lateral rectangular blocks over their full length.Though the four square blocks fully appear and line up along the X-axis as they should, the conductivity is assigned a near-zero value, smaller than the expected 0.10 S/m.The depressed value may be a result of the sudden jump in relative permittivity, a feature not captured by the model Equation (1)-agarose gel relative permittivity is ∼72.A gradual lateral reduction in penalty terms may help compensate for inadequate lateral sampling, possibly producing benefits comparable to the use of depth-dependence.Likely, the inadequacy of lateral sampling produces three unwanted results-central blocks show a depressed conductivity; side blocks are truncated; exaggerated side block vertical dimensions.Also, inadequate lateral sampling may have contributed to an absence of holes passing through the central square blocks.The depressed block conductivity suggests that the conductive holes might likewise be suppressed under reconstruction.
Discussion
As shown, combined zeroth-and first-order penalties work well when combined with depth-dependence, facilitating the resolution of central blocks over their full depth.This is in spite of a severely underdetermined image reconstruction problem, or using a coil with radius much smaller than the phantom's depth.Future work with larger coils that are naturally able to promote greater target penetration may obviate the need for depthdependent penalties.Nevertheless, the present work illustrates how the shortcomings of smaller coils may be addressed when larger coils are not an option.As Equation (1) indicates, larger coils, as well as higher frequency, improve sensitivity both nearer to and farther from coil windings.However, the discussion immediately preceding Equation (1) illustrates how the valid conductivity range is reduced with larger coils-doubling coil radius reduces the viable upper conductivity fourfold.An interesting approach, not yet tested, would be to merge data from larger and smaller coils, possibly improving image reconstruction throughout a target.
A similar adjustment of penalties along X and Y axes, to compensate for inadequate lateral sampling, was suggested as a means to fully resolve the rectangular features on either side of the row of centrally located square blocks.However, penalty reduction outside the XY region of these features, managed together with the manipulation of penalties over phantom depth, is likely to be a complex endeavor.A better approach would be to improve lateral scanning access, either by implementing a considerably larger 3D gantry or through the use of a flexible robotic arm, able to acquire data over larger distances and orientations.Depending upon phantom dimensions, results obtained thus far for single-coil MIT [4] suggest that lateral scans need to extend at least two coil diameters beyond a phantom's edge, which is beyond the capability of the current 3D gantry.
Figure 1 .
Figure 1.Layout of epoxy-based features within the supporting tray prior to complete filling with an agarose gel-the tray is partially filled with gel, as shown, to elevate the rectangular blocks to the mid-height of the tray.The central square blocks extend from the tray bottom to ∼4.0 cm of height above the tray floor.Note the holes drilled through the central square blocks.
Figure 2 .
Figure 2. Edge view of phantom, immediately after pouring sufficient molten agarose to fill the upper half of the phantom, fill holes, and fully cover all low-conductivity features; the rectangular features are shown resting on the lower (white) portion of previously poured and cured agarose.
Figure 3 .XFigure 4 .
Figure 3. Discarded 3D printer repurposed for single-coil scanning experiments.The Y-axis runs from left to right tray edge, parallel to the embedded rectangular features, while the X-axis runs from the rear of the tray to the foreground tray border.
Figure 5 .
Figure 5. Sagittal slice at Y = 15 cm.The zero-order penalty is favored 10× over the first-order penalty.Conductivity at Z∼0 cm is near the average value of 0.94 S/m.
Figure 6 .
Figure 6.Sagittal slice, with the first-order normed penalty favored 10× over the zero-order penalty.Note the oversmoothed conductivity beneath ∼2 cm.
Figure 7 .
Figure 7. Sagittal slice obtained with each penalty given equal weight.Features are distinct as deep as Z = 2 cm.
Figure 8 .
Figure 8.To improve sensitivity to features within the deeper layers of the phantom, the zero-order penalty is reduced with depth.Equation (36) is plotted for two values of the parameter η, to give a sense for its effect.
Figure 9 .
Figure 9. Sagittal slice at Y = 15 cm when the zero-order penalty is reduced according to the curve η = 0.2 shown in Figure 8 and α/τ = 10.
Figure 10 .
Figure 10.Line plot of conductivity at constant Y = 15 and Z = 1 cm or 3.5 cm.Note prominent peaks at X = 10.5, 15.0, and 19.5 cm at Z = 3.5 cm corresponding to gaps between features; near the bottom, at Z = 1.0 cm, the same gaps are only modestly noticeable.
Figure 12 .
Figure 12.Line plot of conductivity at constant Y = 15 and Z = 1 cm or 3.5 cm, though now both penalty terms are reduced with depth into the phantom.Peaks located at X = 10.5, 15.0, and 19.5 cm for Z = 1.0 cm are somewhat more prominent than before.
Figure 13 .
Figure 13.Transverse slice of the phantom, cutting through one of the large central square blocks, as well as the long rectangular blocks on either side.The central block is 4.0 × 4.0 cm, with a hole drilled into the center; the left block is 2.1 × 2.1 × 10 cm and the right block is 1.5 × 1.5 × 10 cm.
Figure 14 .
Figure 14.Tomographic slice taken of the XY plane at Z = 2.15 cm, which is located midway between top and bottom of the phantom.The four square blocks are clearly shown, but the lateral rectangular blocks are incomplete.
Funding:
All work described herein was funded internally through Tayos Corp and received no external funding.Institutional Review Board Statement: Not applicable.Informed Consent Statement: Not applicable. | 10,327.8 | 2024-04-24T00:00:00.000 | [
"Engineering",
"Physics"
] |
Rost Projectors and Steenrod Operations
Let X be an anisotropic projective quadric possessing a Rost projector ‰. We compute the 0-dimensional component of the total Steenrod operation on the modulo 2 Chow group of the Rost motive given by the projector‰. The computation allows to determine the whole Chow group of the Rost motive and the Chow group of every excellent quadric (the results announced by Rost). On the other hand, the computation is being applied to give a simpler proof of Vishik's theorem stating that the integer dimX + 1 is a power of 2.
M. Rost noticed that certain smooth projective anisotropic quadric hypersurfaces are decomposable in the category of Chow motives into direct sum of some motives.The smallest (in some sense) direct summands are called the Rost motives.For example, the motive of a Pfister quadric is a direct sum of Rost motives and their Tate twists.The Rost projectors split off the Rost motives as direct summands of quadrics.In the present paper we study Rost projectors by means of modulo 2 Steenrod operations on the Chow groups of quadrics.The Steenrod operations in motivic cohomology were defined by V. Voevodsky.We use results of P. Brosnan who found in [1] an elementary construction of the Steenrod operations on the Chow groups.
As a consequence of our computations we give description of the Chow groups of a Rost motive (Corollary 8.2).This result (which has been announced by M. Rost in [11]) allows to compute all the Chow groups of every excellent quadric (see Remark 8.4).
We also give an elementary proof of a theorem of A. Vishik [3, th. 6.1] stating that if an anisotropic quadric X possesses a Rost projector, then dim X + 1 is a power of 2 (Theorem 5.1). is odd if and only if we don't carry over units while adding n and i in base 2.
Proof.For any integer a ≥ 0, let s 2 (a) be the sum of the digits in the base 2 expansion of a.By [9, Lemma 5.4(a)], n+i i is odd if and only if s 2 (n + i) = s 2 (n) + s 2 (i).
The following statement is obvious: Lemma 1.2.For any non-negative integer m, we don't carry over units while adding m and m + 1 in base 2 if and only if m + 1 is a power of 2.
The following statement will be applied in the proof of Theorem 4.8: Corollary 1.3.For any non-negative integer m, the binomial coefficient −m−2 m is odd if and only if m + 1 is a power of 2.
Proof.By Lemma 1.1, the binomial coefficient −m − 2 m = (−1) m 2m + 1 m is odd if and only if there is we don't carry over units while adding m and m + 1 in base 2. It remains to apply Lemma 1.2.
Integral and modulo 2 Rost projectors
Let F be a field, X a quasi-projective smooth equidimensional variety over F .We write CH(X) for the modulo 2 Chow group of X.The usual (integral) Chow group is denoted by CH(X).We are working mostly with CH(X), but several times we have to use CH(X) (for example, already the definition of a modulo 2 Rost correspondence can not be given on the level of the modulo 2 Chow group).
The both groups are graded.We use the upper indices for the gradation by codimension of cycles and we use the lower indices for the gradation by the dimension of cycles.
For projective X 1 and X 2 , an element ρ ∈ CH(X 1 × X 2 ) (we do not consider the gradation on CH for the moment) can be viewed as a correspondence from X 1 to X 2 ([2, §16.1]).In particular, it gives a homomorphism [2, def. 16 where pr 1 and pr 2 are the two projections of X 1 × X 2 onto X 1 and X 2 , and can be composed with another correspondence ρ ∈ CH(X 2 × X 3 ) [2, def.16. 1.1].The same can be said and defined with CH replaced by CH.
Starting from here, we are constantly assuming that char F = 2. Let ϕ be a non-degenerate quadratic form over F , and let X be the projective quadric ϕ = 0. We set n = dim X = dim ϕ − 2 and we assume that n ≥ 1.
An element ∈ CH n (X × X) is called an (integral) Rost correspondence, if over an algebraic closure F of F one has: with X = X F and a rational point x ∈ X.A Rost projector is a Rost correspondence which is an idempotent with respect to the composition of correspondences.
Remark 2.1.Assume that the quadric X is isotropic, i.e., contains a rational Proof.Let ρ be a modulo 2 Rost projector and let be an integral Rost correspondence representing ρ.The correspondence F is idempotent, therefore, by the Rost nilpotence theorem (see [7, th. 3.1]), r is idempotent for some r; so, r is an integral Rost projector.Since ρ is idempotent as well, r still represents ρ.Lemma 2.4.Let be an integral Rost correspondence and let ρ be a modulo 2 Rost correspondence.Then * on CH 0 (X) and on CH 0 (X) is the identity; also ρ * on CH 0 (X) and on CH 0 (X) is the identity.Moreover, for every i with 0 < i < n, the group * CH i (X) vanishes over F .
Proof.It suffices to prove the statements on .Since CH 0 (X) and CH 0 (X) inject into CH 0 (X F ) and CH 0 (X F ) (see [5, prop. 2.6] or [13] for the statement on CH 0 (X)), it suffices to consider the case where the quadric X has a rational closed point x and = ) = 0 and since [X] generates CH 0 (X) while [x] generates CH 0 (X), we are done with the statements on CH 0 (X) and on CH 0 (X).Since ) for any closed subvariety Z ⊂ X of codimension = 0, n, we are done with the rest.
Steenrod operations
In this section we briefly recall the basic properties of the Steenrod operations on the modulo 2 Chow groups constructed in [1].
Let X be a smooth quasi-projective equidimensional variety over a field F .For every i ≥ 0 there are certain homomorphisms S i : CH * (X) → CH * +i (X) called Steenrod operations; their sum (which is in fact finite because is the total Steenrod operation (we omit the * in the notation of the Chow group to notify that S is not homogeneous).They have the following basic properties (see [1] for the proofs): for any smooth quasi-projective F -scheme X, the total operation S : CH(X) → CH(X) is a ring homomorphism such that for every morphism f : Y → X of smooth quasi-projective F -schemes and for every field extension E/F , the squares are commutative.Moreover, the restriction S i | CH n (X) is 0 for n < i and the map α → α 2 for n = i; finally S 0 is the identity.
Also, the total Steenrod operation satisfies the following Riemann-Roch type formula: (in other words, S modified by c(−T ) this way, commutes with the pushforwards) for any proper f : Y → X and any α ∈ CH(Y ), where f * : CH(Y ) → CH(X) is the push-forward, c is the total Chern class, T X is the tangent bundle of X, and c(−T X ) = c −1 (T X ) (the expression −T X makes sense if one considers T X as the element of K 0 (X)).This formula is proved in [1].Also it follows from the previous formulated properties of S by the general Riemann-Roch theorem of Panin [10].
Lemma 3.1.Assume that X is projective.For any α ∈ CH(X) and for any ρ ∈ CH(X × X), one has where T X is the class in K 0 (X) of the tangent bundle of X.
Proof.Let pr 1 , pr 2 : X × X → X be the first and the second projections.By the Riemann-Roch formula applied to the morphism pr 2 , one has By the projection formula for pr 2 , this gives and since S (as well as c) commutes with the products and the pull-backs, we get
Main theorem
In this section, let ϕ be an anisotropic quadratic form over F , and let X be the projective quadric ϕ = 0 with n = dim X = dim ϕ − 2 ≥ 1.We are assuming that an integral Rost projector (see §2 for the definition) ∈ CH n (X × X) exists for our X and we write ρ ∈ CH n (X × X) for the modulo 2 Rost projector.We write h for the class in CH 1 (X) (as well as in CH 1 (X)) of a hyperplane section of X. Proposition 4.1.One has for every i ≥ 0: Proof.Since h ∈ CH 1 (X), we have S(h) = S 0 (h) + S 1 (h).Since S 0 = id while S 1 on CH 1 (X) is the squaring, S(h Besides, for the tangent bundle of the quadric we have: c(T X ) = (1 + h) n+2 .Therefore, the formula of Lemma 3.1 gives the formula of Proposition 4.1.
Lemma 4.2.Let L/F be a field extension such that the quadric X L is isotropic.Then S i (ρ L ) = 0 for every i > 0.
Proof.By the uniqueness of a modulo 2 Rost projector on an isotropic quadric (Remark 2.1) Y has a rational point over F (X). Therefore there exists a rational morphism X → Y .Let α ∈ CH n (X × X) be the correspondence given by the closure of the graph of this morphism.Then ∆ * ( •α) ∈ CH 0 (X), where ∆ : X → X ×X is the diagonal morphism, is an element of CH 0 (X) of degree 1.This is a contradiction with the fact that the quadric X is anisotropic.Lemma 4.4.Let X be an anisotropic F -quadric such that the Witt index of the quadratic form ϕ F (X) is 1.Then for every α ∈ CH i (X F (X) ), i > 0, the degree of the 0-cycle class h i • α is even.
Proof.It is sufficient to consider the case i = 1.We have ϕ F (X) ψ⊥H for an anisotropic quadratic form ψ over F (X) (where H is a hyperbolic plane).Let X be the quadric ψ = 0 over F (X).There is an isomorphism [5, §2.2] for every α ∈ CH 1 (X) and this integer is even.
Proof.We replace µ by its representative in CH i (X × X) and we mean by h the integral class of a hyperplane section of X in the proof.Since the degree homomorphism deg : CH n (X) → Z is injective ([5, prop.2.6] or [13]) with the image 2Z, it suffices to show that deg(µ * (h i )) is divisible by 4. Let us compute this degree.By definition of µ * , we have µ * (h i ) = pr 2 * µ • pr * 1 (h i ) .Note that the product µ • pr * 1 (h i ) is in CH 0 (X × X) and the square CH 0 (X × X) commutes (the two compositions being the degree homomorphism of the group CH 0 (X × X)).Therefore the degree of µ * (h i ) coincides with the degree of pr 1 * µ•pr * 1 (h i ) .By the projection formula for pr 1 * the latter element coincides with the product h i • pr 1 * (µ).
We are going to check that the degree of this element is divisible by 4. Since the degree does not change under extensions of the base field, it suffices to verify the divisibility relation over F (X).The class pr 1 * (µ) F (X) is divisible by 2 by assumption, therefore the statement follows from Lemmas 4.3 and 4.4.
Putting together Corollary 4.6 and Proposition 4.1, we get Corollary 4.7.For every i > 0, one has: Finally, by Corollary 1.3, computing the binomial coefficient modulo 2, together with Lemma 2.4, computing ρ * (h n ), we get Theorem 4.8.Suppose that the anisotropic quadric X of dimension n possesses a Rost projector.Let ρ be a modulo 2 Rost projector on X.Then for every i with 0 < i < n, one has Since CH 0 (X) is an infinite cyclic group generated by h n , h n in CH 0 (X) is not 0. Therefore we get Corollary 4.9.For every i such that 0 < i < n and n − i + 1 is a power of 2, the element S n−i (ρ * (h i )) (and consequently ρ * (h i )) is non-zero.
Dimensions of quadrics with Rost projectors
The following Theorem is proved in [3].The proof given there makes use of the Steenrod operations in the motivic cohomology constructed by Voevodsky (since Voevodsky has announced that the operations were constructed in any characteristic = 2 only quite recently, the assumption char F = 0 was made in [3]).Here we give an elementary proof.
Theorem 5.1.If X is an anisotropic smooth projective quadric possessing a Rost projector, then dim X + 1 is a power of 2.
Proof.Let us assume that this is not the case.Let r be the largest integer such that n > 2 r −1 where n = dim X.Then Theorem 4.8 applies to i = n−(2 r −1), stating that S n−i ρ * (h i ) = 0. Note that n − i ≥ i.Since the Steenrod operation S i is trivial on CH j (X) with i > j, it follows that n − i = i and therefore S n−i ρ * (h i ) = ρ * (h i ) 2 .Since the element * (h i ) (where is the integral Rost projector) vanishes over F , its square vanishes over F as well.The group CH 0 (X) injects however into CH 0 (X F ), whereby * (h i ) 2 = 0 and thereafter S n−i ρ * (h i ) = 0, giving a contradiction with Corollary 4.9.
Remark 5.2.It turns out that Theorem 5.1 is extremely useful in the theory of quadratic forms.For example, it is the main ingredient of Vishik's proof of the theorem that there is no anisotropic quadratic forms satisfying 2 r < dim ϕ < 2 r + 2 r−1 and [ϕ] ∈ I r (F ) (see [14]).
Rost motives
Let Λ be an associative commutative ring with 1.We set ΛCH = Λ ⊗ Z CH (we still do not need any other Λ but Z and Z/2).
We briefly recall the construction of the category of Grothendieck ΛCHmotives as given in [4].A motive is a triple (X, p, n), where X is a smooth projective equidimensional F -variety, p ∈ ΛCH dim X (X × X) an idempotent correspondence, and n an integer.Sometimes the reduced notations are used: (X, n) for (X, p, n) with p the diagonal class; (X, p) for (X, p, n) with n = 0; and (X) for (X, 0), the motive of the variety X.
For a motive M = (X, p, n) and an integer m, the m-th twist M (m) of M is defined as (X, p, n + m).
The set of morphisms is defined as In particular, every homogeneous correspondence α ∈ CH(X × X ) determines a morphism of every twist of (X, p) to a certain twist of (X , p ).The Chow group ΛCH * (X, p, n) of a motive (X, p, n) is defined as It gives an additive functor of the category of ΛCH-motives to the category of graded abelian groups.For any Λ, there is an evident additive functor of the category of CH-motives to the category of ΛCH-motives (identical on the motives of varieties).In particular, every isomorphism of CH-motives automatically produces an isomorphism of the corresponding ΛCH-motives.This is why bellow we mostly formulate the results only on the integral motives.
We are coming back to the quadratic forms.Definition 6.1.Let be an integral Rost projector on a projective quadric X.We refer to the motive (X, ) as to an (integral) Rost motive.(While the CH-motive given by a modulo 2 Rost projector can be called a modulo 2 Rost motive.)A Rost motive is anisotropic, if the quadric X is so.
Let now π be a Pfister form and let ϕ be a neighbor of π which is minimal, that is, has dimension dim π/2+1.According to [7, 5.2], the projective quadric X given by ϕ possesses an integral Rost projector .Proposition 6.2.Let ϕ be as above.Let be the Rost projector on the quadric X given by a minimal neighbor ϕ of another Pfister form π .The Rost motives (X, ) and (X , ) are isomorphic if and only if the Pfister forms π and π are isomorphic.
Proof.First we assume that (X, ) (X , ).Looking at the degrees of 0cycles on X and on X , we see that ϕ is isotropic if and only if ϕ is isotropic, whereby π is isotropic if and only if π is isotropic.Therefore, the forms π F (π ) and π F (π) are isotropic.Since π and π are Pfister forms, it follows that π π .Conversely, assume that π π .By [7, cor. 3.3], in order to show that (X, ) (X , ), it suffices to construct morphisms of motives (X, ) (X , ) which become mutually inverse isomorphisms over over an algebraic closure F of F (in this case, the initial F -morphisms are isomorphisms by [7, cor. 3.3], although probably not mutually inverse ones).
Since π π , the quadratic forms ϕ F (ϕ) and ϕ F (ϕ ) are isotropic.Therefore there exist rational morphisms X → X and X → X.The closures of their graphs give two correspondences α ∈ CH(X × X ) and β ∈ CH(X × X).
Over F we have: where x ∈ X F and x ∈ X F are closed rational points, while a is an integer (which coincides, in fact, with the degree of the rational morphism).Similarly, We are going to check that the integers a and b are odd.For this we consider the composition Over F this composition gives [X ×x]+ab [x×X].Consequently, by [6, th. 6.4] and Lemma 4.3, the integer ab is odd.
Let us take now with some degree 2 closed points y ∈ X and y ∈ X .Then the two Fmorphisms (X, ) (X , ) given by these α and β become mutually inverse isomorphisms over F .Definition 6.3.The motive (X, ) for X and as in Proposition 6.2 (more precisely, the isomorphism class of motives) is called the Rost motive of the Pfister form π and denoted R(π).Remark 6.4.It is conjectured in [7, conj. 1.6] that every anisotropic Rost motive is the Rost motive of some Pfister form.
Motivic decompositions of excellent quadrics
Theorem 7.1.Let ϕ be a neighbor of a Pfister form π and let ϕ be the complementary form (that is, ϕ is such that the form ϕ⊥ϕ is similar to π).Then where m = (dim ϕ − dim ϕ )/2, X is the quadric defined by ϕ, and X is the quadric defined by ϕ .
We recall that a quadratic form ϕ over F is called excellent, if for every field extension E/F the anisotropic part of the form ϕ E is defined over F .An anisotropic quadratic form is excellent if and only if it is a Pfister neighbor whose complementary form is excellent as well [8, §7].
Let π 0 ⊃ π 1 ⊃ • • • ⊃ π r be a strictly decreasing sequence of embedded Pfister forms.Let ϕ be the quadratic form such that the class [ϕ] of ϕ in the Witt ring of F is the alternating sum while the dimension of ϕ is the alternating sum of the dimensions of the Pfister forms.Clearly, ϕ is excellent.Moreover, every anisotropic excellent quadratic form is similar to a form obtained this way and the Pfister forms are uniquely determined by the initial excellent quadratic form.
Let X be an excellent quadric, that is, the quadratic form ϕ giving X is excellent.As Theorem 7.1 shows, the motive of X is a direct sum of twisted Rost motives.More precisely, Corollary 7.2.Let X be the excellent quadric determined by Pfister forms Example 7.5 (Norm forms, [12, th.17]).Let ϕ be a norm quadratic form, that is, ϕ is a minimal neighbor of a Pfister form π containing a 1-codimensional subform which is similar to a Pfister form π .Then
Chow groups of Rost motives
The following theorem computes the Chow groups of the modulo 2 Rost motive of a Pfister form.Theorem 8.1.Let ρ be the modulo 2 Rost projector on the projective ndimensional quadric X given by an anisotropic minimal Pfister neighbor.Let i be an integer with 0 ≤ i ≤ n.If i + 1 is a power of 2, then the group ρ * CH i (X) is cyclic of order 2 generated by ρ * (h n−i ).Otherwise this group is 0.
Proof.According to Proposition 6.2, we may assume that X is a norm quadric, that is, X contains a 1-codimensional subquadric Y being a Pfister quadric.Let r be the integer such that n = dim X = 2 r − 1.
We proceed by induction on r.Let Y ⊂ Y be a subquadric of dimension 2 r−1 − 2 which is a Pfister quadric.Let X be a norm quadric of dimension 2 r−1 − 1 such that Y ⊂ X ⊂ Y .Let ρ be a modulo 2 Rost projector on X .By Example 7.5, passing from CH-motives to the category of CH-motives, we see that the motive of X is the direct sum of the motive (X, ρ) and the motives (X , ρ , i) (we do not care about the gradations on the Chow groups).
Also the motive of Y decomposes in the direct sum of the motives (X , ρ , i) In The group ρ * CH(X ) is known by induction.In particular, the order of this group is 2 r .It follows that the order of ρ * CH(X) is at most 2 r+1 .Corollary 4.9 gives already r+1 non-zero elements of ρ * CH * (X) living in different dimensions (more precisely, ρ * (h n−2 s +1 ) = 0 for s = 1, . . ., r − 1 by Corollary 4.9 and for s = 0, r by Lemma 2.4) and therefore generating a subgroup of order 2 r+1 .It follows that the order of ρ * CH(X) is precisely 2 r+1 and the non-zero elements we have found generate the group ρ * CH(X).
The integral version of Theorem 8.1 is given by Corollary 8.2.For X as in Theorem 8.1, let be the integral Rost projector on X.Then for every i with 0 ≤ i ≤ n, the group * CH i (X) is a cyclic group generated by * (h n−i ).Moreover, the element * (h n−i ) is • 0, if i + 1 is not a power of 2; • of order 2, if i + 1 is a power of 2 and i ∈ {0, n}; • of the infinite order, if i ∈ {0, n}.
Proof.The statements on CH 0 (X) and on CH 0 (X) are clear.The rest follows from Theorem 8.1, if we show that 2 * CH i (X) = 0 for every i with 0 < i < n.Let L/F be a quadratic extension such that X L is isotropic.Then ( L ) * CH i (X L ) = 0 for such i by [7, cor. 4.2].Therefore, by the transfer argument, 2 * CH i (X) = 0.
Remark 8.3.The result of Corollary 8.2 was announces in [11].A proof has never appeared.
Remark 8 . 4 .
Clearly, Corollary 8.2 describes the Chow group of the Rost motive of an anisotropic Pfister form.Since the motive of any anisotropic excellent quadric is a direct sum of twists of such Rost motives (Corollary 7.2), we have computed the Chow group of an arbitrary anisotropic excellent projective quadric.Note that the answer depends only on the dimension of the quadric.
is a Rost projector.Moreover, this is the unique Rost projector on X ([7, lemma 4.1]).Rost correspondence ρ ∈ CH n (X × X) is a correspondence which can be represented by an integral Rost correspondence.A modulo 2 Rost projector is an idempotent modulo 2 Rost correspondence. | 5,815.6 | 2002-01-01T00:00:00.000 | [
"Mathematics"
] |
Mean distribution approach to spin and gauge theories
We formulate self-consistency equations for the distribution of links in spin models and of plaquettes in gauge theories. This improves upon known mean-field, mean-link, and mean-plaquette approximations in such that we self-consistently determine all moments of the considered variable instead of just the first. We give examples in both Abelian and non-Abelian cases.
It is always of interest to think about methods that allow easy extraction of approximate results, even though the computer power available for exact simulations is growing at an ever increasing pace. Mean-field methods are often qualitatively reliable in their self-consistent determination of the long-distance physics, and have a wide range of applications, with spin models as typical examples. For a gauge theory, formulated in terms of the gauge links, however, it is questionable what a mean link would mean, because of the local nature of the symmetry. This can be addressed by fixing the gauge, but the mean-field solution will then in general depend on the gauge-fixing parameter. Nevertheless, Drouffe and Zuber developed techniques for a mean field treatment of general Lattice Gauge Theories in [1] and showed that for fixed βd, where β is the inverse gauge coupling and d the dimension, the mean-field approximation can be considered the first term in a 1/d expansion. They established that the mean field approximation can be thought of as a resummation of the weak coupling expansion in a particular gauge and that there is a first order transition to a strong coupling phase at a critical value of β. Since it becomes exact in the d → ∞ limit, this mean field approximation can be used with some confidence in high-dimensional models [2].
The crucial problem of gauge invariance was tackled and solved by Batrouni in a series of papers [3,4], where he first changed variables from gauge-variant links to gauge-invariant plaquettes. The associated Jacobian is a product of lattice Bianchi identities, which enforce that the product of the plaquette variables around an elementary cube is the identity element. In the Abelian case this is easily understood, since each link occurs twice (in opposite directions) and cancels in this product, leaving the identity element. In the non-Abelian case the plaquettes in each cube have to be parallel transported to a common reference point in order for the cancellation to work. It is worth noting that in two dimensions there are no cubes so the Jacobian of the transformation is trivial and the new degrees of freedom completely decouple (up to global constraints).
This kind of change of variables can be performed for any gauge or spin model whose variables are elements of some group. Apart from gauge theories, examples include Z N -spin models, O(2)-and O(4)-spin models and matrix-valued spin models. In spin models, the change of variables is from spins to links and the Bianchi constraint dictates that the product of the links around an elementary plaquette is the identity element. A visualization of the transformation and the Bianchi constraint for a 2d spin model is given in Fig. 1.
first choose a set of live variables, which keep their original dynamics and interact with an external bath of meanvalued fields. Interactions are generated through the Jacobian, which is a product of Bianchi identities represented by δ-functions where P denotes a plaquette and ∂C denotes the oriented boundary of the elementary cube C. The δ-functions can be represented by a character expansion in which we can replace the characters at the external sites by their expectation, or mean, values. Upon truncating the number of representations, this yields a closed set of equations in the expectation values which can be solved numerically. The method can be systematically improved by increasing the number of representations used and the size of the live domain. While this method works surprisingly well, even at low truncation, it determines the expectation value of the plaquette in only a few representations. Here, we propose a method that self-consistently determines the complete distribution of the plaquettes (or links) and thus the expectation value in all representations. This is due to an exact treatment of the lattice Bianchi identities which does not rely on a character expansion. The only approximation then lies in the size of the live domain which can be systematically enlarged, as in any mean field method. It is worth noting that our method works best for small β and low dimensions: it does not become exact in the infinite dimension limit. In this way it can be seen as complementary to the mean field approach of [1]. We will also see that the mean distribution approach proposed here actually works rather well for both small and large β.
The paper is organized as follows. In section II we describe the method in general terms and compare it to the mean field, mean link and mean plaquette methods before describing more detailed treatments of spin models and gauge theories in sections III and IV respectively. Finally, we draw conclusions in section V.
A. Mean Field Theory
Let us for completeness give a very brief reminder of standard mean field theory. Consider for definiteness a lattice model with a single type of variables s which live on the lattice sites. The lattice action is assumed to be translation invariant and of the form where i, j labels the lattice sites and V (s) is some local potential. Let us now split the original lattice into a live domain D and an external bath D c . The variables {s i | i ∈ D c } all take a constant "mean" value s. The mean field action then becomes (up to a constant) where s is determined by the self-consistency condition that the average value of s in the domain D is equal to the average value in the external bath, Once s has been determined the mean field action (3) can be used to measure other observables local to the domain D.
B. Mean Distribution Theory
To generalize the mean field approach we relax the condition that the fields at the live sites interact only with the mean value of the external bath. Instead, the fields in the external bath are allowed to vary and take different values distributed according to a mean distribution. The self-consistency condition is thus that the distribution of the variables in the live domain equals the distribution in the bath. Consider a real scalar theory for illustration purposes. Starting from the action with nearest neighbor coupling κ and a general on-site potential V , we expand the field φ ≡ δφ +φ around its mean value φ and integrate out all the fields except the field at the origin φ 0 = φ+ δφ 0 and its nearest neighbors, denoted φ i , i = 1, . . . , z, where z is the coordination number of the lattice. The partition function can then be written where p J (δφ 1 , . . . , δφ z ) is a joint distribution function for the fields around the origin and absorbs everything not explicitly depending of δφ 0 into its normalization. So far everything is exact and, given a way to compute p J , we could obtain all local observables, for example φ n 0 . Now, p J is in general not known, so we will have to make some ansatz and determine the best distribution compatible with this ansatz. In standard mean field theory the ansatz is p J (δφ 1 , . . . , δφ z ) = z i=1 δ(δφ i ) and only φ is left to be determined as explained above. In the mean distribution approach we will assume that the distribution is a product distribution p J (δφ 1 , . . . , δφ z ) = z i=1 p(δφ i ) and determine p self-consistently to be equal to the distribution of δφ 0 , i.e. where The mean value φ has to be adjusted such that the distribution p has zero mean. After p and φ have been determined any observable, even observables extending outside the live domain, can be extracted under the assumption that every plaquette is distributed according to p. Local observables are given by simple expectation values with respect to the distribution p. This strategy can also be applied to spin and gauge models, taking as variables the links and plaquettes respectively, as discussed in the introduction. For a gauge theory, the starting point is the partition function in the plaquette formulation where S[U p ] is any action which is a sum over the individual plaquettes, for example the Wilson action S[U P ] = β P (1 − ReTrU P ), or a topological action [5,6] where the action is constant but the traces of the plaquette variables are limited to a compact region around the identity. The difference to the mean plaquette method is that it is not assumed that the external plaquettes take some average value, but rather that they are distributed according to a mean distribution. More specifically, we assume that there exists a mean distribution for the real part of the trace of the plaquettes and that the other degrees of freedom are uniformly distributed with respect to the Haar measure. Such a distribution must exist and it can be measured for example by Monte Carlo simulations. For definiteness let us consider compact U (1) gauge theory with a single plaquette P 0 as the live domain. The plaquette variables U P = e iθ P ∈ U (1) can be represented with a single real parameter θ P ∈ [0, 2π] and the real part of the trace is cos θ P . Our goal is to obtain an approximation to the distribution p (cos θ P0 ), or equivalently p To obtain a finite number of integrals we now make the approximation that all plaquettes which do not share a cube with P 0 are independently distributed according to some distribution p(θ). Clearly this neglects some correlations among the plaquettes but this can be improved by taking a larger live domain. Again, let C denote an elementary cube with boundary ∂C and P denote a plaquette. We define i.e. C 0 is the set of all cubes containing P 0 , and P C is the set of plaquettes, excluding P 0 , making up C 0 . The sought distribution is then determined by the self-consistency equation This self-consistency equation is solved by iterative substitution: given an initial guess for the distribution p (0) (θ P0 ), it is a straightforward task to integrate out the external plaquettes and obtain the next iterate p (1) (θ P0 ) from eq. (14), and to iterate the procedure until a fixed point is reached, i.e. p (n+1) (θ P0 ) = p (n) (θ P0 ). This is a functional equation, which is solved numerically by replacing the distribution p by a set of values on a fine grid in θ P or by a truncated expansion in a functional basis. In this paper we have chosen to discretize the distribution on a grid. As mentioned above, this can be done in a completely analogous way also for spin models and for different types of actions. In
III. SPIN MODELS
We will start by applying the method to a few spin models, namely Z 2 , Z 4 and the U(1) symmetric XY -model and we will explain the procedure as we go along. Afterwards, only minor adjustments are needed in order to treat gauge theories. We will derive the self-consistency equations in an unspecified number of dimensions although graphical illustrations will be given in two dimensions for obvious reasons.
Let us start with an Abelian spin model with a global Z N symmetry. The partition function is given by where s i = e i 2π N ni , n i ∈ {1, · · · , N }(∈ Z N ). In the usual mean field approach we would self-consistently determine the mean value of s i by letting one or more live sites fluctuate in an external bath of mean valued spins. However, Batrouni [3,7] noticed that by self-consistently determining the mean value of the links, or internal energy, U ij ≡ s i s † j , much better estimates of for example the critical temperature could be obtained for a given live domain. Thus, we first change variables from spins to links. The Jacobian of this change of variables is a product of lattice Bianchi identities, δ (U P − 1), one for each plaquette [8]. This can be verified by introducing the link variables U ij via dU ij δ U ij s j s † i − 1 and integrating out the spins in a pedestrian manner. Since the Boltzmann weight factorizes over the link variables, all link interactions are induced by the Bianchi identities and hence the transformation trivially solves the one dimensional spin chain where there are no plaquettes [9].
As mentioned above, each δ-function can be represented by a sum over the characters of all the irreducible representations of the group. For Z N this is merely a geometric series, δ (U P − 1) = 1 N N −1 n=0 U n P . Since only the real part enters in the action it is convenient to reshuffle the sum so that we sum only over real combinations of the variables, where δ N even is 1 if N is even and 0 otherwise. The next step is to choose a domain of live links. In this step, imagination is the limiting factor; for a given number of live links there can be many different choices and it is not known to us if there is a way to decide which is the optimal one. The simplest choice is of course to keep only one link alive but in our 2d examples we will make use also of a nine-link domain [7] to see how the results improve with larger domains. These two domains are shown in the left (one link) and right (nine links) panels of Fig. 3. In the case of a single live link, there are 2(d − 1) plaquettes and thus there are 2(d − 1) δ-functions of the type in eq. (16).
A. Mean link approach
Let us for simplicity consider the case of one live link, denoted U 0 . The external links, denoted U k by some enumeration ij → k, are fixed to the mean value by demanding that U n k = U −n k = U n , ∀k = 0. Each plaquette containing the live link also contains three external links, and the δ-function eq. (16) becomes For large N it is best to perform the sum analytically to obtain (for N = 2M ) For U(1) we define πn0 M = θ 0 as M → ∞ and since U < 1 we get which can efficiently be dealt with by numerical integration. The partition functions for the single live link for Z 2 , Z 4 and U (1) [10] spin models then become .
B. Mean distribution approach
In the mean distribution approach we sum over the external links assuming they each obey a mean distribution p(U ), for which a one-to-one mapping to the set of moments { U n } exists. The difference between the two methods becomes apparent when expressed in terms of the moments, which are obtained by integrating the distributions of the external links against the δ-function given by the Bianchi constraint in eq. (16) Comparing to eq (17), we see that for N ≤ 3 there is only one moment and the two methods are thus equivalent, but for larger N the mean link approach makes the approximation U n = U n whereas the mean distribution approach treats all moments correctly.
Thus, for small N we do not expect much difference between the two approaches, and this is indeed confirmed by explicit calculations. For U (1), however, there are infinitely many moments which are treated incorrectly by the mean link approach and this renders the mean distribution approach conceptually more appealing.
By using the Bianchi identities, one link per plaquette can be integrated out, giving It is often convenient not to work solely with distributions of single links, but also of multiple links, which are defined in the obvious way, and can efficiently be calculated recursively. The above partition function then simplifies slightly to .
In Figs. 4 and 5 we show results for 2d Z 2 , Z 4 and U (1) spin models, the latter for the Wilson action S = β ij Re s i s † j and the topological action e S = ij Θ (δ − |θ i − θ j |). Note the remarkable accuracy of the mean distribution approach in the latter case, even when there is only one live link.
IV. GAUGE THEORIES
To extend the formalism from spin models to gauge theories, we merely have to change from links and plaquettes to plaquettes and cubes. The partition function for a U (1) gauge theory analogous to eq.(22) becomes in the mean plaquette approach and in the mean distribution approach. Results for d = 4 are shown in Fig. 6 for the Wilson action (left panel ) and for the topological action (right panel ). Another nice feature of the mean distribution approach is that other observables become available, like for instance the monopole density in the U (1) gauge theory, under the assumption that each plaquette is distributed according to the mean distribution p. A cube is said to contain q monopoles if the sum of its outward oriented plaquette angles sums up to 2πq. Given the distribution p(θ) of plaquette angles the (unnormalized) probability p q of finding q monopoles in a cube is given by and the monopole density n monop is given by In Fig. 7 we show the monopole densities for 4d U (1) gauge theory as obtained by Monte Carlo simulations and by the mean distribution approach. Note that the monopole extends outside of the domain of a single live plaquette, which was used to determine the mean distribution p. The left panel shows results for the Wilson action and in the right panel the topological action is used. We can also treat SU (2) Yang-Mills theory without much difficulty. For the mean plaquette approach we need the character expansion of the δ-function where θ C is related to the trace of the cube matrix U C through TrU C = 2 cos θ C .
In the mean plaquette approach we again make the substitution U C → U 0 U 5 in the case of a single live plaquette. The above delta function then becomes For SU (2), the analogue of a restriction δ on the plaquette angle is a restriction on the trace of the plaquette matrix to the domain [2α, 2], where −1 ≤ α < 1. If we define a 0 ≡ 1 2 Tr U 0 = cos θ 0 the approximate SU (2) partition function can be written [11] in a way very similar to the U (1) partition function (27) from which U can be easily obtained as a function of α and β.
The mean distribution approach works in a completely analogous way as for U (1), but let us go through the details anyway, since there are now extra angular variables to be integrated out. The starting point is again an elementary cube on the lattice. Five of the cubes faces have their trace distributed according to the distribution p(a 0 ) and we want to calculate the distribution of the sixth face compatible with the Bianchi identity U C = 1. In other words, taking U 6 as the live plaquette, we want to evaluatẽ where we have decomposed U 6 = Ω 6Û6 Ω † 6 withÛ 6 a diagonal SU (2) matrix with trace 2a 0,6 , i.e. Ω 6 is the angular part of U 6 . The choice to include the measure factor 1 − a 2 0 in the distribution is arbitrary but convenient. To facilitate the calculation we recursively combine the product of four of the plaquette matrices into one matrix, U 1 U 2 U 3 U 4 →Ũ , by pairwise convolution of distributions (with p 1 (a 0 ) ≡ p(a 0 )) where α 1 ≡ α, α 2i = max(2α i − 1, −1) and χ A is the characteristic function on the domain A. The domain of integration in the (a 0,1 , a 0,2 )-plane is simply connected with parametrizable boundaries and comes from the condition that the argument of the delta function has a zero for some cos θ 12 ∈ [−1, 1]. We then obtain for the sought distributionp (a 0,6 ) ∝ dΩ 6 dU 5 p(a 0,5 ) where it is now easy to integrate outŨ = U † 6 U † 5 . If we denote by θ 56 the angle between U 5 and U 6 , the angular integral over Ω 6 contributes just a multiplicative constant and we obtaiñ p(a 0,6 ) ∝ da 0,5 d cos θ 56 p(a 0,5 ) p 4 a 0,5 a 0,6 − 1 − a 2 0,5 1 − a 2 0,6 cos θ 56 a 0,5 a 0,6 − 1 − a 2 0,5 1 − a 2 0,6 cos θ 56 , which can be evaluated numerically in a straightforward manner. In the end, since there are 2(d − 2) cubes sharing the plaquette P 0 , and since the a priori probability for P 0 to have trace 2a 0 is 1 − a 2 0 e βa0 , with respect to the uniform measure, we obtain for one live plaquette which also defines the functional self-consistency equation for p(a 0 ). Results for the Wilson and topological actions can be seen in Fig. 8 in the left and right panels, respectively [12]. For SU (3) one can proceed in an analogous manner, only the angular integrals are now more involved and the trace of the plaquette depends on two diagonal generators so the resulting distribution function needs to be two dimensional.
V. CONCLUSIONS
It has been shown before [7] that determining a self-consistent mean-link gives a much better approximation than the traditional mean-field. Furthermore, the symmetry-invariant mean link can be generalized to a mean plaquette in gauge theories [3]. Here, we have shown that the approximation can be further improved by determining the self-consistent mean distribution of links or plaquettes. The extension from a self-consistent determination of the symmetry invariant mean link or plaquette to a self-consistent determination of the entire distribution of links and plaquettes is shown to improve upon the results obtained by Batrouni in his seminal work [3,4]. Especially appealing is the fact that the mean distribution approach yields a non-trivial result for the whole range of couplings and not just in the strong coupling regime, which is sometimes the case for the mean link/plaquette approach, or just in the weak coupling regime which is accessible to the mean field treatment of [1]. Indeed, the mean distribution approach gives a nearly correct answer when the correlation length is not too large, and by enlarging the live domain the exact result is approached systematically for any value of the coupling. As the domain of live variables is enlarged, the mean link/plaquette and the mean distribution results tend to approach each other but since determining the full mean distribution does not require much additional computer time it should always be desirable to do so.
Furthermore, another appealing feature of the mean distribution approach is that once the distribution has been self-consistently determined, other local observables, like the vortex or monopole densities become readily available. Finally, the whole approach applies to non-Abelian models as well. | 5,628 | 2016-01-06T00:00:00.000 | [
"Mathematics"
] |
Accumulation of Extracellular Hyaluronan by Hyaluronan Synthase 3 Promotes Tumor Growth and Modulates the Pancreatic Cancer Microenvironment
Extensive accumulation of the glycosaminoglycan hyaluronan is found in pancreatic cancer. The role of hyaluronan synthases 2 and 3 (HAS2, 3) was investigated in pancreatic cancer growth and the tumor microenvironment. Overexpression of HAS3 increased hyaluronan synthesis in BxPC-3 pancreatic cancer cells. In vivo, overexpression of HAS3 led to faster growing xenograft tumors with abundant extracellular hyaluronan accumulation. Treatment with pegylated human recombinant hyaluronidase (PEGPH20) removed extracellular hyaluronan and dramatically decreased the growth rate of BxPC-3 HAS3 tumors compared to parental tumors. PEGPH20 had a weaker effect on HAS2-overexpressing tumors which grew more slowly and contained both extracellular and intracellular hyaluronan. Accumulation of hyaluronan was associated with loss of plasma membrane E-cadherin and accumulation of cytoplasmic β-catenin, suggesting disruption of adherens junctions. PEGPH20 decreased the amount of nuclear hypoxia-related proteins and induced translocation of E-cadherin and β-catenin to the plasma membrane. Translocation of E-cadherin was also seen in tumors from a transgenic mouse model of pancreatic cancer and in a human non-small cell lung cancer sample from a patient treated with PEGPH20. In conclusion, hyaluronan accumulation by HAS3 favors pancreatic cancer growth, at least in part by decreasing epithelial cell adhesion, and PEGPH20 inhibits these changes and suppresses tumor growth.
Introduction
Pancreatic ductal adenocarcinoma is the fourth-leading cause of cancer-related deaths in the United States with a median 5year survival rate of 6% [1]. Pancreatic cancer is characterized by a desmoplastic response involving stromal fibroblasts, inflammatory cells, and pathological deposition of altered extracellular matrix [2,3] that contains high levels of fibrous collagen, proteoglycans, and glycosaminoglycans including hyaluronan (HA, hyaluronic acid) [3]. Hyaluronan is a negatively charged linear glycosaminoglycan composed of repeating disaccharide units of N-acetylglucosamine and Dglucuronic acid. Excess hyaluronan is found in several tumor types including pancreatic, breast, and prostate cancers [4][5][6][7] and its accumulation has been shown to be a factor associated with poor prognosis for cancer patients [5,7,8]. Hyaluronan accumulation leads to increased tumor interstitial fluid pressure and poor perfusion [6,9] explained by continuous tumor cell secretion of hyaluronan and its substantial absorption of water molecules (∼15 per disaccharide) [10]. Elevated 2 BioMed Research International tumor interstitial fluid pressure and poor perfusion can be normalized by hyaluronan depletion from the tumor [6,9]. In addition, hypoxia, often found in advanced solid tumors and characterized by upregulation of hypoxia-inducible factor-1 (HIF-1 ) [11], has been associated with increased hyaluronan accumulation [11,12].
The hyaluronan molecule, composed of 2,000-25,000 disaccharides, with molecular weight of 1-10 million Da, is synthesized at the plasma membrane by three transmembrane synthases (HAS1-3) and is simultaneously extruded to extracellular space. Hyaluronan interacts with several binding proteins and can be incorporated into the extracellular matrix or bound to its cell surface receptors. Hyaluronan binding to its best-characterized receptor, CD44, induces activation of the PI3K-Akt and MAP kinase pathways and promotes tumor cell proliferation, invasion, and chemoresistance [13,14]. Accumulation of intracellular hyaluronan is also found in several cell types [15][16][17] where it has been suggested to be a result of hyaluronan endocytosis and degradation [18] or activation of intracellular hyaluronan synthesis [17,19]. Hyaluronan is degraded mainly by the hyaluronidase-1 (HYAL1) and hyaluronidase-2 (HYAL2) enzymes [20]. In addition to hyaluronidases, exoglycosidases and reactive oxygen species are known to participate in the degradation of high molecular weight hyaluronan to smaller fragments [21]. Recently, KIAA1199 has been suggested to be a hyaluronan binding protein and may be involved in hyaluronan degradation that is independent of CD44 and HYAL enzymes [22].
Hyaluronan synthases share 57-71% identity at the amino acid level [23] but have different expression patterns, differ in their enzymatic properties, and are differentially regulated [24]. Overexpression of any of the HAS genes in COS-1 and rat fibroblasts leads to the formation of a pericellular hyaluronan coat, where HAS1 produces smaller coats compared to HAS2 or HAS3 [25,26]. The HAS2 isoform has been most studied and is the only isoform required for embryonic development [27]. HAS2 is believed to be the main HAS in epithelial cells and has been reported to mediate epithelial-mesenchymal transition (EMT) [28,29]. Overexpression of HAS3 leads to the formation of long plasma membrane protrusions [30] which have recently been associated with the increased release of hyaluronancoated and plasma membrane-derived microvesicles [31]. Moreover, overexpression of HAS3 has been reported to induce misorientation of the mitotic spindle and disturbed organization of epithelium [32], which is associated with malignancies [33]. Recently, HASs have also been suggested to form homodimers and heterodimers which may affect their function and regulate their activity [24,34].
In human pancreatic cancer, 87% of tumors contain high levels of hyaluronan [4][5][6]. Similarly, extensive hyaluronan deposition is also found in a transgenic mouse model (LSL-Kras G12D/+ ; LSLTrp53 R172H/+ ; and Pdx-1-Cre (KPC)) of pancreatic adenocarcinoma [4,6]. The importance of hyaluronan in the pancreatic cancer microenvironment is further supported by the finding that pancreatic cancer cells encapsulated within hyaluronan gel produce faster growing tumors and are more metastatic than cancer cells with no hyaluronan in a mouse model [35]. Suppression of hyaluronan synthesis by downregulation of HASs has been previously shown to inhibit the growth of implanted breast, prostate, squamous cell carcinoma, and osteosarcoma tumors [15,[36][37][38]. Similarly, hyaluronan synthesis inhibitor 4-methylumbelliferone or its derivatives suppress metastasis of several tumor types [38][39][40][41].
In agreement with previous findings, enzymatic removal of hyaluronan by pegylated human recombinant hyaluronidase (PEGPH20) leads to suppression of tumor growth and metastasis and enhanced delivery of chemotherapy in hyaluronan-rich tumor models of prostate, lung, and pancreatic cancer [5,9,42]. In the KPC mouse model of pancreatic adenocarcinoma that closely resembles human disease, PEGPH20 suppressed tumor growth, increased drug delivery, and increased overall survival when used in combination with gemcitabine compared to gemcitabine monotherapy [4,6]. Increased drug delivery of gemcitabine was associated with stromal remodeling, reduction of tumor interstitial fluid pressure, expansion of intratumoral blood vessels, and ultrastructural changes in tumor endothelium, characterized as formation of fenestrae in tumor endothelium [4,6].
Over the years, HAS2 has been the focus of most research in this area and has been widely associated with malignant transformation and aggressive tumor growth [28,29,43]. However, elevated HAS3 protein levels have also been associated with ovarian cancer [44], and overexpression of HAS3 promotes tumor growth in a preclinical model [45]. Regulation and possible differential mechanisms of HAS2-and HAS3-mediated tumor growth are not completely understood. To date there are no reports comparing the roles of HAS2 and HAS3 in pancreatic cancer. In this study, we explored the biological consequences of HAS2 and HAS3 overexpression in BxPC-3 pancreatic cancer cells and in xenograft tumor models. HAS3 overexpression led to increased accumulation of extracellular hyaluronan that was associated with faster tumor growth and enhanced response to PEGPH20. Deposition of extracellular hyaluronan was associated with loss of adhesion proteins from the plasma membrane that was inhibited by hyaluronan depletion. These results are further supported by the finding that more plasma membrane E-cadherin was observed in KPC tumors as well as in a human non-small cell lung cancer (NSCLC) patient biopsy after PEGPH20 therapy.
Hyaluronidase-Sensitive Particle Exclusion Assay.
To visualize aggrecan-mediated hyaluronan pericellular matrices in vitro, particle exclusion assays were performed as previously described [9,42], with some modifications. Subconfluent cultures were treated with 1 mg/mL bovine nasal septum proteoglycan (Elastin Products, Owensville, MO) for 1 h at 37 ∘ C, followed by incubation with vehicle or 1,000 U/mL recombinant human PH20 (rHuPH20, Halozyme Therapeutics, San Diego, CA) as a negative control for 2 h at 37 ∘ C. Glutaraldehyde-fixed mouse red blood cells were added to the cultures, which were then imaged with a phasecontrast microscope coupled with a camera scanner and SPOT advanced imaging program (Version 4.6, Diagnostic Instruments, Sterling Heights, MI). Particle exclusion area and cell area were measured and relative hyaluronan coat area was calculated using the formula: matrix area-cell area (expressed as m 2 ).
Hyaluronan Assay.
To analyze hyaluronan secretion, tissue culture supernatants were collected from subconfluent cultures after 24 h culture, and cells were trypsinized and counted for normalization. Hyaluronan concentration in the samples was quantified using an enzyme-linked hyaluronanbinding protein sandwich assay (Cat# DY3614, R&D Systems, Minneapolis, MN) according to manufacturer's instructions [42].
Isolation of Secreted, Cell Surface, and Intracellular
Hyaluronan. Isolation of secreted, cell surface, and intracellular hyaluronan was performed as previously described [16], with some modifications. The conditioned media containing secreted hyaluronan were collected from subconfluent cultures after 48 h of culture. Cells were detached and collected by centrifugation and supernatant was transferred to a clean tube. Cell pellets were rinsed with 1 × PBS and centrifuged, and supernatant was combined with the supernatant from the previous centrifugation and was designated as "cell surface hyaluronan. " Combined supernatants were incubated briefly at 100 ∘ C to inactivate trypsin. The cell pellet was rinsed, the supernatant was discarded, and the cell pellet was designated as "intracellular hyaluronan. " To ensure that no residual cell surface hyaluronan was present, some of the cell fractions were treated with 1,000 U/mL rHuPH20 for 20 min and rinsed with ice-cold 1 × PBS followed by centrifugation and collection of the cell pellet. All the samples were digested with Proteinase K at +55 ∘ C overnight, followed by heat inactivation at 95 ∘ C for 10 min and centrifugation at 12,000 rpm at 4 ∘ C for 10 min.
Pancreatic Cancer Xenograft Models.
Six-to eight-weekold nu/nu (Ncr) athymic mice, handled in accordance with approved Institutional Animal Care and Use Committee protocols, were used for xenograft studies. Mice were inoculated with 0.05 mL of 5 × 10 6 BxPC-3, BxPC-3 vector, BxPC-3 HAS2, or BxPC-3 HAS3 cells (concentration 1 × 10 8 cells/mL) peritibially in the hind leg (adjacent to the tibia periosteum). Tumor growth was monitored by ultrasound imaging (Visu-alSonics Vevo 770/2100 High Resolution Imaging System (VisualSonics, Ontario, Canada)) until the average tumor size reached 200-500 mm 3 . The animals were then divided into treatment groups and treated intravenously (i.v.) twice a week for 2-3 weeks with vehicle, 37.5 g/kg, 1,000 g/kg, or 4,500 g/kg PEGPH20. At the end of the study, tumors were collected and divided into parts; one part was fixed in 10% formalin for histology, and the other parts were frozen for later biochemical analysis. Some additional tumorbearing mice with large tumors (∼2,000 mm 3 ) received two injections of vehicle or 4,500 g/kg PEGPH20, and tumors were harvested in 10% formalin 6 h after treatment.
KPC Mouse Model.
Mouse pancreatic cancer tissue sections from KPC mice [46] were obtained from the Jacobetz et al. 2013 study [4]. As described previously, mice were treated i.v. with a single injection of 4,500 g/kg PEGPH20 once tumor volume reached 270 mm 3 , and tumors were collected 1, 8, 24, and 72 h after dosing [4].
Human Patient Samples.
Pretreatment and posttreatment biopsies were obtained from a patient with advanced NSCLC who was enrolled in a Phase 1 study of PEGPH20 given i.v. to patients with advanced solid tumors (NCT01170897). The protocol was approved by Institutional Review Board and all patients consented to study. The patient was treated with a single dose of 5.0 g/kg PEGPH20. Posttreatment biopsy was taken 7 days after i.v. treatment.
Tissue Samples.
Tissues were fixed in 10% neutral buffered formalin and processed to paraffin. Five micrometer sections were used for Hematoxylin and Eosin (H&E) staining that was performed according to a standard protocol. For hyaluronan assay, fresh frozen tumor pieces were digested with Proteinase K at +55 ∘ C overnight, then heat-inactivated at 95 ∘ C for 10 min, and centrifuged at 12,000 rpm at 4 ∘ C for 10 min.
2.11. Hyaluronan Staining. Hyaluronan in tumor tissues was localized as previously described [42], with some modifications. Briefly, five micrometer sections were deparaffinized and rehydrated, and endogenous peroxidase was blocked with Peroxo-Block solution (Invitrogen). Nonspecific staining was blocked using 2% BSA (Jackson ImmunoResearch, West Grove, PA) and 2% normal goat serum (Vector) for 1 h, followed by blocking of endogenous avidin and biotin (Avidin/Biotin Blocking Kit, Invitrogen). Hyaluronan was detected by incubating sections with 2.5 g/mL bHABP (Seikagaku, Tokyo, Japan) for 1 h at 37 ∘ C. Signal was amplified by incubation with streptavidin-horseradish peroxidase solution (HRP; BD Biosciences) and detected with 3,3 -diaminobenzidine (DAB, Dako North America, Carpinteria, CA). Sections were then counterstained in Gill's hematoxylin (Vector) and mounted in Cytoseal 60 medium (American MasterTech, Lodi, CA). Specificity of the staining was assessed by incubation of a section of each sample with rHuPH20 (12,000 U/mL) in PIPES buffer (25 mM PIPES, 70 mM NaCl, 0.1% BSA, pH 5.5) at 37 ∘ C for 2 h prior to incubation with bHABP.
2.14. Quantification of Staining. Stained sections were scanned at Flagship Biosciences LLC (Boulder, CO) and automated quantification of CC3 staining was performed using Aperio Positive Pixel algorithm v9 (Aperio, Buffalo Grove, IL) and results were normalized to the total number of pixels. Quantification of PH3 stainings was performed by Aperio Imagescope using Nuclear v9 algorithm.
Statistical Analysis.
Values are presented as mean ± S.D. or SEM. Results expressed as ratios or percent values were log transformed prior to the analysis. Statistical difference was analyzed by test, one-way ANOVA, and Tukey's post hoc test or two-way ANOVA with Bonferroni's post hoctest using Graphpad Prism 5 software (Graphpad, La Jolla, CA). Statistical difference was set at < 0.05 ( * < 0.05; * * < 0.01; and * * * < 0.001).
HAS3 Overexpression Is Associated with Increased Levels of Hyaluronan Production.
Extensive hyaluronan accumulation is found in pancreatic cancer [4,6], which is characterized by massive desmoplastic stroma, high interstitial tumor pressure, poor perfusion, and resistance to therapy [47]. Hyaluronan is synthesized by three HAS enzymes at the plasma membrane, and HAS overexpression has been associated with cancer growth [43,45,48]. In this study, we used BxPC-3 pancreatic cancer cells that overexpress HAS3 to assess the roles of this HAS isoform on pancreatic cell lines and tumors.
Human BxPC-3 pancreatic adenocarcinoma cells were engineered to overexpress HAS3, and functional consequences of hyaluronan production by HAS3 were analyzed by hyaluronan secretion and size of pericellular hyaluronan matrix. BxPC-3 cells secreted 181 ng hyaluronan to culture medium per 10,000 cells over 24 h, while hyaluronan secretion of BxPC-3 HAS3 cells was 20-fold higher, at 3,607 ng per 10,000 cells (Table 1). Similarly, relative hyaluronan matrix area was increased by 2.4-fold after HAS3 overexpression ( Table 1). Overexpression of HAS2 and HAS3 has been reported to lead to increased hyaluronan secretion and increased size of hyaluronan coat in several cell lines [25,26]. Thus, we generated and tested HAS2-overexpressing BxPC-3 cells and found that similar amounts of hyaluronan were secreted by BxPC-3 HAS2 and BxPC-3 HAS3 cells (Table 1), resulting in similar hyaluronan coat sizes. The size of hyaluronan coat on BxPC-3 cells transduced with an empty vector was also analyzed, and coat size did not significantly differ from that on parental cells (data not shown).
To further characterize BxPC-3 HAS3 cells, the quantity of extracellular and intracellular hyaluronan was analyzed. In line with earlier results (Table 1), the amount of extracellular hyaluronan, composed of secreted and cell surface hyaluronan, was 9.9-fold higher in BxPC-3 HAS3 cells compared to BxPC-3 cells (Figure 1(a)). Overexpression of HAS3 also induced a slight 2.8-fold increase in intracellular hyaluronan content (Figure 1(b)). HAS2-overexpressing cells contained a similar amount of extracellular hyaluronan as HAS3-overexpressing cells (Figure 1(a)), but their intracellular hyaluronan level was 35-fold higher than in BxPC-3 cells (Figure 1(b)). Intracellular hyaluronan was also visualized in cultures after fixation and digestion of cell surface hyaluronan. CD44 was stained in the cultures to visualize plasma membranes of the cells. Parental or HAS3-overexpressing cells contained very little intracellular hyaluronan (Figures 1(c) and 1(e)) while HAS2-overexpressing cells showed intensive accumulation of intracellular hyaluronan (Figure 1(d)). These data are consistent with previous reports that HAS2 expression has been associated with the presence of intracellular hyaluronan, induced by epidermal growth factor, keratinocyte growth factor, or hyperglycemia [16,19,49].
The results are consistent with previous reports showing that PEGPH20 can efficiently remove hyaluronan in the tumors [4,6,9,42]. Hyaluronan staining of the PEGPH20treated tumors revealed that PEGPH20 removed most of the extracellular hyaluronan (Figures 2(j)-2(l)) and reduced the amount of stroma (Figures 2(d)-2(f)), although intracellular hyaluronan in cancer cells was still apparent (Figures 2(j)-2(l)). After treatment with PEGPH20, intracellular hyaluronan was most prominent in HAS2 tumors (Figure 2(k)), in agreement with in vitro observation of intracellular accumulation of hyaluronan in HAS2-overexpressing cells (Figures 1(b) and 1(d)). The significance and origin of intracellular hyaluronan accumulation in HAS2-overexpressing pancreatic cancer cells and tumors is a subject requiring further investigation. Coexpression of HAS2 with CD44 and HYAL2 has been reported and could potentially lead to the cleavage of hyaluronan at the plasma membrane and CD44-mediated internalization and accumulation of intracellular hyaluronan [36,50,51]. Intracellular hyaluronan synthesis has also been reported in hyperglycemia and inflammatory conditions [17,19]. These results also confirm that PEGPH20 only degrades hyaluronan within the extracellular space, probably due to lack of a mechanism to enter the cells.
HAS3 Overexpression in BxPC-3 Cells Is Associated with Faster In Vivo Growth and Enhanced Tumor Growth Inhibition.
HAS2 has been widely shown to be associated with cancer [43,48]; however, much less is known about the role of * * * * * HAS3 in tumor progression. To compare the in vivo growth rate and PEGPH20 response of BxPC-3 and BxPC-3 HAS3 cells, both cells were inoculated peritibially in the hind limb of nude mice to generate tumors, and the mice were dosed i.v. twice a week with vehicle or 4,500 g/kg PEGPH20. BxPC-3 HAS3 cells generated faster growing tumors than parental cells (Figures 3(a) and 3(c)), which correlates with the amount of extracellular hyaluronan in the tumor stroma (Figures 2(g) and 2(i)). The average size of BxPC-3 HAS3 tumors was 1,905 mm 3 on day 15 after initiation of vehicle treatment while the average size of BxPC-3 tumors was 820 mm 3 (Figures 3(a) and 3(c)). This is consistent with the report that HAS3 overexpression enhances extracellular matrix deposition and tumor growth of prostate cancer cells [45] and that metastatic colon cancer cells have upregulated levels of HAS3 [52]. HAS3 protein level is higher in human ovarian cancer than in normal ovary, and the amount of HAS3-positive cancer cells correlates with hyaluronan accumulation in the stroma [44]. Overexpression of HAS2 also increased the tumor growth in a BxPC-3 model but led to less aggressive tumors than those with HAS3 overexpression (Figure 3(b)). Stromal hyaluronan has been reported to contribute to high interstitial fluid pressure, compression of tumor blood vessels, and poor drug delivery in pancreatic cancer tumors [4,6]. Figure 2: Localization and amount of hyaluronan in BxPC-3, BxPC-3 HAS2, and BxPC-3 HAS3 tumors and response to PEGPH20. Mice carrying BxPC-3, BxPC-3 HAS2, and BxPC-3 HAS3 peritibial xenograft tumors were treated twice with vehicle or 4,500 g/kg PEGPH20, and tumors were collected 6 h after the last dose. Tumor sections were stained with H&E to visualize morphology of the tumors (a-f) and with bHABP for hyaluronan (g-l). Scale bar in (a-f) and (g-l) is 500 m. Hyaluronan concentration in the tumors after two weekly treatments of vehicle, 37.5 g/kg or 1,000 g/kg of PEGPH20 ( ≥ 3/group) for three weeks was analyzed by hyaluronan assay (m). Statistical differences between the groups shown in figure (m) were tested using two-way ANOVA and Bonferroni's post hoc test ( * < 0.05; * * < 0.01; * * * < 0.001). Figure 3: Pancreatic cancer xenograft tumors overexpressing HAS3 grow faster and respond better to hyaluronan removal than HAS2 or parental tumors. To compare in vivo growth, BxPC-3, BxPC-3 HAS2, and BxPC-3 HAS3 tumor cells were inoculated adjacent to the tibial periosteum in the hind limb of nu/nu mice ( = 7/group), and tumor growth was monitored using ultrasound imaging. Once average tumor size reached 500 mm 3 , mice were treated twice a week with an i.v. injection of vehicle or PEGPH20 (4,500 g/kg) (a-c). Statistical difference was tested using repeated measured two-way ANOVA and Bonferroni's post hoc test.
BxPC
there have been no observed substantial differences in the amount of stromal hyaluronan between HAS2-and HAS3overexpressing tumors. However, consequences of HAS2 and HAS3 overexpression on vasculature are not well understood and the possibility that HAS2 and HAS3 might have distinct effects on vascular function cannot be completely ruled out. Interestingly, BxPC-3 HAS3 tumors showed a strong response to PEGPH20, which caused 86% tumor growth inhibition (Figure 3(c)). In BxPC-3 and BxPC-3 HAS2 tumors, PEGPH20 induced tumor growth inhibition of 34% and 32%, respectively (Figures 3(a) and 3(b)). Removal of hyaluronan by PEGPH20 has been shown to lead to decompression of tumor blood vessels, changed ultrastructure of tumor endothelia, and increased drug delivery to the tumor [4,6,9]. We show that intravenous injection of PEGPH20 led to similar depletion of stromal hyaluronan in HAS2-and HAS3-overexpressing tumor models (Figures 2(k) and 2(l)) suggesting no differences in drug delivery of PEGPH20 to these tumor types. Taken together, the data show that tumors with HAS3 overexpression show more aggressive tumor growth than parental tumors or tumors with HAS2 overexpression. In HAS3-overexpressing tumors, PEGPH20 removes most of the hyaluronan and has a strong inhibitory effect on tumor growth, whereas in HAS2-overexpressing tumors PEGPH20 removes the extracellular but not the intracellular hyaluronan and causes a weaker inhibitory effect on tumor growth. Our results are in line with previous work showing that depletion of hyaluronan or suppression of its synthesis leads to inhibition of tumor growth in multiple tumor models [9, 36-38, 41, 42].
Accumulation of Extracellular Hyaluronan by HAS3 Overexpression Induces Loss of Plasma Membrane E-Cadherin.
Hyaluronan is known to be involved in EMT-associated changes in cancer progression [27][28][29]. E-cadherin andcatenin are essential adhesion molecules in normal epithelium, and loss of plasma membrane E-cadherin and catenins leading to disruption of cell-cell junctions is acknowledged to be an early indication of EMT [53,54]. Because accumulation of hyaluronan by HAS3 in intercellular space has been suggested to interrupt cell-cell interactions [55], we hypothesized that hyaluronan removal would reverse changes that may have occurred in adhesion events. E-cadherin and -catenin were visualized by immunohistochemistry in vehicle-treated and PEGPH20-treated BxPC-3 and BxPC-3 HAS3 tumors. HAS3-induced extracellular hyaluronan accumulation resulted in loss of plasma membrane E-cadherin (Figure 4(a)) and increased accumulation of cytoplasmiccatenin in tumor cells of BxPC-3 tumors (Figure 4(b)). This result supports previous findings that increased hyaluronan production leads to disruption of adherens junctions and leads to EMT [28,43]. Since the majority of pancreatic adenocarcinomas have high hyaluronan accumulation [4,5]; these results are in line with the previously reported loss of membrane E-cadherin in human pancreatic adenocarcinoma compared to pancreatic intraepithelial neoplasia or normal ducts [53]. Additionally, HAS overexpression also induces loss of plasma membrane E-cadherin in breast cancer cells and spongiotic keratinocytes [28,43,55]. Overexpression of HAS2 led to the same changes in E-cadherin andcatenin adhesion proteins in BxPC-3 tumors (Figures 4(a) and 4(b)). Interestingly, in mammary epithelial cells, TGF-induced EMT is mediated by HAS2 but not by extracellular hyaluronan [29]. This suggests that HAS2 and HAS3 may mediate EMT-associated events via different mechanisms. HAS3 seems to initiate EMT-associated events by extracellular hyaluronan, while the effect of HAS2 on EMT may be additionally mediated by intracellular hyaluronan accumulation or by HAS2 function that is not dependent on hyaluronan synthesis [29]. We then tested the effect of hyaluronan removal by PEGPH20 on EMT-related events in HAS3-overexpressing tumors. Removal of extracellular hyaluronan by PEGPH20 induced translocation of E-cadherin and -catenin back to the plasma membrane (Figures 4(a) and 4(b)), and PEGPH20 showed a stronger effect in HAS3-overexpressing tumors than in BxPC-3 tumors. It has been previously shown that PEGPH20 removes most tumor-associated hyaluronan in the KPC mouse model [4], so we investigated translocation of E-cadherin in tumors from KPC mice treated with PEGPH20. Translocation of E-cadherin to plasma membrane was observed 8 h after treatment with PEGPH20, and it was most prominent in peripheral duct-like structures at the edges of the tumor (Figure 4(c)). However, the effect seems to be transient, since 24 to 72 h later the plasma membrane residence of E-cadherin started to disappear (Figure 4(c)). E-cadherin status was also studied in pretreatment and posttreatment biopsies of a lung cancer patient treated with PEGPH20 (Phase 1 study, NCT01170897). In the posttreatment biopsy, a decrease in extracellular hyaluronan (data not shown) and an increase in cell surface E-cadherin were found in comparison to the pretreatment biopsy (Figure 4(d)). Removal of extracellular hyaluronan is associated with translocation of epithelial markers E-cadherin and -catenin back to the plasma membrane, suggesting a reorganization of intercellular junctions and an inhibition of early EMT-associated events, potentially contributing to growth inhibition by PEGPH20 [43].
Hyaluronan Removal Decreases Nuclear Levels of
Hypoxia-Related Proteins. Since hypoxia has been reported to be associated with and able to induce EMT [56,57], we explored the effect of extracellular hyaluronan accumulation and PEGPH20 on hypoxia-related proteins in BxPC-3 tumors. Nuclear protein levels of HIF-1 and Snail were analyzed by Western blotting (Figures 5(a)-5(f)). Overexpression of HAS3 did not cause a major change in nuclear HIF-1 level, probably because BxPC-3 tumors already contain a low amount of hyaluronan (Figures 5(a)-5(c)). However, hyaluronan depletion by PEGPH20 decreases nuclear HIF-1 levels in both BxPC-3 and BxPC-3 HAS3 tumors (Figures 5(a)-5(c)), suggesting a decrease in hypoxic conditions after hyaluronan removal. Nuclear levels of transcription factor Snail, a target of HIF-1 , were also suppressed by hyaluronan depletion in HAS3overexpressing tumors (Figures 5(d)-5(f)). These results suggest that PEGPH20 suppresses HIF-1 -Snail signaling in tumors with high levels of hyaluronan. In agreement with our observations, previous reports have described an association of hyaluronan accumulation and tumor hypoxia [12], and that depletion of hyaluronan synthesis by 4-methylumbelliferone prevents EMT-associated changes [58]. EMT has been reported to be mediated via HIF-1 signaling [56,57], and our data suggest that removal of hyaluronan can be one step in the inhibition of this process.
Extracellular Hyaluronan Inhibits Apoptosis of BxPC-3 Tumors.
To further study the mechanism of how HAS3 overexpression favors tumor growth and leads to stronger response to PEGPH20, the amount of proliferative and apoptotic cells was analyzed in BxPC-3 and BxPC-3 HAS3 tumors. The number of apoptotic cells, assessed by CC3 positivity, was decreased in HAS3-overexpressing tumors (4fold, < 0.05), suggesting that BxPC-3 HAS3 overexpression protects cells from apoptosis ( Figure 6(a)). PEGPH20 treatment showed a trend (2.3-fold) to increase CC3-positive cells in BxPC-3 HAS3 tumors but had no effect in BxPC-3 tumors (Figure 6(a)). The modest effect of PEGPH20 on promoting apoptosis in the BxPC-3 HAS3 tumor model may be due to incomplete removal of hyaluronan from the cancer cell surface (Figure 2(l)). Since HAS3 continuously produces new hyaluronan chains at the plasma membrane, binding of newly synthesized chains to hyaluronan receptors on the cancer cell surface may induce some cell survival signaling and reduce the biological effect of PEGPH20. Alternatively, overexpression of HAS3 may have other hyaluronan-independent effects on cell survival, as reported with HAS2 on TGF--induced EMT in mammary epithelial cells [29]. Pre-PEGPH20 (d) Figure 4: HAS overexpression induces loss of plasma membrane E-cadherin and accumulation of cytoplasmic -catenin, and removal of hyaluronan translocates them to the plasma membrane. Vehicle-and PEGPH20-(2 doses; 4,500 g/kg; = 3/group) treated tumors were stained for E-cadherin (a) and -catenin (b). E-cadherin was also localized in pancreatic tumors from a KPC mouse model after 0, 8, 24, and 72 h treatment with PEGPH20 (c; = 3/group) and in human NSCLC biopsies before and after PEGPH20 therapy (d Intensities of the bands in the cropped blots were quantified using Image-Pro Analyzer 7.0 software and normalized to the intensity of the housekeeping protein ((c) and (f)). Data were plotted as mean ± S.D., and statistical difference between the groups was tested with test ( * < 0.05; * * < 0.01; and * * * < 0.001).
Cell proliferation, assessed by PH3, was not increased by HAS3 overexpression or with PEGPH20 treatment (Figure 6(b)). Overexpression of HAS2 and PEGPH20 treatment in HAS2-overexpressing tumors did not show a major effect on the number of CC3-and PH3-positive cells. These results suggest that, in addition to effects on stromal remodeling and early EMT-associated changes, accumulation of extracellular hyaluronan after HAS3 overexpression may protect pancreatic cancer cells from apoptosis. Hyaluronan removal alone may not be sufficient to induce substantial cell death in this model, but PEGPH20 may be effective in combination with chemotherapy as reported previously in the prostate cancer model [9] and in the transgenic mouse model of pancreatic cancer [6]. Consistent with our work, hyaluronan has also been previously associated with cell survival and protection of apoptosis in colon carcinoma cells [59]. In conditional Has2 transgenic mice that develop mammary tumors, Has2 overexpression decreases apoptosis but also increases the proliferation of neu-initiated tumors [43]. Differential results on proliferation may be explained by the fact that the net effect of HASs and hyaluronan on proliferation depends on several factors including cell type, cell density, and the final concentration of hyaluronan around the cells [ Figure 6: Effect of HAS2 and HAS3 overexpression and hyaluronan removal on apoptosis and proliferation in BxPC-3, BxPC-3 HAS2, and BxPC-3 HAS3 tumors. Vehicle-treated and PEGPH20-treated (2 doses; 4,500 g/kg; = 3/group) BxPC-3, BxPC-3 HAS2, and BxPC-3 HAS3 xenograft tumor sections were stained for CC3 and PH3 to visualize apoptotic and proliferative cells, respectively. Number of positive cells per tumor section was analyzed using Aperio Positive Pixel v9 and Nuclear staining algorithms, respectively, ((a) and (b)). Data were plotted as mean ± S.D., and statistical difference between the groups was tested with two-way ANOVA and Bonferroni's post hoc test ( * * < 0.01).
proliferation and increases apoptosis in combination with gemcitabine compared to gemcitabine alone [4,6]. Although PEGPH20 alone might not have strong effects on proliferation and apoptosis, it causes remodeling of the tumor microenvironment and could sensitize tumors to chemotherapy. The results from this study shed a light on potential application of extracellular hyaluronan and HAS3 protein expression as an indication for stromal hyaluronan depletion by PEGPH20.
Conclusions
In recent years, the importance of the tumor microenvironment in cancer progression has been increasingly recognized, and stromal components including hyaluronan have become attractive targets for cancer therapy. Hyaluronan is a major component in the stroma of many solid tumors [5,8], and its accumulation is known to promote malignant transformation and tumor growth [28,43,61]. The most extensive hyaluronan accumulation is found in pancreatic cancer [4,6], and enzymatic depletion of hyaluronan in combination with chemotherapy is currently being investigated in patients with advanced pancreatic adenocarcinoma. However, there is very little information about mechanisms leading to hyaluronan accumulation in desmoplastic response of pancreatic cancer and the roles of different HASs in these dramatic changes of tumor microenvironment.
In this study, we demonstrate that overexpression of HAS3 in pancreatic cancer cells results in more aggressive tumors with extracellular hyaluronan accumulation, whereas overexpression of HAS2 is associated with moderate-growing tumors and both extracellular and intracellular accumulations of hyaluronan. An increase in extracellular hyaluronan induces loss of plasma membrane E-cadherin and accumulation of cytoplasmic -catenin, indicating disruption of epithelial cell adhesion and an early stage of EMT. PEGPH20 depletes extracellular hyaluronan and leads to strong inhibition of tumor growth in HAS3-overexpressing tumors. Tumor growth inhibition is associated with decreased nuclear levels of hypoxia-related proteins and translocation of Ecadherin and -catenin to the plasma membrane. The fact that removal of hyaluronan also causes E-cadherin translocation to the plasma membrane in the transgenic mouse model and in a hyaluronan-rich human NSCLC biopsy sample further justifies the relevance of the hyaluronan-rich tumor model and highlights the role of extracellular hyaluronan in the early EMT-associated events.
Further knowledge about the mechanisms and consequences of hyaluronan accumulation in pancreatic cancer and effects of hyaluronan removal by PEGPH20 will increase our understanding of the role of hyaluronan in the development of malignancies and will enable the development of new biomarkers and therapies. | 7,417.2 | 2014-07-24T00:00:00.000 | [
"Biology",
"Chemistry",
"Medicine"
] |
Generalized ’t Hooft anomalies on non-spin manifolds
We study the mixed anomaly between the discrete chiral symmetry and general baryon-color-flavor (BCF) backgrounds in SU(Nc) gauge theories with Nf flavors of Dirac fermions in representations ℛc of N -ality nc, formulated on non-spin manifolds. We show how to study these theories on ℂℙ2 by turning on general BCF fluxes consistent with the fermion transition functions. We consider several examples in detail and argue that matching the anomaly on non-spin manifolds places stronger constraints on the infrared physics, compared to the ones on spin manifolds (e.g. 𝕋4). We also show how to consistently formulate various chiral gauge theories on non-spin manifolds.
Introduction
Anomaly matching conditions provide a rare exact constraint on the infrared (IR) behavior of strongly coupled gauge theories [1]. To study the matching of anomalies, one probes the theory with nondynamical (background) gauge fields for its anomaly-free global symmetries. Any violation of the background gauge invariance due to the resulting 't Hooft anomalies should exactly match between the ultraviolet (UV), usually free, and IR descriptions of the theory. In the past, these consistency conditions have been applied to "0-form" symmetries, acting on local fields. For example, anomaly matching was instrumental in the study of models of quark and lepton compositeness in the 1980s (see the review [2]) or of Seiberg duality in the 1990s [3].
Recently, it was realized that the scope of anomaly matching is significantly wider than originally thought [4][5][6]. Turning on general background fields -corresponding to global, spacetime, continuous, discrete, 0-form, or higher-form symmetries, consistent with their faithful action -was argued to lead to new UV-IR anomaly matching conditions. We refer to them as "generalized 't Hooft anomalies." The study of these generalized JHEP04(2020)097 anomalies is a currently active area of research with contributions coming from the highenergy, condensed matter, and mathematical communities. We do not claim to be in command of all points of view and only give a list of references written from a (largely) high-energy physics perspective and pertaining to theories somewhat similar to the ones discussed in this paper [7][8][9][10][11][12][13][14][15][16][17][18][19].
Summary: We continue our study [20] of the generalized 't Hooft anomalies in SU(N c ) gauge theories with N f flavors of Dirac fermions in representations R c of N c -ality n c . These theories have exact global discrete chiral symmetries. Considering these theories on T 4 and turning on the most general 't Hooft flux [21] backgrounds for the global symmetries, consistent with their faithful action, we found a mixed anomaly between the discrete chiral symmetry and the U(N f )/Z Nc baryon-color-flavor, or "BCF", background. We showed that matching this BCF anomaly imposes new constraints on possible scenarios for IR physics, in addition to those imposed by the "traditional" 0-form 't Hooft anomalies. When these theories are coupled to axions, the axion theory is also constrained by anomalies [22].
In this paper, we consider the fate of the BCF anomalies in the same class of theories, but now formulated on non-spin manifolds. We are motivated by the study of QCD(adj) [23], which showed that 't Hooft anomalies in theories with fermions on non-spin backgrounds impose additional constraints. It is known that manifolds that do not permit a spin structure [24,25] can accommodate theories with fermions, but only if appropriate gauge fluxes are turned on [26]. These fluxes can correspond to dynamical or background fields, as in the recent studies [23,[27][28][29]. We focus on the canonical example of non-spin manifold, CP 2 . It has the advantage of allowing for an explicit (and pedestrian 1 ) discussion of the salient points. We describe in detail how to turn on background U(N f )/Z Nc fluxes on CP 2 and derive the resulting BCF anomaly on non-spin backgrounds. The final result of our analysis is that the BCF anomaly matching conditions on CP 2 are equal or stronger than those obtained on T 4 . We use several examples to show that the BCF anomaly on CP 2 further constrains various scenarios for the IR dynamics.
Organization of this paper: in section 2.1, we define the class of theories we study. In section 2.2, inviting the reader to also consult appendices A and B, we explain how to turn on 't Hooft fluxes on CP 2 for the baryon, color, and flavor gauge fields, consistent with the faithful action of the global symmetries in the representation R c .
In section 2.3, we temporarily divert to show how to put chiral gauge theories in non-spin backgrounds; however, we leave their study for the future.
In section 3, we study the mixed 't Hooft anomalies of the discrete chiral symmetry with the BCF fluxes on CP 2 , discuss the conditions imposed on the IR spectrum, and compare with the case of T 4 studied previously.
JHEP04(2020)097
In section 3.1, we present several examples. In section 3.1.1 we discuss QCD(adj). Our intention is to use the present study to investigate the various scenarios for IR behavior, whose consistency has been recently elaborated upon in [16,19,23,28,[30][31][32]. In section 3.1.2, we study an SU(6) gauge theory with a single Dirac flavor in the two-index antisymmetric representation and, in section 3.1.3, its generalization to SU(4k + 2) with a single flavor of two-index symmetric or antisymmetric representations. In both cases, we argue that scenarios for IR physics consistent with the 0-form 't Hooft anomalies are further constrained by studying them on CP 2 . In particular, we focus on exotic phases 2 with massless composite fermions, and argue that the TQFT which must accompany the massless composites has to reproduce a more restrictive anomaly on CP 2 .
Appendices A and B contain many relevant formulae regarding CP 2 and fermions. At the end of appendix B, we find several classes of theories which can be formulated on CP 2 by turning on of only dynamical gauge backgrounds, i.e. by only modifying the gauge bundles summed over. These gauge theories share a feature common with examples discussed in [27,28]: they have only bosonic gauge invariant operators and can be taught as emergent descriptions near quantum critical points of purely bosonic systems.
2 Baryon-Color-Flavor (BCF) 't Hooft fluxes on CP 2 for vector-like theories In this section, we describe in great detail (in conjunction with appendices A and B) how to introduce background fluxes in the baryon-number, color, and flavor directions on CP 2 . We carry out our construction for vector-like theories. However, this setup can be easily adapted for chiral theories (such as the Standard Model), as we show at the end of this section.
Vector-like theories
We consider SU(N c ) gauge theories with N f flavors of Dirac fermions transforming in a representation R c of N-ality n c . 3 The gauge group that faithfully acts on the fermions is Zp , where p = gcd(N c , n c ); thus, the fermions are charged under a Z Nc p subgroup of the center of SU(N c ). After modding out the redundant symmetries, we find that the 0-form global symmetry of the theory is where T R is the Dynkin index of the representation, R and dim(R) is its dimension. Here, we assume that Z 2 dim(R f )T R c is a genuine symmetry of the theory; thus, it cannot be absorbed in the continuous part of G global (this can be checked on a case by case basis).
Z 2 above denotes fermion number and Z Nc p is in the center of SU(N c ). 2 The examples of sections 3.1.2 and 3.1.3 were also studied in refs. [16,19], which argued that an IR gapped phase with unbroken global symmetries cannot occur. 3 The N -ality of a representation R of SU(N ) is the number of boxes of the Young tableau of R modulo N .
JHEP04(2020)097
In addition, the theory has a 1-form center symmetry Z (1) p that acts on non-contractible Wilson loops, provided that gcd(N c , n c ) = p > 1. Notice that the ultraviolet fermions are taken to transform in the defining representation of the flavor group SU(N f ), and hence, we should use n f = 1. Nevertheless, we keep the N -ality of the fermions under SU(N f ) an arbitrary integer for the sake of generality.
Generalized 't Hooft fluxes on CP 2
Next, we turn on 't Hooft fluxes (twists) in the baryon-number, color, and flavor directions, which are compatible with CP 2 and at the same time lead to consistent transition functions. See appendix A for a collection of relevant formulae for CP 2 .
We first address the compatibility condition. As we point out in appendix B, background gauge fields (both abelian and nonabelian) on CP 2 need to be (anti)self-dual, otherwise they will have a nonvanishing energy-momentum, and hence, backreact on the manifold. In order to achieve the (anti)self-duality, we take the gauge fields to be proportional to the Kähler 2-form K of CP 2 , eqs. (A.2), (A.9): where T a stands for the color, flavor, or baryon-number generators, and C a are constants that will be determined momentarily. Second, we come to the problem of defining a consistent gauge theory with matter fields on a manifold M. Let G be a direct product of semi-simple Lie groups and Ψ a fermionic matter field transforming under specific representations of G. A quantum field theory of Ψ is described in terms of a collection of covers {U i } of M (in {U i }, Ψ is denoted Ψ i ), along with transition functions g ij ∈ G, defined on the overlap U i ∩ U j and relating Ψ i to Ψ j such that g B,R c ,R f ij are the transition functions of the baryon, color, and flavor groups, while g L ij is the transition function associated to the spacetime Lorentz group. The matter field in general will transform under representation R c of the color group and representation R f of the flavor group. However, only the N-ality of the representations will matter in what follows. Consistency requires that the transition functions satisfy the cocycle conditions on the triple overlap U i ∩ U j ∩ U k : (2.5) The above cocycle condition does not necessary imply that the strong conditions g a ij g a jk g a ki = 1 should be met for each of the transition functions in (2.4), where a refers to the baryonnumber, flavor, color, or Lorentz groups.
JHEP04(2020)097
Let g c ij and g f ij be the transition functions in the defining representations of the color and flavor groups, respectively. One, then, may relax the condition (2.5) to the following set of conditions on the triple overlap. In this expression n c (n f ) is the color (flavor) N-ality, n ijk ) are integers modulo N c (N f ), while the factor e −iπ that appears in the last cocycle condition cancels the minus sign arising from parallel transporting the spinor fields around appropriate closed paths in CP 2 , see appendix B and [24][25][26].
Thus, the U(1) B bundle provides the flux that is necessary to render the fermions well-defined on the non-spin manifold. As a side remark, we note that this is by no means is the unique choice to put spinors on CP 2 : one could also use the fluxes in the color (or flavor) directions to perform the same job. Examples of using only gauge backgrounds (i.e. modifying only the gauge bundles being summed over in the path integral) are known in the literature [27][28][29] and we give a few more at the end of appendix B; a common feature of gauge theories where this can be done is their possible interpretation as emergent descriptions near quantum critical points in theories of only bosons [28].
The consistency conditions (2.5) or (2.6) guarantee that the Dirac index will always be an integer. Since the Dirac index counts the number of the fermion zero modes in a given gauge/gravity background, the integrality of the index is a necessary condition for the consistency of a given theory in the background of baryon-color-flavor 't Hooft fluxes in CP 2 . The integrality of the index will be manifest in all the examples we discuss in this paper.
Having all the ingredients necessary to turn on compatible fluxes on non-spin manifolds, we now choose the color and flavor fluxes in the Cartan directions of the respective groups. Using (2.2) we write: Here H c/f are the fundamental representation Cartan generators of SU(N c/f ), obeying tr H a H b = δ ab , and ν are the weights of the corresponding defining representation, ν a · ν b = δ ab − 1 N (where N stands for N c or N f ). The fluxes (2.7), with integer m c and m f , are compatible with the cocycle conditions (2.6), see (B.7), and the Dirac index is integer in their background. The topological charges are given by (2.8)
JHEP04(2020)097
Then, substituting (2.7) into (2.8) and using CP 2 K∧K 8π 2 = 1 2 , we find: (2.9) Adding to this list the gravitational topological charge of CP 2 we finally obtain the Dirac index: which is an integer for all the examples we consider below. Before moving to examples, it is instructive to compare and contrast the above results with the BCF fluxes on the four-torus T 4 that we considered before [20]. CP 2 has one two-cycle CP 1 , and hence, we were able to turn on fluxes along this single cycle (the color and flavor fluxes are labeled by m c,f in (2.9)). In contrast, T 4 has six two-cycles (it suffices to turn on fluxes in the 1-2 or 3-4 planes, respectively, hence we have two integers m 12 and m 34 that label the fluxes). 4 Since there are more ways to turn on fluxes on T 4 compared to CP 2 , this may imply that putting the theory on T 4 can give us more constraining conditions on the IR spectrum. We will see in the next section that this is not true: although CP 2 has only one cycle, it always imposes conditions that are either stronger or at least as strong as the conditions we obtain by putting the theory on T 4 .
Comment on chiral theories and the Standard Model with ν R
Here, we slightly divert from our main presentation to note, for the sake of completeness, that by turning on global anomaly-free U(1) fluxes, chiral gauge theories can also be formulated on non-spin manifolds.
As an example, consider an SU(5) gauge theory with 5 * and 10 left-handed Weyl fermions: 5 λ in the anti-fundamental and ψ in the two-index anti-symmetric representations. This theory has an anomaly-free global U(1) that acts on the fermions as ψ → e i2πα ψ and λ → e −i2π(3α) λ. Then, one can easily check that the flux For the sake of completeness, we give Q c,f,B on T 4 [20]: 5 For a discussion of its conjectured IR dynamics, see [33].
JHEP04(2020)097
is consistent with the cocycle condition (2.5) for both ψ and λ. This can be seen by considering the consistency condition (B.7) on CP 2 for fermions in these two representations, taking into account their different U(1) charges and SU (5) representations. One can also check the consistency by calculating the Dirac indices for both ψ and λ: using which are integers. Notice that the total number of upper minus lower SU(5) indices of the zero modes is a multiple of 5 (and the total number of zero modes is even for odd m c ), so that a gauge invariant "'t Hooft vertex" using the zero modes can be written. Let us also mention that the Standard Model can be formulated on a non-spin manifold, provided that right-handed neutrinos are added. 6 In this case one can turn on a fractional flux in the global U(1) B−L in order to cancel the e iπ ambiguity that results from putting the quarks and leptons on CP 2 . By computing the indices, as above, it is easy to see that gauge and Lorentz invariant terms can be constructed out of the zero modes. The U(1) B−L can further be promoted to a gauge symmetry, broken by a charge-2 Higgs. For related discussions see [34,35] as well as the remarks on the Spin(10) grand unified theory in [27].
In the two examples mentioned in this section, formulating the theory on CP 2 does not lead to new 't Hooft anomalies of the type discussed here, as these theories only have continuous chiral symmetries whose anomalies are matched irrespective of the integrality of the topological charges. 7 Further study of chiral theories is left for the future.
Anomalies in the background of BCF fluxes on CP 2
We now return back to our main theme and examine the fate of the axial symmetries of vector-like theories as we put them in the background of BCF fluxes. In order to reduce notational clutter, we assume that the theory enjoys a genuine discrete Z qg axial global symmetry, which becomes anomalous in the background of BCF fluxes. We denote by D c,f,B,G the anomaly coefficients that accompany the color, flavor, baryon-number, and gravitational topological charges. The UV values of these coefficients, D c UV , D f UV , D B UV , D G UV , are equal to twice the pre-factors that multiply Q c,f,B,G , respectively, in the Dirac index (2.11): these are group-theoretical values and they do not depend on whether we turn on integer or fractional fluxes or whether we put the theory on spin or non-spin manifolds. To summarize, upon performing a global Z qg axial transformation on the fermions, the UV partition function acquires the phase
JHEP04(2020)097
where J D is the Dirac index (2.11). This phase is a manifestation of a 't Hooft anomaly between the 0-form Z qg symmetry and a general BCF background. Now, we assume that the 0-form ("traditional") 't Hooft anomalies, which correspond to integer values of Q c,f,B,G , can be matched by a set of fermion composites deep in the IR on a spin manifold. Upon performing a Z qg transformation in the IR, the partition function transforms as IR are the anomaly coefficients computed using the IR spectrum of composites. Since we are matching a discrete anomaly, the coefficients D c,f,B,G need not be exactly matched between the UV and IR. Instead, D c,f,B are matched modulo q g : for integers c,f,B . The coefficients D G are matched only modulo q g /2: there is an integer G such that This is true since the gravitational topological charge of a spin manifold is an even number. 8 Now, we would like to check whether the same set of IR composite fermions can also match the BCF anomaly as we turn on fractional fluxes on a non-spin manifold. Before doing that, we first note that if a non-spin manifold admits an elementary spinor Ψ, then by virtue of (2.5) and (2.6) a composite of these spinors can always be defined. Also, one can easily see the spin-charge relation of the composites: a fermion (boson), made of an odd (even) number of Ψ, carries an odd (even) charge under U(1) B .
Thus, using (3.1), (3.2), (3.3), and (3.4), we obtain the matching condition: 5) or in other words for all fractional charges Q c,f,B,G given in (2.9) and (2.10). The condition (3.6) can be translated into the following set of conditions on c,f,B,G , which can be obtained by turning on and off the fluxes in the various directions: JHEP04(2020)097 The importance of the above conditions is as follows: if no integers c,f,B,G that satisfy (3.7) can be found, then composite fermions cannot solely match the BCF anomaly. Thus, either the composites do not form in the IR, or they are accompanied by a partial breaking of Z qg , due to some higher dimensional fermion condensate that leaves the continuous flavor symmetries intact, and/or an IR TQFT. For example, setting 9 c = 1, it is straightforward to check that no integers c,f,B,G exist that satisfy (3.7) if N f ≥ 2 and one of the following two conditions are met: We call the inequalities (3.8) the "no-go condition" on the composites (we stress that they apply provided that N f ≥ 2 and recall that n f = 1). In the special case N f = 1, one needs to replace (3.8) by other sets of conditions that we do not quote here; they can be checked on a case by case basis using the first and last conditions in (3.7). Now a few comments are in order: 1. The first three conditions (3.7) are functions of c,f,B , while the fourth condition is a function of two variables only, G and B . Therefore, if c,f,B can be found to satisfy conditions (i) to (iii), then it is always trivial to find G ∈ Z that satisfies condition (iv).
2. Given 1 above, one expects that turning on gravitational background does not alter the conditions that are needed to find a set of composites in the IR matching all anomalies. At this point, it is instructive to compare the set of conditions (i) to (iii) in (3.7) with those that result from turning on BCF fluxes on T 4 , as was considered before 10 [20]. Although the two sets of conditions appear to be unrelated, they give the exact same no-go condition (3.8).
3. However, as we shall show in the examples in section 3.1, putting the theory on a nonspin manifold can give rise to a more restrictive phase in the partition function, and hence, imposes more constraints on the IR TQFT that accompanies the composites.
4. As in [13,14], we can also turn on a SU(N f ) invariant mass term that breaks SU(N f ) L × SU(N f ) R down to the diagonal vector subgroup. We will take the mass to be smaller than the strong-coupling scale of the theory and also introduce a θ parameter. Now, we examine how the partition function transforms under a shift of θ by multiples of 2π, i.e., we ask whether the theory suffers a θ- 9 Notice that gauge invariant composites have c = 1 in the vectorlike theories we consider: using D c IR = 0, since the composites are color singlets, we have c = 10 For the sake of completeness, we recall that the conditions (3.7) are replaced on T 4 by: where Q is the smallest integer that makes Q NcN f ncn f an integer.
JHEP04(2020)097
periodicity anomaly. To this end, we introduce, in addition to the θ term, general background field dependent counter terms. The topological part of the Lagrangian becomes where the coefficients of the counterterms, Θ f , Θ B , Θ G 2 , are general real numbers. They can, however, depend on θ and we demand that they shift by 2πZ under 2πr shifts of θ, so that they do not destroy the θ periodicity in backgrounds with integer Q c , Q f , Q B and even Q G . In other words, we have that under θ → θ + 2πr (where r ∈ Z), ∆L top. = 2πrQ c + 2πsQ f + 2πtQ B + 2πu 2 Q G , where s, t, u ∈ Z. Finally, we ask whether the transformation of the counter terms can compensate for the phase of the partition function under shifts of θ in the BCF background fluxes on CP 2 , i.e., we demand that under θ → θ + 2πr, L top. → L top. + ∆L top. , with ∆L top. = 2πZ. Carrying out this exercise, we find that the requirement ∆L top. = 2πZ (the absence of a θ-periodicity anomaly) is met for general BCF fluxes if and only if conditions (3.7) are satisfied after replacing c,f,B,G → r, s, t, u. Therefore, the conditions that exclude massless composites are the exact same conditions that give rise to θ-periodicity anomaly: they are given, for N f ≥ 2, by the same conditions (3.8) found earlier in [14]. The anomaly implies that as one varies θ between 0 and 2π, the IR theory should either have domain walls or an IR TQFT that saturates the anomaly.
Examples
In this section, we consider two examples of vector-like theories and check whether putting them on non-spin manifolds and turning on the most general background fluxes imposes further restrictions on various scenarios for their IR dynamics. Many aspects of what we find have been previously recognized in [16,19,23,28,31], especially in the framework of QCD(adj), our first example below. Nonetheless, we include it in order to show how it fits in the present more general and explicit framework.
QCD(adj)
As a first example, we consider QCD(adj), an SU(N c ) Yang-Mills theory endowed with N f massless Dirac flavors in the adjoint representation. The Dirac fermion is equivalent to two undotted Weyl massless fermions ψ,ψ, both transforming in the adjoint representation. The global symmetry of this theory that we shall utilize is where we included the 1-form Z Nc center symmetry that acts on Polyakov loops. The massless Dirac theory above is equivalent to the theory of 2N f massless Weyl adjoints 11 λ i , which has a larger global SU(2N f ) chiral symmetry, containing the SU(N f ) L × SU(N f ) R × U(1) B shown above. While studying the BCF anomaly on non-spin manifolds, however, we shall make use of the backgrounds (2.7) for the symmetry (3.10).
JHEP04(2020)097
This class of theories has been extensively studied in the continuum [36][37][38] and on the lattice [39][40][41][42][43][44][45][46][47], for general theoretical interest, but also because it includes theories of interest for model building beyond the Standard Model. The usual lore is that these theories will either flow to an IR conformal field theory or break their global symmetries, including the discrete chiral symmetry Z 4NcN f . However, more exotic scenarios have recently been discussed in [23,28,[30][31][32]. 12 In [30], we conjectured that the theory with N c = 2 and a single N f = 1 Dirac fermion will form a massless composite, schematically given by (λ) 3 , a doublet under the enhanced SU(2N f ) = SU(2) flavor symmetry, accompanied by the breaking Z 8 → Z 4 , due to an SU(2) invariant four-fermion condensate. This IR scenario has to be supplemented by a TQFT that matches a mixed anomaly between the 0-form discrete chiral and 1-form center symmetries on non-spin backgrounds [23], further studied in [16,19,31].
Another exotic scenario, applicable to all N c , N f , is the proposal of [32], where the IR phase of the theory contains (N 2 c − 1) × 2N f massless fermions (essentially providing a gauge invariant copy of the UV fermion spectrum) which can be thought of as created by operators of the form: This class of composites match all the 0-form anomalies. In addition, there is a TQFT that matches the discrete chiral-center anomaly. Clearly this is also required by the "no-go condition" (ii.) from (3.8) as gcd(N c , n c = N c ) = N c > 1. It will be instructive to check whether putting QCD(adj) on CP 2 can impose further constraints on the above IR scenarios. To this end, we first examine the transformation of the partition function in the UV under the Z 4NcN f discrete chiral symmetry. The index (2.11) is now given by where Q c,f,B are given in (2.9) after setting n c = 0 and n f = 1. This index is always an integer for all m c and m f , as can be easily checked. Then, under a Z 4NcN f transformation the partition function acquires the phase 13 (3.13) Thus, Z UV transforms by a Z 2NcN f phase for general values of the background BCF fluxes. 12 We stress that while comparing the results in these references to the ones given here, one should keep in mind that N f in this paper denotes the number of Dirac, not Weyl flavors. Thus, the discussion here applies to even numbers of Weyl flavors. 13 The
JHEP04(2020)097
Now, we first examine the IR scenario [30] for N c = 2 and a single Dirac fermion N f = 1. The IR composite Dirac fermion has unit charge under U(1) B and charge 3 under the Z 8 discrete chiral symmetry. 14 The Dirac index in the IR is obtained by setting Q c = Q f = 0 in (2.11), which gives J D = 1 2 m B (m B + 1). Thus, we find (Z IR | Z 8 ) CP 2 → e i 2π×3 8 m B (m B +1) , and hence, from (3.13) we find the ratio (3.14) We note that on a non-spin manifold, this is a Z 4 phase, while it is a Z 2 phase on a spin manifold. On T 4 , the computation follows the same steps, taking SU(N c ) 't Hooft fluxes (see footnote 4), with Q c = mm 2 , Q f = 0, and taking Q B = m b (m, m , m b ∈ Z), we have The fact that the UV and IR partition functions with massless composite fermions transform differently under Z 8 means that the massless composites cannot be all there is in the IR. In particular, as (3.14), (3.15) show, there is a mixed anomaly between the discrete chiral and center symmetries (the 't Hooft fluxes m, m , m c ) which cannot be matched by the IR fermions. This was already recognized in [30], where it was proposed that there is spontaneous breaking of the chiral symmetry, Z 8 → Z 4 , by a four-fermion condensate detλ i λ j , 15 and that domain walls, via a TQFT coupled to the background fields and describing the two Z 8 → Z 4 vacua, match the mixed discrete-chiral center anomaly.
Consider, however, a chiral transformation in the unbroken Z 4 . A look at (3.14) and (3.15) shows that an unbroken-Z 4 transformation (a Z 8 transformation applied twice) generates no phase on T 4 , but does give rise to a Z 2 phase on CP 2 . The DW theory, however, is blind 16 to the unbroken Z 4 group and only matches the anomalies for the broken symmetries, generated by odd powers of e i 2π 8 . Thus to match the anomaly of the unbroken Z 4 group [23], the scenario proposed in [30] has to be modified. The need for such modification is only visible -as (3.14), (3.15) show -when the theory is placed in consistent non-spin backgrounds. It was argued that one would need to supplement the IR with an extra emergent TQFT and an explicit construction of this TQFT as an emergent Z 2 gauge theory matching the anomaly of the unbroken Z 4 on non-spin manifolds (giving rise to the Z 2 phase) was given [16,19,23,31].
Next, we examine the scenario of [32]. The massless composites (3.11) have unit U(1) B and Z 4NcN f charges, hence the index in the IR is Recall that U(1)B is really the third component of the enhanced SU(2) flavor symmetry of the two-Weyl theory and that the massless fermion is an SU(2) doublet. 15 The determinant is taken in the 2-dimensional space of Weyl flavors. 16 A theory with two vacua and domain walls between should be described, in the IR, by a Z2 TQFT with Euclidean Lagrangian i 2 2π φ (0) (da (3) + . . .), see [48] for a recent discussion. Here, φ (0) and a (3) are compact 0-form and 3-form gauge fields (dφ (0) and da (3) have periods 2πZ when integrated over appropriate cycles) and the dots denote background field couplings. Under the action of the broken Z8 generators, φ (0) shifts by π, but is inert under the unbroken Z4 generators.
JHEP04(2020)097
. Thus, we find, proceeding as above and taking m B = 0 with no loss of generality, that (3.16) Again, we find that this phase is half the phase one obtains from the mixed anomaly between the discrete chiral and center symmetries on spin manifolds. Ref. [32] proposed that a higher-dimensional condensate breaks Z 4N f Nc → Z 4N f , but as in the above N f = 1, N c = 2 example, this is not sufficient to match the anomaly of the unbroken Z 4N f symmetry on CP 2 (it is clear, by applying (3.16) N c times, that this is a Z 2 -valued anomaly). Thus, we conclude that an additional emergent TQFT, argued to also be an emergent Z 2 gauge theory [16], has to exist in the IR to match the anomaly of the unbroken Z 4N f symmetry on CP 2 . To summarize, in both scenarios [30,32], the IR theory consists of three decoupled sectors: massless composite fermions, a Z Nc TQFT due to the spontaneous chiral symmetry breaking (with N c vacua and domain walls), and an emergent topological Z 2 gauge theory. Here, we shall not speculate on the likelihood of this scenario and simply refer the reader to [47] for the up-to-date status of the lattice studies.
SU(6) with a Dirac fermion in the two-index anti-symmetric representation
As our second study of the new anomaly, we consider SU(N c = 6) vector-like theory with a single Dirac spinor with R taken to be the two-index antisymmetric representation (N -ality n c = 2). We denote its two undotted Weyl-fermion components as ψ,ψ, transforming in R and R, respectively. Recalling (2.1), the global symmetry of this theory is where we modded by the Z 3 , the discrete group that acts faithfully on fermions, and the Z 2 subgroup of the Lorentz group, while the 1-form center symmetry Z 2 should be understood as acting on topologically nontrivial Wilson loops.
A possible phase of the theory is one where a bilinear fermion condensate ψ ψ forms. This condensate preserves the vectorlike U(1) B but breaks Z 8 down to Z 2 . The theory is gapped and in the deep IR the anomaly is matched by a Z 4 TQFT describing the four ground states of the theory. This number of vacua is consistent with the constraints on gapped phases of such theories recently derived in [19]. This is also the breaking pattern expected when the theory is coupled to an axion [22].
In what follows, we study the viability of a more exotic scenario for the IR physics, namely the possibility to match the anomalies via a single massless composite Dirac fermion of the form 17 O ∼ (ψ) 3 ,Õ ∼ (ψ) 3 , which has charge 3 under both U(1) B and Z 8 . It is a JHEP04(2020)097 simple exercise to check that all the 0-form anomalies are matched by the O composite. Using (i) in (3.7), ignoring (ii), (iii), and (iv) since we are dealing with a single Dirac fermion, one can easily show that there is no integer c that satisfies (3.7). Hence, additional IR data to a massless fermions spectrum is needed.
Next, we check whether O matches the BCF 't Hooft anomaly on CP 2 . We will also compare the result with that of the BCF anomaly on a spin manifold. To this end, let us examine the change of the partition function under a global Z 8 chiral transformation in the background of the BC fluxes on CP 2 . From (3.1), using (2.9), (2.10) with m f = 0, and recalling that the anomaly is twice the Dirac index (2.11), J D = 5 2 m c (m c + 1), we find in the UV: In the IR, the Dirac index 18 for the composite O is J D = 1 + m c 2 (m c + 3), thus we find Therefore, the ratio between the Z 8 chiral transformations of the partition function in the UV and IR theories in the same BC background (2.7) is (3.20) implies that there is a π 2 phase mismatch between the UV and IR 't Hooft anomalies on CP 2 . This phase is obtained even if we completely turn off the SU(N c ) 't Hooft fluxes by setting m c = 0, hence the anomaly is solely due to putting the theory on a non-spin manifold, i.e., there is a mixed anomaly between the 0-form Z 8 discrete chiral symmetry and the U(1) B − gravity background required to put the theory on CP 2 . The mismatch (3.20) indicates that a single composite in the IR cannot by itself match this mixed anomaly. In addition to the composite, the theory has to be supplemented by partial breaking of Z 8 and/or an IR TQFT.
It is also important to compare the situation with the BCF anomaly on a spin manifold. One can repeat the above exercise on T 4 to find, in the background of BC fluxes (recall (3.15)) instead of (3.20) on CP 2 . The π phase mismatch can also be obtained as the result of a mixed anomaly between Z 8 and the 1-form Z 2 center symmetry [15]. In both CP 2 and T 4 cases (3.20), (3.21) we find that one needs to supplement the theory with an emergent IR 18 The IR composite only couples to the gravitational and baryon number backgrounds (2.7), hence Q c = Q f = 0. In addition, since the baryon charge of O is 3, the formula for the index (2.11) has to be modified by multiplying QB by 3 2 and taking dim R f dimRc = 1.
JHEP04(2020)097
TQFT in order to match the phases in (3.20), (3.21). The T 4 UV/IR phase mismatch (3.21), for the broken Z 8 generators, could be due to domain walls from the spontaneous breaking Z 8 → Z 4 by a (ψψ) 2 -condensate (recall also that Z 2 ∈ Z 4 is fermion number). This, however would not match the nontrivial Z 2 -valued anomaly in the unbroken-Z 4 transformation of the partition function on a non-spin manifold (3.20). Thus, we conclude that, once again, putting the theory on a non-spin manifold gives more constraints on the IR physics, by requiring an extra TQFT to match the anomaly of the unbroken Z 4 symmetry on CP 2 (the phase to be matched is, again, a Z 2 phase). The results of [16] imply that such a Z 4 and Z (1) 2 -center symmetric TQFT exists: the anomaly inflow action is nontrivial if one assumes Z 8 and Z (1) 2 unbroken symmetries (precluding the existence of a symmetric gapped phase [16]), but trivializes for the case of unbroken Z 4 and Z (1) 2 . See also the discussion of the more general case near eq. (3.30) in the following section.
SU(4k + 2) with fermions in the two-index (anti)-symmetric representation
Here, we generalize the SU(6)-theory analysis to SU(4k + 2) with a single Dirac fermion in the two-index symmetric (S) or anti-symmetric (AS) representation. 19 The conclusion, with regards to an IR phase with composite massless fermions, is essentially the same as in the SU(6) theory of section 3.1.2. Below, we give the details for completeness. We turn on color and baryon-number fluxes and use (2.11) to calculate the Dirac index and dim S,AS = 1 2 (4k + 2)(4k + 2 ± 1)), we find from which one can readily find that the partition function receives the following phases upon performing a discrete chiral symmetry transformation Z 2(4k+2±2) : (3.23) As above, we focus on the anomaly constraints on an exotic scenario for the IR physics. We assume that the 0-form anomalies are saturated in the IR by a set of massless composites. This can be achieved in the AS case by a single composite O ∼ (ψ) 2k+1 and single anti-compositeÕ ∼ (ψ) 2k+1 , while in the S case we need 20 3 + 4k composites O ∼ (ψ) 2k+1 and anti-compositesÕ ∼ (ψ) 2k+1 , possibly with appropriate insertions of derivatives and/or gluonic fields. Since all the IR composites are color singlets, only the baryon flux will contribute to the Dirac index: Notice that SU(4k) with fermions in the two-index S or AS does not admit color-singlet fermions in the IR. Hence, we exclude this case from our discussion. 20 To match the 0-form anomalies involving Z 2(4k+4) .
JHEP04(2020)097
for each of the symmetric and anti-symmetric Dirac composites, and we used the fact that the U(1) B charges of the composites is 2k + 1. Using this information, we obtain the following phases in the partition function upon performing a discrete chiral transformation: where we used the fact that we need 3 + 4k composites in the symmetric case. Finally, after some algebra we obtain the ratios: This phase mismatch between the UV and IR implies that turning on BC fluxes on CP 2 rules out the set of composites as the sole spectrum in the IR. For the S case, we obtain a Z 8 -valued anomaly on CP 2 for odd values of k, and a Z 4 -valued one for even values of k, while for the AS case we obtain a Z 4 phase for odd-k and a Z 8 phase for even-k. Before we continue with studying the implications of (3.26), let us contrast the situation on CP 2 with that on T 4 . In the latter case we can turn on general color and baryon fluxes in the 1-2 and 3-4 planes: Then, the Dirac index in the UV is given by for the S and AS cases, respectively. In the IR the composites are color singlets, they have charge 2k + 1 under U(1) B , and therefore, the index is Repeating the above steps, we obtain the following phases upon performing a Z 2(4k+2±2) discrete chiral transformations in the BC fluxes: Here, the phase we obtain is the exact same Z 2 phase one encounters from the discretechiral/1-form Z 2 -center anomaly. The symmetry breaking scenario consistent with the above massless composite spectrum is as follows. For the case of symmetric tensor (S) representation, we assume a nonvanishing (ψψ) 2k+2 condensate (with all other condensates zero) breaking the chiral symmetry Z 2(4k+4) → Z 4k+4 . The anomaly inflow 5d action has the form with A (1) a 1-form gauge field for Z 2(4k+4) and B (2) a 2-form gauge field for the Z 2 center symmetry. 21 The chiral variation of (3.30) reproduces the Z 2 -valued mixed anomaly (3.29).
As in the composite-fermion QCD(adj) scenarios discussed in the previous section, there are three decoupled sectors in the IR: massless composite fermions, domain walls and multiple vacua due to the symmetry breaking, and a TQFT to match the anomaly of the unbroken chiral symmetry. As before, we shall not dwell on the likelihood of these exotic IR phases appearing in the nonabelian gauge theories under consideration.
Comments on future studies
In this section, we studied a few examples illustrating the utility of the mixed chiral/BCF anomaly on non-spin backgrounds. Our main focus was on exotic phases where massless composite fermions saturate the "traditional" 0-form 't Hooft anomalies. The main lesson we take is that the new generalized 't Hooft anomalies on both spin and non-spin manifolds yield further constraints.
It is clear that generalized 't Hooft anomalies will also have implications on the physics of "vanilla" phases where fermion bilinears obtain expectation values maximally breaking the chiral symmetries. As the analysis [23] of SU(2) QCD(adj) with a single Dirac flavor showed, the structure of the IR theory, its domain walls, and confining strings can reflect the anomalies in an intricate way. It would be interesting to understand the implications of anomaly matching for similar phases in more general theories, including chiral theories or the ones studied in [14]. Constructing the IR TQFTs that must accompany the various exotic phases mentioned here is also of interest (we also note that their UV origin remains mysterious). Anomalies should also have implications for the finite temperature phase structure, as in [5,7,[50][51][52].
JHEP04(2020)097 space, C 3 , passing through the origin. CP 2 can be described by the complex coordinates Ξ = (ξ 1 , ξ 2 , ξ 3 ) = (0, 0, 0) (here ξ 1,2,3 ∈ C) modulo the identification Ξ ≡ λΞ for any complex number λ = 0. One can cover CP 2 with three patches U i (i = 1, 2, 3, where U i covers ξ i = 0) such that the transition functions on the overlap U i ∩ U j are holomorphic. CP 2 is a Kähler manifold, with a Kähler 2-form given by where ∂ is defined as ∂f ≡ α ∂f ∂z α dz α (and similarly for∂) and K is the Kähler potential: where z 1,2 cover one of the patches U i . Taking The Kähler 2-form (A.1) is closed, dK = 0, and co-closed, δK = 0, and is associated to the metric tensor g αβ : Therefore, we immediately find Now, one can set z 1 = x + iy and z 2 = z + it to find that the metric on CP 2 can be written in the Fubini-Study form: where r 2 = x 2 + y 2 + z 2 + t 2 and σ x,y,z are the left-invariant 1-forms on the manifold of the group SU(2) = S 3 , obeying dσ x = 2σ y ∧ σ z (plus cyclic). The latter are given in terms of the x, y, z, t coordinates by: For our explicit calculations of appendix B, we introduce polar coordinates r, θ, φ, ψ
JHEP04(2020)097
where 0 ≤ r < ∞, 0 ≤ θ < π, 0 ≤ φ < 2π, 0 ≤ ψ < 4π. The 1-forms σ x,y,z are now σ x = − cos ψ sin θdφ + sin ψdθ 2 , σ y = − cos ψdθ + sin θ sin ψdφ 2 , One also can write the metric in terms of the vierbein 1-forms as ds 2 = e a e b η ab , where η ab is the flat Euclidean metric. Then, by inspecting (A.5) one immediately finds: In terms of the vierbein (A.8), the Kähler 2-form (A.1) is from which one can see that K is anti-self-dual K = −K ( 1230 = 1). We use the Kähler form K in polar coordinates in the calculations of fluxes and topological charges in appendix B. In particular, note that CP 2 K ∧ K = 8π 2 2 . The Fubini-Study metric (A.5), explicitly written using polar coordinates (A.6), is To study the points at r → ∞, one can introduce a new coordinate u = 1/r and observe that at u = 0 there is a S 2 of area π (the metric is well behaved at u = 0; the singularity apparent in the first two terms of (A.10) at 1/r = u → 0 is only a coordinate one, see [53]). The Ricci tensor of the Fubini-Study metric (A.10) is R ab = 6δ ab , so it is a solution of the Einstein's equation R ab − 1 2 δ ab R = −Λδ ab with the energy-momentum tensor being that of a cosmological constant Λ = +6. This holds for the form of K given in (A.2), with dimensionless coordinates z α . If, instead of (A.2), we take K = 6 Λ log 1 + Λ 6 2 α=1 z αzα , we shall find R ab = Λδ ab , for arbitrary Λ.
Thus the compact manifold CP 2 has a size scaling as Λ − 1 2 . It can be taken to have any size, in particular it can be larger than Λ −1 QCD , the inverse strong-coupling scale of the gauge theory. Taking Λ → 0 approaches an infinite volume limit. As in the T 4 case, this is the limit of interest from the point of view of constraining infinite volume nonperturbative dynamics via anomaly matching.
B Gauge fields and fermions on CP 2
In order to turn on a U(1) gauge field (which can be embedded into SU(N c ), see below) of two-form strength F on CP 2 , one needs to ensure that the field will not backreact on the manifold, and hence, destroy CP 2 . This can be achieved by demanding that F is an (anti)self-dual 2-form field, since in this case the field has a vanishing energy-momentum JHEP04(2020)097 tensor. 23 Therefore, the simplest way to find a consistent solution of the Einstein-Maxwell equations on CP 2 is by writing F in terms of the Kähler 2-form as F = CK for some constant C ∈ R. Below, we will see that defining spinors on CP 2 demands that C be quantized in half-integer units.
It is well known that fermions are ill-defined on CP 2 ; we say that CP 2 is a non-spin manifold. Briefly, 24 to see that spinor fields Ψ are not globally well defined, one considers a family of closed contours γ(s), with s ∈ [0, 1] parameterizing the different contours. This family of contours wraps the S 2 in CP 2 , such that γ(0) and γ(1) are the trivial contours. Then one considers the parallel transport of tetrads, and the corresponding uplift to spinors, along each contour belonging to this family. The SO(4) holonomies corresponding to parallel transporting tetrads along the family γ(s), considered as a function of s, form a closed non-contractible loop in SO(4) (recall that γ(0) and γ(1) are trivial contours). Correspondingly, the uplift of the SO(4) holonomies (for the s = 0 and s = 1 curves) to its double cover Spin(4), responsible to transporting the spinors, differ by minus sign. Schematically, one obtains Ψ(s = 1) = e iπ Ψ(s = 0), showing the global inconsistency (recalling that γ(0) and γ(1) are both the trivial contour) in defining spinors. 25 One can also see the problem of formulating spinors on CP 2 by computing the index of a Dirac spinor on CP 2 : The fractional value 1/8 one obtains for an integer-valued quantity (the Dirac index) is another manifestation of the failure of CP 2 to accommodate spinor fields. One can define spinor fields on CP 2 if one turns on a U(1) gauge bundle that eats up the iπ phase in (B.1), which renders the spinors well-defined [26]. In this case one finds that the e iπ factor in (B.1) gets modified to: where e is the U(1) charge of the fermions and we used Gauss' law. Then the minus sign that arises from parallel transporting the spinors can be cancelled by the minus sign arising from propagating the U(1) charges. Thus, one can consistently define charged spinors in this U(1) background. This generalized spin structure is called a spin c structure.
To obtain the quantization condition on the U(1) flux, we use F = CK, as discussed above, along with the expression of the Kähler 2-form in (A.9). We take the limit r → ∞ and integrate eq. (B.3) over the S 2 parametrized by θ and φ, recall (A.10). We find 23 The kinetic term is CP 2 F ∧ F , which, using (anti) self-duality of F , becomes ± CP 2 F ∧ F . The latter is a metric-independent topological term, and hence, its energy-momentum tensor vanishes identically. 24 For more detail see [24][25][26]. 25 In a more mathematical language, the second Stiefel-Whitney class of CP 2 is non-zero, indicating that there is a sign ambiguity when spinors are parallel-transported around some paths in CP 2 [54]. Thus, the quantization condition is eC = m + 1 2 with m ∈ Z. Without loss of generality we take e = 1 and conclude that the necessary condition to define spinors on CP 2 is to turn on the quantized monopole field F = m + 1 2 K . (B.5) As described in the main text, we also consider turning on the color, flavor, and baryon backgrounds (2.7), reproduced here for convenience Notice that these are embedded into the Cartan subalgebras of SU(N c ) and SU(N f ) and represent a generalization of the BCF 't Hooft flux backgrounds on T 4 studied in [20]. When the U(1) background F = CK is replaced by (B.6), we obtain, instead of (B.1), for Ψ of unit charge under baryon number, in a representation of N c -ality n c and N f -ality n f , Ψ(s = 1) = e where the last equality follows from the fact that the fractional part of the eigenvalues of H c · ν c is −1/N c (and similar for c → f ). Thus the background (B.6), or eq. (2.7) of the main text, is consistent with parallel transport on CP 1 . The Pontryagin number of the U(1) bundle, using CP 2 K ∧ K = 8π 2 2 , is given by which combines with (B.2) to give the full Dirac index in the combined U(1) and CP 2 background which now has integer values. 26 Likewise, the Dirac index for the fermions of (B.7), in the background (B.6), is JHEP04(2020)097 also given in (2.11) of the main text, which is also an integer. Here, Q B = 1 8π 2 F B ∧ F B and Q c/f = 1 8π 2 tr F (c)/(f ) ∧ F (c)/(f ) , explicitly given by (B.11) Finally, we note that one can use equations (B.7), (B.10), (B.11) to identify gauge theories that can be consistently formulated on CP 2 without turning on global symmetry backgrounds, i.e. by only modifying the conditions on the gauge bundles being summed over in the path integral. Constructions of this type were recently used to uncover a new SU(2) anomaly [27] on non-spin manifolds (note that in our examples all fermions can be given gauge invariant mass and there is no analogue of the new SU(2) anomaly).
The simplest such case [28] is that of an SU (2) theory with N f Dirac fundamental flavors. To see this from our equations, take N c = 2, m c = 1, n c = 1, Q B = Q f = 0, and check that (B.7) holds and (B.10) is an integer (for any single flavor). This SU(2) QCD(F) with N f flavors was interpreted in [28] as emerging near a quantum critical point of a theory of only bosons (heuristically, this is because all gauge invariant operators are bosonic).
Other examples (involving both SU(2) and other gauge groups) are discussed in [27,29]. Within the class of theories considered in this paper (specified in section 2.1) the ones that do not require global symmetry backgrounds to be consistently formulated on CP 2 must obey where the second condition, the integrality of the index, should hold once the first is obeyed. We have not exhaustively studied the solutions of the above conditions for general n c , R c and will only note a few simple cases. The first is QCD(F) with N f Dirac flavors and an SU(N c = 2k) gauge group. As in the SU(2) theory of [28], it is easy to see that all gauge invariant operators are bosons (or that (B.12) holds). The second set of theories where (B.12) is easily seen to hold is QCD(S/AS) with N f S/AS Dirac flavors and an SU(N c = 4k) gauge group. As in the other examples, here also all gauge invariants (e.g. baryons and mesons) are bosons.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 13,214.6 | 2020-04-01T00:00:00.000 | [
"Physics"
] |
Rheological and Mechanical Characterization of Dual-Curing Thiol-Acrylate-Epoxy Thermosets for Advanced Applications
Mechanical and rheological properties of novel dual-curing system based on sequential thiol-acrylate and thiol-epoxy reactions are studied with the aim of addressing the obtained materials to suitable advanced applications. The crosslinking process is studied by rheological analysis in order to determine conversion at gelation and the critical ratio. These parameters are used to discuss the intermediate material structure for each acrylate proportion and their possible application in the context of dual-curing and multi-step processing scenarios. Results from dynamo-mechanical analysis and mechanical testing demonstrate the high versatility materials under investigation and revealed a wide range of achievable final properties by simply varying the proportion between acrylate and thiol group. The intermediate stability between curing stages has been analysed in terms of their thermal and mechanical properties, showing that these materials can be stored at different temperatures for a relevant amount of time without experiencing significant effects on the processability. Experimental tests were made to visually demonstrate the versatility of these materials. Qualitative tests on the obtained materials confirm the possibility of obtaining complex shaped samples and highlight interesting shape-memory and adhesive properties.
Introduction
Crosslinked polymeric materials are used in many application fields because of their excellent thermal and mechanical properties (i.e., aviation, automobile, structures or coatings) [1]. The formation of crosslinked thermosetting network is a non-reversible process involving drastic changes in polymer and network structures which leads to relevant restrictions in shape designs. Recently, the increasing need to reach complex shape designs to cater for the high demand of complex shaped smart materials (i.e., bio-inspired devices or shape-changing materials [2]), is focusing the attention on alternative curing techniques which allow to overcome thermosets limitations in shape. In this context, dual-curing polymer systems have attracted a growing interest as they represent a versatile approach for better controlling network formation and properties during processing [3,4].
Dual-curing processing is a promising methodology to develop thermosetting polymers taking advantage of two compatible and well-controlled crosslinking reactions [5]. These reactions can be simultaneously or sequentially triggered by different stimuli as well as difference in reaction kinetics. Sequential dual-curing allows to obtain materials with two different sets of properties after the first curing reaction (intermediate material) and second curing reaction (final material). This kind of processing is becoming attractive due to the possibility to attain complex three-dimensional structures by means of the accurate control of the materials properties in the intermediate state (i.e., low T g , low crosslinking density and high deformability).
Successful sequential dual-curing processing requires that: (i) both polymerization reactions are selective and compatible so that no undesired inhibition or reactivity effects take place; (ii) they can be triggered using different stimulus such as UV light [6] or temperature [7], or else they have sufficiently different reaction rates so that they can be controlled from a kinetics point of view; (iii) the properties of the final and intermediate stages can be custom-tailored changing the composition of the formulation.
Thermosetting dual-curing materials have been obtained from the combination of a large variety of reactions, but "Click" reactions represent one of the most effective tools to obtain such systems. "Click chemistry" defines a class of reactions that are highly efficient, orthogonal and selective [8]. Moreover, they reach high yields, they can react in mild or solvent-less conditions and they can be applied to broad range of compounds [9][10][11]. For those reasons, "click" reactions are widely used in thermosets preparation and are fit for combination in dual-curing procedures. In particular, thiol-click reactions have attracted great interest due to their advantages (high conversion, solvent-free, oxygen resistance, etc.) [11][12][13][14] which makes them suitable to prepare crosslinked polymers in a fast and efficient way.
Michael-type addition reactions are currently used in dual-curing processing because of the variety of commercially available nucleophiles (Michael donors) and activated double bond compounds (Michael acceptors) that can be used in such processes. One of the most common and clickable Michael donors are thiols. Although thiol-acrylate addiction has a good combination of reactivity, versatility and cost, the final thermosetting structures exhibit week mechanical and thermal properties which makes them unable to fit the standards required for advanced applications [15]. On the other hand, thiol-epoxy "click" reaction, which is also used in dual-curing systems, leads to soft functional materials with enhanced mechanical properties such as high deformability, resistance at break, high impact resistance and adhesion. All these properties make them suitable for advanced applications as coatings [16], adhesives [17] and shape-memory actuators [18]. The combination of these reactions in dual-curing procedure is already reported in the literature: Konuray et al. [6] presented a novel photolatent dual-cure thiol-acrylate-epoxy system where the first curing reaction is the thiol-Michael reaction triggered by UV light (a photobase generator was used as catalyst), and the second reaction is a thiol-epoxy activated at higher temperature. Jin et al. [7] prepared thiol-epoxy-acrylate hybrid polymer networks combining nucleophilic thiol-acrylate Michael addition and thiol-epoxy reactions in a one-pot simultaneous dual-cure catalyzed by 1,8-diazabicyclo [5.4.0]undec-7-ene (DBU) at 80 • C. In our previous work [19] a new thiol-acrylate-epoxy sequential dual-curing system, where both reactions are activated using a single thermal catalyst, was developed. Although thiol-acrylate Michael addition and thiol-epoxy "click" reaction are both thermally-activated within a similar temperature range, the former's faster reaction kinetic ensures the sequentiality of the process.
Suitable choice of monomer feed ratio, structure and functionality makes it possible to obtain an intermediate thiol-acrylate conformable network after the first curing step. This network could be processed in complex shape designs and then, thanks to the presence of unreacted thiol and epoxy groups, a second crosslinking reaction could be triggered leading to the final materials. Therefore, complex-shaped thermosets can be achieved thanks to the two-step processing. Since thermosetting networks formed by thiol-acrylate reaction generally exhibit flexible structures, the addition of a further thiol-epoxy crosslinking process could enhance the final properties of the thermosets. Thermal and mechanical properties of intermediate and final thermosets can be easily tailored controlling the ratio between acrylate and thiol groups (r a ). Therefore, it is possible to obtain intermediate materials with properties ranging from liquid-like to gelled solid-like covering a wide range of possible applications. The critical ratio (r c ), defined as the lower r a at which gelation occurs within the first crosslinking process, sets a boundary line between liquid and solid-like intermediate materials. Therefore, this parameter must be accurately evaluated to address each resulting material to the adequate application. Further analysis of the chemorheological behaviour of these dual formulations, taking into consideration the complex effect of temperature and curing progress on the viscosity during processing [20,21], would also be highly valuable in the simulation and optimization of other processing scenarios [22].
Such design capabilities can be exploited to produce multi-layer assemblies with controlled layer thickness and complex shape, with the purpose of creating complex-shape mechanical actuators [18], making use of the good wetting properties of intermediate lightly crosslinked or nearly gelled materials and the good adhesion obtained after the thiol-epoxy reaction.
The aim of this work is to characterize the mechanical, thermal and rheological properties of a novel thiol-acrylate-epoxy dual-curing system. Mixtures at various thiol-acrylate ratios, covering the entire range between 0 and 1, were prepared and final thermosets properties were studied. The evolution of rheological properties during the curing process were monitored to determine the actual critical ratio for this system. After that, the main mechanical properties (hardness, flexural and tensile modulus, tensile strength and deformation at break), of the totally cured materials (final thermosets) were evaluated in order to highlight the high versatility of the applications of the dual-curing system developed. To characterize the storage time of the intermediate materials, the evaluation of the latency of the second curing reaction was carried out on two different formulations, one with a r a > r c and one with r a < r c . The progression of the thiol-epoxy was evaluated in terms of residual heat (DSC), complex viscosity η* (rheometer) and tensile module E t . Lastly, qualitative and visual demonstrations of different applications are presented to show the possibility of this systems to be exploited for two-stage adhesive technologies and for the preparation of complex-shaped shape-memory actuators.
Formulations have been coded as TCDDAxx where xx is the r a (00 and 10 correspond to respectively thiol-epoxy and thiol-acrylate pure formulations). The compositions of the investigated formulations are listed in Table 1. The dual-curing process was monitored using a rheometer AR-G2 TA Instruments, (New Castle, DE, USA), equipped with an electrical heated plate device (EHP) and 20 mm parallel plate geometry.
The evolution of the storage (G ) and loss modulus (G") was monitored through dynamo-mechanical experiment at 30 • C for 8 h. The oscillation amplitude was set at 0.2% and the frequencies at 0.5, 1.75 and 3 Hz. Gel point was determined as the tanδ crossover at the three different frequencies and the first gelled formulation (r c ) was defined as the formulation with the lower r a that shows a gel point within the first curing process. Gelation during dual-curing procedure was also tested step-wise as follows: first curing stage 3 h at 30 • C with an amplitude of 3%; after that a temperature ramp from 30 to 60 • C at 2 • C/min (with the same oscillation amplitude) and finally second curing stage 2 h at 60 • C with an amplitude of 0.2%. Three different frequencies of 0.5, 1.75, 3 Hz were continuously measured during the whole procedure. The experimental r c was compared with the theoretical value obtained for ideal step-wise processes using the Flory-Stockmayer theory, as follows: where f acrylate and f thiol are the average functionality of acrylate and thiol monomers. Complex viscosity (η*) of the intermediate materials were recorded as function of angular frequency ω (rad/s) at constant deformation in the range of linear viscoelasticity, obtained from constant shear elastic modulus (G ) in a strain sweep experiment at 1 Hz, always at 25 • C.
Mechanical Properties
Final thermosets were analysed with a TA Instruments DMA Q800 (New Castle, DE, USA) equipped with 3-point bending clamp (15 mm) to characterize the relaxation process. Prismatic rectangular samples (15 × 6 × 2.5 mm 3 ) were analysed in oscillation mode at 1 Hz, 0.1% of strain amplitude and imposing a temperature ramp of 3 • C/min from −20 to 120 • C. The T g was determined as the tanδ peak temperature, glassy (E g ) and rubbery (E r ) moduli were determined at 0 and 100 • C, respectively.
Mechanical properties were tested at room temperature to investigate the processability of final materials at normal or usual operating temperatures.
The Flexural modulus (E) of final materials was determined with the same apparatus by means of a force ramp at a constant rate of 1 N/min in controlled-force mode. The slope (m) within the linear zone of force-displacement curve was obtained. E was calculated in accordance with the following equation: where L is the support span and w and t are the width and the thickness of test sample respectively. Tensile properties of dog bone shaped samples (80 mm × 25 mm × 1.5 mm) were obtained by tensile test on a Shimadzu AGS-X 10 kN (Kyoto, Japan) testing machine at 10 mm/min speed, according to ASTM D638-14 (ASTM International, West Conshohocken, PA, USA, 2014) standard. Shore hardness was measured with an Affri durometer type D (Shore-D hardness) according to ASTM D2240-15 (ASTM International, West Conshohocken, PA, USA, 2015) in samples of 4 mm thickness. Ten measurements were done in each sample and the average result is presented.
Latency Test on Intermediate Materials
Latency tests were performed for TCDDA02 and TCDDA06 intermediate thermosets at two different storage conditions (5 and 22 • C). Samples at different storage times were tested using a differential scanning calorimeter (DSC) Mettler DSC-821e (Mettler-Toledo, Greifensee, Switzerland), calibrated using an In standard (heat flow calibration) and an In-Pb-Zn standard (T calibration). Samples of approximately 10 mg were placed in aluminium pans with pierced lids and cured in the oven. After the first curing stage, samples were stored in a climatic chamber at 5 and 22 • C and were periodically tested by DSC dynamic analysis from −20 to 200 • C with a heating rate of 10 • C/min under N 2 atmosphere. The residual heat of the second curing step has been used as measure of the curing degree reached by the samples during the storage.
The evolution of the intermediate materials properties, during a representative storage time interval, were monitored by measuring at T room the tensile modulus for sample TCDDA06 and the viscosity for sample TCDDA02, since the latter leads to a liquid-like intermediate.
Rheological Analysis
Rheological analysis is accepted as a reliable tool to determine the gelation of thermosetting systems. The critical ratio of our system was determined to be r c = 0.33 using Equation (1). In light of the approximations made and taking into account previous experience [23], we expected the real critical ratio to be higher than the theoretical one, so we started the rheological analysis from r a = 0.4. The curing process was monitored at an isothermal temperature of 30 • C according to the results obtained in our previous work [19]. Figure 1 shows the rheological monitoring of the isothermal curing process of three formulations with r a values above the theoretical r c , TCDDA04, TCDDA045 and TCDDA05. As shown in these figures, the network formation during the polymerization process results in a drastic increase of both storage (G ) and loss (G ) moduli. Two polymerization processes are visible in all three figures since the increase in moduli shows a two-step trend. For all three samples thiol-acrylate reaction occurs within the first 180 min (as verified with DSC, results similar to those reported previously [19], not shown) leaves a reaction tale overlapped with the thiol-epoxy which proceed at a low rate because of the low curing temperature. Along the process, the material goes through a substantial transformation: at the beginning the material has liquid-like behaviour and G" is higher than G . As the reaction takes place, structure starts to develop leading to molecular weight increase, resulting in an increase of G" and also G . When gelation takes place, a network develops, and the material start to behave as solid (G > G"). Figure 1a shows that no gelation is detected in the first three hours of curing of formulation TCDDA04, and the gel point is clearly visible only after the second crosslinking process is started. Gelation occurs right before the end of the analysis and G" remains higher than G . Raising the r a to 0.45, gelation occurs after 194 min ( Figure 1b) and it seems to take place right after the first curing reaction ends. Finally, as shown in Figure 1c, TCDDA05 formulation has the gel point within the first curing process and during the first 3 h of curing. As a result, the r a of 0.45-0.5 can be established as the actual r c . As already reported for other dual-curing systems [23], the effective critical ratio (r c = 0.45-0.5) is higher than the one obtained theoretically from the Flory-Stockmayer relationship. Deviation from the ideal step-wise behaviour could be explained by the action of intramolecular cyclization which would lead to a delay in gelation phenomenon [24,25]. The presence of impurities in the S4 (reported by Sigma-Aldrich to be lower than 5%) could reduce the actual functionality of the curing agents resulting in higher conversion at gelation.
The rheological behaviour for samples TCDD04 and TCDDA05 was also monitored during a dual-curing procedure divided as follows: a first isothermal step at 30 • C for 180 min a temperature ramp at 2 • C/min from 30 to 60 • C and, finally, a second isothermal step at 60 • C until G and G" reach a plateau. The results can be observed in Figure 2 (TCDD04) and Figure 3 (TCDDA05). Starting with TCDDA04, Figure 2a shows that, G" remains higher than G along the first curing stage thus, the material is still acting as liquid since the network is not already formed. G" decreases during the heating ramp due to the decrease in viscosity with temperature, and so does G . Crossover of G and G" is clearly visible in the second stage, within the thiol-epoxy crosslinking process. The evolution of tanδ during the first and the second stage of curing is presented, respectively, in Figure 2b,c. As expected, tanδ crossover is only visible about 30 min after the second stage has started. In the case of formulation TCDDA05, Figure 3a shows that, for crossover of G and G" takes place at the very end of the first curing stage, but the tanδ crossover is clearly observed in the first curing step (Figure 3b), therefore confirming that an intermediate gelled material could be obtained with a proportion of 0.5. It can also be observed in Figure 3a a slight decrease in G" during the heating ramp from 30 to 60 • C, while G remains fairly constant and even increases due to the existence of an incipient network that is developing.
Mechanical and Thermomechanical Analysis
In this section, thermomechanical properties and the mechanical behaviour of the final thermosets are discussed. The aim of this characterization is to evaluate the properties which determine the final use of the material in real applications. Figure 4 presents the evolution of storage modulus E and tanδ with temperature during the network relaxation of the different materials, obtained by a DMA temperature sweep analysis. Table 2 summarizes some relevant parameters associated with the network relaxation. As expected, raising the proportion between thiol and acrylate results in lower T g s for the final materials because of the softening effect of the thiol-acrylate network. The shape of the tanδ peak during the material relaxation can be correlated with its network structure: the higher and narrower the peak of tanδ, the more homogeneous and mobile the network structure [26]. Similar values of FWHM around 11 • C were obtained with both thiol-epoxy and thiol-acrylate pure formulation since these networks were built with the same crosslinking functionality. Dual formulations present broader tanδ transition paths due to the higher heterogeneity of the network, which is made up of two different crosslinking reactions. Belmonte at al. [27] reported a strong relationship between the sharpness of the relaxation process and the rate of the shape-recovery process. Materials with higher FWHM values are unfavourable for shape-memory applications because broader relaxation profile leads to a slowdown of the recovering process. In general, broad transitions lead to an undesired anticipation of the softening of the materials with respect to the expected T g . It can also be observed that the relaxed modulus E r decreases with increasing acrylate ratio, as a consequence of the higher mobility of the TCDDA network in contrast with the DGEBA network, rather than a difference in crosslinking density. In contrast, the glassy modulus E g increased with increasing acrylate ratio, suggesting that a better chain packing and stronger intermolecular interactions take place in acrylate rich formulations. Tentatively, this can be rationalized in terms of the higher mobility of the TCDDA monomer in comparison with DGEBA, and the higher presence of polar carbonyl groups. Flexural Modulus (E) for all the samples was calculated from 3-point-bending test by means of Equation (2). As can be observed in Table 2, some samples present a very low value of E because their relaxation processes lie around room temperature (see Figure 4). Although these results are not representative values of Young Moduli in glassy state for these samples, they provide an effective measure of the material behaviour at the temperature of usage. Low flexural moduli result in high conformability at room temperature making these materials suitable for advanced applications based on soft materials (i.e., soft robotics). Table 2. Thermomechanical data collected from DMA (dynamic mechanical analysis). Coefficients of variation less than 2% for thermomechanical data and 5% for Flexural Modulus.
Sample
T Tensile behaviour of final materials was also evaluated from tensile test in a universal testing machine. Tests were made at room temperature and the data obtained is presented in Table 3. Significant differences in terms of tensile modulus are observed between the thermosets above and below the r c . This trend is due to the shift of the glass transition towards T room when increasing the acrylate proportion, with the consequent softening of the materials. The effect of the acrylate proportion is clearly visible in the stress-strain curves in Figure 5. When r a = 0 (sample TCDDA00), a sudden fracture appears right after the yielding point and almost no plastic deformation is observed. When the acrylate ratio r a is raised, a gradual increase in network deformability is observed in samples TCDDA02 and TCDDA04 (Figure 5a). A drastic change in tensile behaviour at T room is observed in samples with r a > r c (Figure 5b). For TCDDA06 (r a = 0.6), T g ≈ T room and network relaxation occurs during the strain-stress experiment. Network relaxation allows a higher dissipation of stress by viscous friction of polymer chains, reaching higher deformability and a relative high strength value at break with strain hardening at the end, as commonly observed in the programming of shape-memory thermosets at temperatures close to their relaxation temperature. However, when r a > 0.6 (samples TCDDA08 and TCDDA10), it is observed that T g < T room . In consequence, this stress-absorbing network relaxation mechanism is no longer operative, and therefore it is the relaxed network structure the main responsible for the mechanical response. A low elastic modulus is measured and, given the limited stretching ability of the polymer chains, a low stress at break σ max is obtained. Consequently, increasing the epoxy-thiol content in the final network, higher strength at break can be obtained. The same effect is also visible in terms of strain energy density as it can be appreciated in Table 3. The differences in network relaxation state at T room results in an increase of the material capability to absorb energy during deformation which achieves the highest value for TCDDA06 and then drastically decrease for TCDDA08 and TCDDA10. A similar trend is observed in hardness measurements; hence the same conclusions are applicable to this property. From the analysis of the data reported in Table 3 and Figure 5, it can be observed that the behaviour of TCDDA02 and TCDDA08 is very close to that of TCDDA00 and TCDDA10, respectively, suggesting that small deviations from the pure networks produce minor effects on final thermosets properties. This analysis confirms that a wide range of final properties can be obtained by varying the composition of the formulation: the thiol-acrylate proportion in the network has been proved to have a softening effect on the final network, resulting in lower T g and higher capability of absorbing energy during application deformation process. An interesting combination of these two reactions is present in the 0.6 proportion between acrylate and thiol: a highly deformable final material is be obtained together with an intermediate gelled network that make it suitable for application in which two-step processing is required (with a highly conformable manufacturing step in the intermediate one). On the other hand, acrylate proportions between 0.2 and 0.4 lead to a decrease in T g with respect to the characteristic thiol-epoxy T g , without a major loss in terms of mechanical properties (final strength, E t , hardness). In this case, these materials could be exploited for two-stage application in which a viscous intermediate is required, such as adhesives or coatings.
Latency and Storage Time
The evaluation of latency of the second reaction represents a crucial analysis in dual-curing procedure. The storage time of the intermediate materials is strictly related to the amount of time required to activate the second reaction at T room (or at different storage temperature), without significant alteration of the intermediate material properties. For that purpose, we chose to analyse the storage stability of TCDDA02 and TCDDA06 formulations.
To begin with, the first curing stage was carried out in the oven at 40 • C for 2 h. After that, they were stored at 22 and 5 • C in a controlled climatic chamber. The samples were analysed by DSC at determined storage times in order to monitor the evolution of residual heat with storage. As shown in Figure 6a, intermediate TCDDA06 material experiences no significant changes in residual heat during 10 hours of storage at 22 • C. After that, a drastic decrease in residual heat is observed and the samples take six days (144 h) to react completely curing at storage conditions. On the contrary, TCDDA02 shows a rapid decrease in residual heat during the first six hours of storage and it reaches a plateau after 24 h. In this case, even if the thiol-epoxy reaction proceeds faster thus considerably reducing the storage time, it never reaches the complete curing because vitrification takes place during storage. Main differences between TCDDA02 and TCDDA06 can be explained by the higher epoxy content of TCDDA02, which enhances the reactivity of the epoxy-thiol reaction [28] and is therefore less stable. In fact, premature activation of the epoxy-thiol reaction might also take place during the first curing step. A practical consequence of this is that controlled curing sequences are more easily achieved in dual formulations with a moderate or low epoxy content. In addition, the fully cured TCDDA02 material has a higher T g , clearly above storage temperature (see Table 2), therefore explaining vitrification during storage. In contrast, the lower of T g of TCDDA06 (see Table 2, calorimetric T g is even lower [19]) makes it possible to react completely under storage conditions.
Adhesive Bonding
The characteristic two-stage manufacturing process achievable with dual-curing system can be exploited for applications such as dry bonding adhesives [29]. Intermediate materials with r a = 0.2 can be suitable to be used as viscous adhesive which can be easily spread on the adhesion surface. After the application, the second curing stage can be performed resulting in an extremely good final adhesion of the surfaces (Figure 8a). The curing temperature (and time) can be adjusted to the thermomechanical properties of the materials that we want to adhere. In this case, we can take advantage of the dual-curing procedure to reduce the shrinkage, but difficulties in final thickness control arise with thicker adhesive layers because of the viscosity in the intermediate stage. Using TCDDA06 formulation, intermediate solid materials sheets with different thickness can be obtained and used as solid-like adhesives that can be easily adjusted to the shape of the adhesion surface. As shown in Figure 8b, a precise control of the final thickness is thus achieved thanks to the gelled network formed during the first curing stage. The final thickness of the layer remains close to the designed thickness (the thickness of the mould used to prepare each sheet). The effective strength of the bonding and the relation between strength and adhesive layer thickness will be analysed in depth in a future work. The high conformability of the intermediate material with r a = 0.6 can also be exploited for different kind of adhesion joint such as external joining of tubes or pipes [19]. The precise control of the thickness also results in the possibility of stick together different shapes, adjusting the thickness to the complexity of the shapes. In Figure 9, different thickness intermediate films of TCDDA06 were placed in-between two ABS pieces in order to stick them together in a one-piece-sample.
Shape-Memory Devices
Finally, the shape-memory behaviour of these materials has been qualitatively investigated taking advantage of the ease of processing of the intermediate material into complex shapes. Final spring-shaped samples were obtained with a two-step curing process of TCDDA06. The uncured formulation was poured into a thin silicon tube with the aid of a syringe (Figure 10a) and both ends of the tube were sealed. Afterwards it was cured through the first curing stage, with the tube acting as a mould, and intermediate materials in the form of wires were obtained after removing the silicon tube. The wires were wrapped around a cylindrical rod (Figure 10b) and cured for the second stage. As shown in Figure 10c, finished springs of controlled dimensions (perfect circular cross-section) were obtained with this procedure highlighting the capability of this material of being processed in complex shapes in the intermediate state. TCDDA02 wires were prepared removing the material from the silicon tube after the second curing step. Here again, excellent surface finish is obtained, with a constant cross-section in the wire along its whole length, as seen in Figure 10d. The shape-memory properties of the spring and the wire shapes obtained were qualitatively tested. Springs prepared with TCDDA06 formulations were programmed at T g + 20 • C in a compressed spring (Figure 11a). Conversely, TCDDA02 wire was programmed in the shape of spring rolling up the heated sample around a cylindrical rod (Figure 11b). Figure 11. Programming of TCDDA06 and TCDDA02 samples: (a) TCDDA06 with a permanent spring shape is programmed in the form of a compressed spring. (b) TCDDA02 with a permanent shape of wire is rolled up around a cylindrical rod and a temporary spring shape is obtained.
Shape fixation was obtained in both situations cooling the deformed samples down to 5 • C. The permanent shape can be recovered in the oven by heating to 75 • C for TCDDA02 and 60 • C for TCDDA06 (T g + 20 • C) as shown in Figure 12, leading to a complete recovering of the initial shape. As can be seen in Figure 12a the compressed spring recovered perfectly the initial length and the actuation seemed perfect to be exploited for a shape-memory actuator. However, the capability to give a work-output during the recovery process is somehow limited by the low stiffness of the final material. Some partially constrained recovery tests were done but the compressed spring sample was only able to recover is initial shape if a light weight was applied. This problem could be overcome by increasing the functionality of the epoxy resin thus obtaining final materials with tailorable stiffness and without affecting the high deformability of the intermediate state. This problem will be object of further studies. Figure 12. (a) TCDDA06 shape-recovery process to permanent spring shape (b) TCDDA02 shape-recovery process to permanent wire-shape.
Conclusions
In this work dual-curing thiol-acrylate-epoxy system was rheological and mechanically characterized and potential applications of the obtained materials were proposed. Rheological analysis was performed to determine the actual critical ratio for this system which defines the material behaviour in the intermediate stage. Mechanical and thermomechanical characterization of the materials resulting from the complete curing was performed by means of dynamo-mechanical analysis, Flexural 3-point-bending, tensile test and Shore-D hardness. Stability of the intermediate materials during storage were also evaluated by monitoring the advancement of the reaction in terms of thermal and mechanical properties.
Rheological analysis of the curing process showed that gelation takes place within the first curing stage when the acrylate-thiol ratio is r a > 0.45-0.5. A wide range of both solid-like and liquid-like intermediate materials can be obtained varying the r a respectively above and below the r c . Thermomechanical characterization shows a network softening effect (decrease in flexural and E r moduli) together with a toughening effect in the glassy state (E g ) with the increase of r a .
Characterization of final thermoset mechanical behaviour at room temperature reveals that a wide range of properties can be obtained varying the proportion between acrylate and thiols group. An interesting combination of the properties of the two network is obtained with 0.6 proportion: final materials with high deformation at break with a gelled intermediate state is obtained. Moreover, slight variation from the pure thiol-epoxy mechanical properties were observed when a small thiol-acrylate proportion is added to the network, meaning that a two-stage curing procedure can be obtained without significant worsening the final materials properties. These evidenced features make this system suitable to be exploited for a large variety of advanced application such as shape-memory actuator and two-stage adhesives.
Storage stability of the intermediate material, between the first and second curing stage, was investigated at storage temperatures of 25 and 5 • C and analysed with DSC and mechanical or rheological characterization. Differences in storage stability were found to be strictly related to the acrylate/epoxy proportion in the mixture. Formulations with a lower acrylate content have lower intermediate stability because the higher content of epoxy groups enhances the reactivity of the second stage thiol-epoxy reaction. Anyway, a relevant amount of time, in which processability of the materials was not affected, is observed for both formulations.
Visual qualitative examples have been presented to demonstrate the possibility to obtain complex shapes for shape-memory actuators and two-stage adhesives with controlled adhesive layer thickness and excellent adhesion. | 7,711.2 | 2019-06-01T00:00:00.000 | [
"Materials Science"
] |
The influence of temperature and photoperiod on the timing of brood onset in hibernating honey bee colonies
In order to save resources, honey bee (Apis mellifera) colonies in the temperate zones stop brood rearing during winter. Brood rearing is resumed in late winter to build up a sufficient worker force that allows to exploit floral resources in upcoming spring. The timing of brood onset in hibernating colonies is crucial and a premature brood onset could lead to an early depletion of energy reservoirs. However, the mechanisms underlying the timing of brood onset and potential risks of mistiming in the course of ongoing climate change are not well understood. To assess the relative importance of ambient temperature and photoperiod as potential regulating factors for brood rearing activity in hibernating colonies, we overwintered 24 honey bee colonies within environmental chambers. The colonies were assigned to two different temperature treatments and three different photoperiod treatments to disentangle the individual and interacting effects of temperature and photoperiod. Tracking in-hive temperature as indicator for brood rearing activity revealed that increasing ambient temperature triggered brood onset. Under cold conditions, photoperiod alone did not affect brood onset, but the light regime altered the impact of higher ambient temperature on brood rearing activity. Further the number of brood rearing colonies increased with elapsed time which suggests the involvement of an internal clock. We conclude that timing of brood onset in late winter is mainly driven by temperature but modulated by photoperiod. Climate warming might change the interplay of these factors and result in mismatches of brood phenology and environmental conditions.
INTRODUCTION
The timing of life-history events, such as flowering in plants, insect emergence, and reproduction, with respect to the changing abiotic and biotic conditions of the environment is critical for most organisms (Van Asch & Visser, 2007;Visser, Both & Lambrechts, 2004). In temperate regions, environmental conditions during winter are important drivers of phenology (Williams, Henry & Sinclair, 2015) as organisms need to cope with low temperature conditions and often drastically reduced resource availability. Most ectotherms hibernate in a state of dormancy at different stages of development. Endothermic mammals generally keep their body temperature actively above ambient temperature, but often go into a state of reduced metabolism, i.e., hibernation or daily torpor, to reduce energy expenditure and tend not to reproduce during winter (Körtner & Geiser, 2000). Due to their capability of social thermoregulation, honey bees (Apis mellifera L.) are able to maintain colonies over the whole year (Jones & Oldroyd, 2006), using a strategy analogous to hibernation in mammals. Much like mammals that undergo hypothermic phases during hibernation, the honey bee colony is effectively heterothermic. When the colony experiences cold stress the workers of a colony tend to remain relatively inactive and cluster up densely in the so-called winter cluster to reduce colony heat loss (Southwick, 1985), while individual workers actively produce heat by flight muscle shivering to keep the cluster core temperature above ambient temperature (Esch, 1964;Stabentheiner, 2005). In brood rearing honey bee colonies, the degree and accuracy of thermoregulation is exceptionally high (Fahrenholz, Lamprecht & Schricker, 1989;Jones et al., 2004;Kronenberg & Heller, 1982). This is necessary as the larvae of honey bees require a higher and more stable temperature than workers to survive and develop well. Even minor deviations from the optimal temperature-window during development can lead to decreased fitness in adult workers (Jones et al., 2005;Tautz et al., 2003). Thermoregulation is highly energy demanding (Stabentheiner, Kovac & Brodschneider, 2010). To save resources while foraging is not possible, honey bee colonies refrain from large-scale brood rearing during temperate zone winters. Anticipating resource availability in spring, colonies resume brood rearing already in late winter. The timing of brood onset is critical for colony fitness (Seeley & Visscher, 1985). Premature brood onset increases the risk of starvation before spring bloom and can lead to increased loads of the brood parasite Varroa destructor. Late brood onset, on the other hand, decreases the ability to exploit spring bloom. In both ways, wrong timing of brood onset can result in reduced colony growth, colony reproduction, and increased mortality during hibernation. Emergence from hibernation before new resources are available is also seen in several mammal species. Increased risk of predation and starvation are hazarded in order to reproduce early so that the offspring has sufficient time to develop and build up resource storages or fat-tissue before the next winter (Körtner & Geiser, 2000;Meyer, Senulis & Reinartz, 2016).
To date, very little is known on how honey bee colonies achieve an optimal timing of brood onset and which environmental factors are used as predictive cues during winter. Across many taxa increasing ambient temperature and length of photoperiod serve as cues to time phenological events like emergence after hibernation or reproduction (Bradshaw & Holzapfel, 2007;Körtner & Geiser, 2000;Visser, 2013). In addition, endogenous circannual clocks can control the timing of hibernation (Körtner & Geiser, 2000). Nothing is known about the role of internal clocks for timing of brood onset in honey bees. But it is generally assumed that ambient temperature does affect brood rearing activity in honey bee colonies in winter and it has been shown that photoperiod can affect brood rearing activity in summer (Kefuss, 1978). Empirical evidence for effects of ambient temperature or photoperiod on brood rearing in winter, however, is still lacking. This is probably because tracking the status of brood rearing within the winter cluster is difficult and generally highly invasive. We argue that a new method to detect brood rearing without disrupting the winter cluster is necessary to increase our understanding of the phenology of brood rearing activity in honey bee colonies. In light of ongoing climate change, well-founded information on the impact of environmental conditions on honey bee phenology is critically needed if we want to assess potential consequences of climate change for one of the most ecologically and economically important pollinators (Potts et al., 2016). Climate change and especially changing winter conditions have already been shown to alter timing of life history-stages in many organisms (Williams, Henry & Sinclair, 2015) and resulting mismatches with the environment can lead to severe fitness losses in wild bees (Schenk, Krauss & Holzschuh, 2018).
In this study we demonstrated that tracking the daily temperature variation within the winter cluster allows to draw conclusions on the state of brood rearing in a minimally invasive way. We applied this method to investigate the effects of ambient temperature, photoperiod and elapsed time on the brood rearing status within the winter cluster of honey bee colonies. We expected ambient temperature to have a major effect on timing of brood onset that is modulated by photoperiod and elapsed time.
Study organism
Twenty-four equally sized colonies of A. mellifera carnica (Pollmann, 1879) headed by sister-queens were established in July 2014. Queens were artificially inseminated with 8-10 ml sperm of 10 drones all belonging to the same drone population in cooperation with the Institut für Bienenkunde, Oberursel, Frankfurt University. Artificial swarms with 600 g of workers and a queen were placed into two-storied miniPlus-hive boxes with 12 empty wax-sheet frames and fed with sugar syrup (Apiinvert; Südzucker, Mannheim, Germany) during August to October 2014 to enable comb construction and ensure sufficient honey stores. Colonies were treated against the brood parasite V. destructor using Bayvarol Ò -strips (Bayer AG, Leverkusen, Germany) for six weeks in August and September 2014 as a precaution measure. No visually noticeable signs of common diseases were detected during two-weekly colony monitoring until September 2014. It was confirmed that all colonies successfully reared worker brood before hibernation and all colonies were adjusted in September 2014 to make sure that they contained approximately the same amounts of workers and honey stores. All colonies were placed into two environmental chambers in December 2014 (12 colonies in each chamber) and kept at 0 C daily mean temperature with daily oscillation from -3 C during midnight to +3 C at noon and under constant short-day conditions of 8 h photoperiod. Within the environmental chambers, each colony was connected to a separate flight arena with an individually controllable LED light source (36 cold white (6500 K) LEDs and six UV-LEDs; ∼2000 lx illuminance), diffused with a sandblasted glass cover (Fig. 1). Honey bees could enter the flight arena via a short tunnel. The tunnels were covered with reflective aluminium foil to increase the amount of light that passes from the arena into the hive box to be perceived by honey bees in the winter cluster. To identify effects of ambient temperature and photoperiod on brood rearing activity, individual temperature and light regimes were started at 28th January. All applicable institutional and national guidelines for the care and use of animals were followed. Honey bee colonies were placed into experimental hive boxes, based on two miniPlus-styrofoam boxes, each with six comb frames. Hive boxes were connected to a third styrofoam box that served as flight arena via a short tunnel. (B) An array of LEDs was installed into each flight arena and allowed to implement individual light regimes for each colony. (C) A thermo-sensor was installed into the wax of the second to fourth comb on both hive levels in a way that allowed to track temperature on both sides of the comb. (D) Within each hive level thermosensors on consecutive combs were installed in alternating order, either between the left and the middle third of the comb or between the right and the middle third of the comb. This pattern was reversed on the other hive level to maximize the area covered by thermo-sensors. This allowed to keep track of thermoregulatory activity within the experimental colonies at relative high spatial resolution without disturbing or disrupting winter clusters. A photoelectric barrier within the tunnel between hive box and arena connected to a data logger allowed to track honey bee traffic between hive box and arena. All colonies were placed into two dark environmental chambers. A wire mesh bottom in the flight arena and hive box and metal lid on the flight arena top facilitated temperature exchange through convection and conduction to make sure that the honey bee colonies were not isolated from ambient temperatures. Photo credit: Fabian Nürnberger.
Full-size DOI: 10.7717/peerj.4801/ fig-1 Temperature regimes To investigate the effects of ambient temperature on brood rearing in honey bee winter clusters, colonies were distributed equally into two temperature treatments ( Fig. 2): (a) In environmental chamber A the temperature remained at constant cold conditions of 0 ± 3 C for 78 days after the start of the experiment as a control.
(b) Imitating a spell of warm weather, ambient temperature in environmental chamber B was gradually upregulated to 11 ± 3 C after day 30 and after a warm period of 15 days, ambient temperature dropped again to cold conditions of 0 ± 3 C. Figure 2 Temperature and light regimes. At the start of the experiment, 24 honey bee colonies within experimental hive boxes were distributed equally among two environmental chambers (environmental chamber A and B) that differed in ambient temperature regime. Each colony was connected to its own flight arena with individually controllable light regime and distributed among three different light regimes (constant, increasing and peaking photoperiod) independently from ambient temperature regime. This allowed us to test for effects of ambient temperature and photoperiod in isolation as well as for interacting effects on brood rearing activity in honey bee winter clusters. At day 78 and day 75 respectively the experiment was terminated and all colonies were released from the environmental chambers and placed outside on a meadow at the campus of the University of Würzburg at 6th March 2015.
Light regimes
To check for effects of total photoperiod and photoperiod changes on brood rearing activity colonies were assigned to three different photoperiod regimes ( Fig. 2): (a) Constant photoperiod: short-day conditions with an 8 h light to 16 h dark cycle (8:16 LD), which reflects the minimum day length in Central Europe and served as control treatment.
(b) Increasing photoperiod: steadily increasing duration of photoperiod, starting at 8:16 LD with daily increase of 2 min 40 s, which is a simplified but realistic scenario for Central Europe between winter and summer solstice.
(c) Peaking photoperiod: photoperiod starting at 8:16 LD with a steady increase in photoperiod of 10 m 40 s each day for 45 days to a maximum of 16:8 LD, followed by a steady decrease of 10 min 40 s each day until the end of the experiment. This additional experimental scenario was introduced to allow examination of effects of photoperiod change independently from photoperiod duration.
Tracking of comb temperature
Comb temperature in each colony was tracked by eight thermo-sensors (Maxim Integrated DS1921G-F5 Thermochron iButton; 0.5 C resolution) that were embedded into the central wax layer of combs to keep track of winter cluster activity (Fig. 1). Temperature was measured in 3 h intervals. At each interval, the sensor that measured the highest temperature was considered as being closest to the center of the winter cluster and used in the statistical analyses as measure for comb temperature. When the in-hive temperature was up regulated to over 30 C and the daily variation was not higher than 1.5 C, colonies were defined as brood rearing (Kronenberg & Heller, 1982). Ambient temperature for each colony was tracked via a thermo-sensor in the respective flight arena.
Statistics
The statistical software R version 3.4.0 (R Core Team, 2017) was used for data analysis. For each observation day colonies were classified as brood rearing if the comb temperature was stable with a daily amplitude of comb temperature 1.5 C. A linear-mixed effects model was used to test for the effects of ambient temperature and comb temperature variability on mean comb temperature. Data was square root transformed to meet requirements of normal distribution. A contrast matrix was used post hoc to test for differences between individual factor levels. We used a generalized linear mixed-effects model for binomial data to test for interacting effects of temperature phase and light regime on the proportion of days during which brood rearing occurred in colonies for each temperature phase and light regime combination. Only data from environmental chamber B was used to analyse interactions between the environmental factors. Temperature in chamber A remained constant at all times, making its data inadequate to assess interactions. Differences between individual levels of factors were tested post hoc using Tukey's test. The effect of photoperiod duration on the proportion of colonies that were rearing brood for each day was tested, using a generalized linear mixed-effects model for binomial data. A linear mixed-effects model was used to test for effects of direction of photoperiod change on probability of brood rearing. We used a linear mixedeffects model to test for the effect of the direction of change of photoperiod on the probability of brood rearing. Only data from colonies that were kept at constant low temperature conditions was used to test for effects of photoperiod duration or direction of change of photoperiod on brood rearing status. The effect of time spent within the experiment on proportion of colonies that reared brood was tested using a generalized linear-mixed effects model for binomial data. This was done for a subset of colonies under constant cold and short-day conditions, as well as for all colonies, regardless of treatment combination. Colony ID was included as random factor in all models. Benjamini-Hochberg correction for multiple testing was applied for all post hoc tests (Benjamini & Yekutieli, 2001). Model residuals were inspected visually to confirm normality and homoscedasticity. Sample sizes and degrees of freedom were based on numbers of observation days. For all models, a significance level (a) of 0.05 was considered.
One colony under constant cold temperature and peaking photoperiod conditions was removed from the statistical analyses because the temperature profiles revealed that it was still rearing brood at the beginning of the experiment and continued to rear brood during the whole experiment. Three colonies within environmental chamber A and one colony within environmental chamber B were removed from the analyses because they died early in the experiment. This left the treatment combination of constant cold temperature and increasing photoperiod with only two colonies. As data from all colonies within chamber A were combined to analyze effects of photoperiod, this should not have compromised statistical analysis. All other treatment combinations were left with at least three colonies. Four colonies were lost during the second half of the experiment. Observation days from these colonies were included into the analyses until temperature profiles became unstable and eventually dropped to ambient temperature level. A total of 1,325 observation days from 19 colonies contributed to the statistical analysis.
Variability of comb temperature
Stability of comb temperature and mean ambient temperature had interacting effects on mean comb temperature measured in the winter cluster (interaction: stability of comb temperature  mean ambient temperature: F 1, 1271.85 = 8.26; p = 0.004; n = 1,325 observation days from 19 colonies; Fig. 3). When comb temperature was stable (i.e., daily amplitude of comb temperature 1.5 C) mean comb temperature was significantly higher than when comb temperature was variable (i.e., daily amplitude of comb temperature >1.5 C; z = 6.19, p < 0.0001) and no longer affected by ambient temperature (Tukey's post hoc test: z = 1.60, p = 0.111). This state of stable comb temperature was considered a strong indicator of brood rearing activity. Stable comb temperature was used to identify brood rearing activity in colonies for all following analyses. When comb temperature was variable, mean comb temperature was negatively correlated with ambient temperature (Tukey's post hoc test: z = -3.35, p = 0.001). Colonies were considered to not rear significant amounts of brood in this state.
Effects of ambient temperature and light regime on brood rearing activity
There was a significant interaction between the effects of ambient temperature and light regime on the proportion of days during which colonies reared brood (i.e., daily amplitude of comb temperature 1.5 C; data from environmental chamber B; temperature conditions  light regime: F 4, 34 = 2.26, p < 0.023; n = 752 observation days from 11 colonies; Fig. 4). Under short-day conditions, the probability of brood rearing increased when ambient temperature was increased (11 ± 3 C; Tukey's post hoc test: z = 4.34, p < 0.001). A drop of ambient temperature back to 0 ±3 C after the warm period did not significantly reduce the brood rearing activity (Tukey's post hoc test: z = -1.85, p = 0.146). Surprisingly, there was no significant effect of ambient temperature on brood rearing under conditions of increasing or peaking photoperiod.
Under constant low temperature conditions of 0 ± 3 C within environmental chamber A the duration of photoperiod had no significant effect on the proportion of colonies that reared brood (F 1, 570 = 0.10, p = 0.755; n = 573 observation days from eight colonies; Fig. 5). The direction of change of photoperiod had no significant effect on the proportion of days during which colonies reared brood (F 2, 8.09 = 1.72, p = 0.238; n = 573 observation days from eight colonies; Fig. 6). Independent of the tested environmental factors, the proportion of colonies that reared brood (i.e., daily amplitude of comb temperature 1.5 C) significantly increased over time in both a subset of colonies that were all kept at constant cold and short-day conditions without further environmental cues (F 1, 222 = 3.81, p = 0.045; n = 225 observation days from three colonies; Fig. 7) as well as across all treatments (F 1, 1320 = 24.47, p < 0.0001; n = 1,325 observation days from 19 colonies).
DISCUSSION
We demonstrated that tracking comb temperature with thermo-sensors is a valuable minimally invasive method to track brood rearing activity in honey bee hives, even during winter. Applying this method, we could show that onset of brood rearing in honey bee winter clusters is affected by environmental conditions. In our experimental setting, colonies were more often found to rear brood after ambient temperature was increased than during the preceding cold period. Neither duration of photoperiod nor the direction of daily change of photoperiod alone had a significant effect on brood rearing activity within winter clusters. However, the light regime did affect the response of winter clusters to temperature changes. There was only a significant response to temperature increase in colonies that were kept at constant short-day. While interacting effects of different abiotic conditions could help to minimize the risk of premature brood onset, our results suggest that increasing winter temperatures and more frequent spells of warm weather due to global climate change could result in advanced timing of brood onset. This might cause mismatches with the environment with negative consequences for honey bee colony fitness and pollination services. Independent of the measured environmental factors, onset of brood rearing also became more probable with time, which could indicate the involvement of an internal clock. This study is, to the best of our knowledge, the first where individual and combined effects of ambient temperature and photoperiod on honey bee winter cluster activity were investigated under controlled conditions. Our experimental design allowed us to keep track of honey bee colony thermoregulation and thereby brood rearing activity under defined environmental conditions and without disturbing the colonies. We provide an alternative approach to earlier studies which were either extremely invasive (Avitabile, 1978) or not conducted under winter conditions (Fluri & Bogdanov, 1987;Harris, 2009;Kefuss, 1978). Indirectly detecting brood rearing by tracking thermoregulatory activity via thermo-sensors within the comb wax allowed us to investigate honey bee colonies under winter conditions without severely affecting honey bee behavior and colony health. By analyzing patterns of daily comb temperature variation, we could identify days where colonies performed intensive thermoregulation. A daily comb temperature amplitude within the winter cluster of maximally 1.5 C, despite a considerably higher ambient temperature amplitude, was accompanied by an increase of mean comb temperature to more than 30 C. Further, in this state mean comb temperature was not affected by mean ambient temperature. Such conditions were previously measured in the presence of capped brood within the winter cluster (Kronenberg & Heller, 1982). When colonies rear brood, the cluster core temperature is highly important and needs to be stable to allow for a proper development of brood (Jones et al., 2005;Tautz et al., 2003). We conclude that daily temperature amplitude measured within the winter cluster is a good predictor for brood rearing activity. It is important to keep in mind that the spatial resolution of temperature data was limited and small brood nests might not have been detected in all cases. In fact, even in temperate zones continuous brood rearing during winter could be common, albeit at very limited extent (Avitabile, 1978;Harris, 2009;Szabo, 1993). Once the brood nest grows and colonies start to rear brood at considerable amounts, this can be expected to be reflected in the temperature data obtained from our experimental setting. Although some uncertainty about the status of the colony will remain, we argue that this indirect method is preferable over the much more invasive method of disrupting the cluster to visually assess brood status.
In our experiment brood rearing activity was rarely detected under cold environmental conditions (i.e., -3 to +3 C). Once ambient temperature increased, colonies were more often found to rear brood. The effect of ambient temperature on brood rearing activity is not surprising. The energy demand of thermoregulation necessary for brood rearing increases with decreasing ambient temperature (Kronenberg & Heller, 1982). As the resources needed to fuel thermoregulation are strongly limited, honey bee colonies should refrain from brood rearing under cold environmental conditions (Seeley & Visscher, 1985;Southwick, 1991). With increasing ambient temperature thermoregulation, and hence brood rearing, becomes less cost intensive and more viable, even when colonies need to solely rely on storages. Ambient temperature was previously also shown to have a strong effect on timing of increased thermoregulation after hibernation in ants of the Formica-group (Rosengren et al., 1987) as well as timing of hibernation and emergence in mammals (Körtner & Geiser, 2000;Meyer, Senulis & Reinartz, 2016;Mrosovsky, 1990;Ruf et al., 1993). After colonies started to rear brood, a drop of ambient temperature did not immediately cause them to stop. Pheromones released by honey bee larvae are known to stimulate brood rearing and associated behaviors in workers (Pankiw et al., 2004;Sagili & Pankiw, 2009). Hence, the mere presence of brood might have stimulated the workers to continue brood care and keeping the brood combs warm, even when mean ambient temperature was as cold as 0 C. This may cause honey storages to run out quickly and leave colonies starving. It is possible that, once triggered, only a disruption of honey or pollen stores will ultimately force a stop of brood rearing activity. It is important to note that, despite a relatively large increase of ambient temperature, the proportion of days during which we detected brood rearing activity in our experiment only increased by about 30%. This reaction was weaker than expected and suggests that further factors are involved in the timing of brood onset.
Our data revealed that photoperiod in isolation had no effect on brood rearing activity. Neither duration of photoperiod nor direction of change of photoperiod affected brood rearing under cold conditions. It might be possible that honey bees are not able to measure photoperiod when densely packed within the winter cluster. It has been suggested for mammals which hibernate in shelters and therefore have limited access to day light, that ambient temperature would be the most appropriate stimulus or zeitgeber for timing of emergence after hibernation (Davis, 1977;Körtner & Geiser, 2000;Michener, 1977;Mrosovsky, 1980;Murie & Harris, 1982). However, in our experiment light regime did alter the response of honey bee colonies during warmer conditions, when winter clusters were probably less dense and workers could leave the cluster. Adult emergence, reproduction and oviposition in the marine midge Clunio marinus is also known to be controlled by two environmental factors that need to occur in unison (Kaiser & Heckel, 2012). Increasing ambient temperature affected brood onset only at constant short-day conditions of 8 h photoperiod, but not in the other two light regimes in which photoperiod was considerably longer (about 12-18 h, depending on light regime) and increasing. These findings are not in line with suggestions that a short photoperiod elicits cannibalization of eggs and hence inhibits brood rearing activity (Cherednikov, 1967;Woyke, 1977). Several studies proposed that, irrespective of current duration of photoperiod, an increase in photoperiod has a positive effect on brood rearing activity while a decrease of photoperiod negatively affects brood rearing (Avitabile, 1978;Kefuss, 1978). The inhibitory effect of photoperiod treatments with increasing photoperiod on brood rearing under warm conditions in our study does not support these findings. However, most of the previous studies that investigated the effect of photoperiod on brood rearing activity either did not investigate brood rearing activity in winter (Fluri & Bogdanov, 1987;Kefuss, 1978) or did not control for other environmental conditions that might have affected brood rearing activity like ambient temperature (Avitabile, 1978;Fluri & Bogdanov, 1987). It was previously shown that brood rearing activity in colonies that were kept at constantly low mean ambient temperature of 6 C were not affected by photoperiod (Harris, 2009). This is in line with our findings, that photoperiod matters only under warm conditions. Fluri & Bogdanov (1987) failed to find an effect of photoperiod under warm conditions, but investigated the effect of artificial shortening of the photoperiod in summer when colonies were already rearing large amounts of brood. Under these circumstances the effect of photoperiod might be reduced (but see Kefuss, 1978). Due to the experimental settings, we cannot disentangle if it was the longer duration of photoperiod or the fact that photoperiod increased that reduced brood rearing under warm conditions. It also remains to be investigated if a decrease of photoperiod during a warm period would affect brood rearing. Our results indicate that photoperiod was used as additional cue and might help to prevent premature brood onset due to spells of warm weather. However, according to our hypothesis, short photoperiod was expected to inhibit brood rearing while increasing photoperiod should have promoted brood rearing activity and not vice versa. This illustrates that further experiments on combined effects of temperature and photoperiod are needed. It is important to note that honey bees show considerable geographical variation with a number of subspecies and locally adapted ecotypes (Meixner et al., 2013). We used A. mellifera carnica as it is one of the most commonly used subspecies in central Europe and of high economical relevance. It is highly productive and expected to increase brood rearing activity relatively fast once conditions seem favorable. To which extent other subspecies and ecotypes might differ in their reaction to environmental cues remains to be investigated.
In addition to photoperiod and ambient temperature also elapsed time affected brood rearing in the honey bee colonies. Brood rearing activity was detected with increasing frequency over time and we observed brood rearing activity in one colony even at constant short-day and cold conditions. This suggests that colonies recommence brood rearing at some point regardless of environmental conditions. It has been shown for mammals that a circannual rhythmicity underlies the timing of hibernation and seasonal torpor, which can be entrained by photoperiod, ambient temperature and foodavailability, but does not rely on these external zeitgebers (Collins & Cameron, 1984;Heldmaier & Steinlechner, 1981;Körtner & Geiser, 2000;Mrosovsky, 1986;Steinlechner, Heldmaier & Becker, 1983;Wang, 1988). Timing of honey bee brood rearing activity might also be controlled by an internal clock. The queen is the only individual of a colony that can live for several years and thus feature a true circannual clock. Previous work has shown that not only egg-laying activity but also the size of queen ovaries changes over the seasons, which might be controlled by an endogenous rhythm (Shehata, Townsend & Shuel, 1981). Potential changes in queen pheromone releases related to an increasing ovary size might then prime the colony's workers for brood caring in late winter as queen pheromones are involved in the regulation of worker tasks (Slessor, Winston & Le Conte, 2005). Another reason for increased probability of brood onset over time might be the build-up of moisture within colonies. It was proposed that the humidity in colonies affects brood rearing activity and brood may serve to bind moisture generated by the metabolic activity of colonies which may otherwise be harmful (Omholt, 1987). Humidity within the different colonies might have varied and was not tracked during the experiment. The availability of resources might be another highly important factor for timing of brood rearing. Colonies that were supplemented with pollen in spring were previously found to start brood rearing earlier in the year (Mattila & Otis, 2006). It has also been shown that the nutritional status of individuals and food-availability can affect the response to environmental cues for timing of hibernation in mammals (Norquay & Willis, 2014;Ruf et al., 1993) and the thermoregulation in Formica-ants (Rosengren et al., 1987).
CONCLUSIONS
We conclude that brood rearing activity in hibernating honey bee colonies is highly sensitive to climatic conditions. Ambient temperature seems to be an important trigger for brood onset, but responses to temperature can be modulated by photoperiod. Climate change and associated more frequent warm weather events during winter (IPCC, 2014) have the potential to disrupt the synchronization between the seasonal timing of brood onset in honey bee colonies and flowering phenology. This can have profound negative consequences for colony fitness. | 7,320.6 | 2018-05-25T00:00:00.000 | [
"Biology"
] |
Understanding structural variability in proteins using protein structural networks
Proteins perform their function by accessing a suitable conformer from the ensemble of available conformations. The conformational diversity of a chosen protein structure can be obtained by experimental methods under different conditions. A key issue is the accurate comparison of different conformations. A gold standard used for such a comparison is the root mean square deviation (RMSD) between the two structures. While extensive refinements of RMSD evaluation at the backbone level are available, a comprehensive framework including the side chain interaction is not well understood. Here we employ protein structure network (PSN) formalism, with the non-covalent interactions of side chain, explicitly treated. The PSNs thus constructed are compared through graph spectral method, which provides a comparison at the local and at the global structural level. In this work, PSNs of multiple crystal conformers of single-chain, single-domain proteins, are subject to pair-wise analysis to examine the dissimilarity in their network topologies and in order to determine the conformational diversity of their native structures. This information is utilized to classify the structural domains of proteins into different categories. It is observed that proteins typically tend to retain structure and interactions at the backbone level. However, some of them also depict variability in either their overall structure or only in their inter-residue connectivity at the sidechain level, or both. Variability of sub-networks based on solvent accessibility and secondary structure is studied. The types of specific interactions are found to contribute differently to structure variability. An ensemble analysis by computing the mathematical variance of edge-weights across multiple conformers provided information on the contribution to overall variability from each edge of the PSN. Interactions that are highly variable are identified and their impact on structure variability has been discussed with the help of a case study. The classification based on the present side-chain network-based studies provides a framework to correlate the structure-function relationships in protein structures.
Introduction
The newly synthesized protein sequences in the cell adopt unique three-dimensional structures (Anfinsen, 1973) to perform their functions. The native structures thus obtained are stabilized by various non-covalent interactions like van der Waals, electrostatic interactions and hydrogen bonds. However, flexibility in its structure allows for it to perform function (Frauenfelder et al., 2007). For instance, a complex function such as open-close motions for transporting ligands across cell membranes or a simple function such as binding of a ligand, require the rearrangement of atomic interactions within the protein structure.
Structures of proteins have been determined using methods like X-ray crystallography, Cryo-EM and NMR in different functional forms. In the case of crystal structures, they are also determined in different crystalline states, different crystallization conditions, etc. Three dimensional coordinates, thus obtained, represent snapshots in various conditions. Even though some of the native conformations of a protein are not crystallisable or have not yet been crystallised, differences observed in the available tertiary structures under various conditions reflect intrinsic flexibility in its overall structure. Depending on the inherent dynamics of the protein, variations in the 3D structure of a protein may be as small as subtle sidechain variation or a very large deviation of the backbone conformation.
The pioneering work of GN Ramachandran (Ramachandran et al., 1963) on the (φ-ψ) map that describes the backbone conformation, has played a key role in our understanding of the protein structure. A refined structure can be generated by providing information on sidechain conformations. With an increase in high resolution protein structural data, rotamer libraries of sidechain conformations have become available for modelling protein structures (Dunbrack, 2002). A recent study of sidechain conformational preferences in monopeptides (Rose, 2019), provides insights akin to Ramachandran (φ-ψ) map of secondary structures. Thus, the allowed and the preferred conformations of the backbone polypeptide chain and that of the connected sidechains are well understood. Another crucial factor required to understand the global topology of protein structures, is the interaction between neighbouring sidechains. The present study focuses on the interactions between spatially proximal sidechains, which provide a global sidechain connectivity map in protein structures.
The alteration of protein structure due to dynamics is characterized by variation of inter-residue interactions. The information of interresidue interactions is vital to understanding protein function and is used in studying protein folding and stability (Gromiha and Selvaraj, 2004;Baker, 2000), homology detection (Bhattacharya et al., 2021), prediction of protein structures (Yang et al., 2020), and several other aspects. From a topological perspective, intra-protein interactions between spatially proximal residues can be represented on a graph using edges where the residues are represented as nodes. The node-edge representation of a protein structure is commonly known as a protein structural network (PSN). PSNs are used to analyse structural organisation in proteins based on topological distance, nature of interactions, solvent accessibility, geometry, charge, energy and many other features (Vijayabaskar and Vishveshwara, 2010;Bhattacharyya et al., 2015). It allows for the survey of non-covalent interactions like hydrogen bonds, ionic, hydrophobic, and van der Waals interactions (Vishveshwara et al., 2002). There are several advantages of using PSNs to gather structure and functional information such as analysing subtle conformational changes due to ligand binding or for identifying communication paths of allosteric effects (Costanzi, 2016;Guarnera et al., 2017;Brinda and Vishveshwara, 2005;Amitai et al., 2004). Aside from intra-protein interactions they can also be used in the investigation of protein-ligand and protein-protein interactions (Taylor, 2013).
Several tools have been developed to compare and quantify the difference between PSNs (Schieber et al., 2017;Faisal et al., 2017;Malod-Dognin and Pr z ulj, 2014). The connectivity information in the form of binary matrix is computationally easy to handle even for large number of comparisons. Hence, the strength of interactions has often been digitised as zero or one based on a selection criterion. Physics based approach such as the percolation transition point is a method used for selecting the optimal edge-weight to make the matrix binary. In the comparison of PSNs, graph-spectra based methods are very useful as they depict the global arrangement of nodes and their connectivity with minimal loss of information (Deb et al., 2009). Several methods have employed the use of graph spectra for the comparison of protein structures Parekh, 2014a, 2014b;Bhattacharyya et al., 2013).
Advancement in algorithms and computing power has led to the development of graph spectral methods to handle weighted matrices. This allows us to analyse PSNs using graph spectral features, incorporating the edge differences at the local level and the differences in modes of clustering at the global level. One such approach is the comparison of networks using the network similarity score (NSS) , which also serves to quantify the dissimilarity between a pair of PSNs. NSS can provide the alterations in spatial proximity between sequentially non-adjacent residues along with any alterations in the clustering of residues at the global (tertiary structure) as well as at the local level (sidechain interactions), making it robust and sensitive to minute changes between the compared networks. Thus, NSS is sensitive to minute changes in networks, making it a robust method to perform quantitative comparison between near-similar protein structures. The protocol has been earlier employed in the validation of protein structure models and for protein structure comparison (Gadiyaram et al., 2019). We have extensively made use of this method in the current work for studying dissimilarity between structural networks of proteins and have termed the measure as the network dissimilarity score (NDS).
The main focus of this work is to characterise the extent of diversity in structures of a protein (or its ligand-complex) under varying conditions obtained from multiple crystal conformers, using the network formalism. We analyse inter-residue interactions within each protein by employing PSN that are constructed from coordinates of all non-hydrogen atoms from multiple crystal structures available for a given protein. The deviation in backbone is measured using the conventional root mean square deviation (RMSD) and changes in structural network are studied using the NDS. It is known that the 3D structure of some proteins may have several regions that are rigid while other regions, generally relating to their function, may show mobility (Burra et al., 2009). Likewise, in the analysis of protein structures that are independent of external interactions we observe that the nature of structural variability can range from strongly rigid behaviour to being highly dynamic and undergoing large conformational changes. These structure variations within the protein have been studied, since they are also determinant factors of their function. Based on these studies, we have categorised the protein structures into several groups and have discussed their implications. The methodology is described in the next section and the Results and Discussion are presented in section 3 and 4 respectively.
Multiple crystal structures of single domain monomeric proteins
The dataset is assembled by collecting all full-length structures of single chain protein structures from the protein data bank (PDB) (Berman et al., 2000) that are obtained using X-ray diffraction. A selection criterion of better than 3 Å resolution with Rfree and Rwork better than 30% and 25% respectively is considered. Any chain with more than a single domain (as defined on SCOPe) (Fox et al., 2014) , (Chandonia et al., 2019) is not included. All structures with missing residue information or mutations in the non-terminal regions of the sequence are removed. An adequate number of structures for each protein is necessary to study structural variability. A threshold of five PDB entries for a protein is included for further analysis. The dataset assembled consists of 913 PDB entries of 56 proteins, with the number of crystal conformers for each protein ranging from six to fifty-nine. Supplementary Table 1 lists the details of all proteins in the dataset.
Protein structural network construction
The protein structural network (PSN) of a crystal conformer is constructed from the 3D structure coordinates retrieved from the protein data bank (PDB). The amino acid residues in the structure are considered as nodes and undirected weighted edges are drawn between each pair of interacting residues based on the strength of their interactions. The edgeweight between nodes in the PSN is equivalent to the fraction of the number of contacts made by proximal atoms (between ith and jth residues of the given protein), with respect to the maximum number of such contacts that are found between the pair of corresponding amino acids from the entire dataset. Such a ratio translates the interaction strength between the two connected residues into edge-weight between the two corresponding nodes. A proximity-based measure of edge-weight (I ij ) between any two sequentially non-adjacent residues i and j is computed using Equation (1) (Bhattacharyya et al., 2015). where, any pair of atoms from non-adjacent residues that are within a distance of 4.5 Å are considered to be proximal atoms. The highest possible number of proximal atoms between all amino acid types is determined from PSN of all structures in the dataset. The edge-weights obtained for the structural network are stored as an adjacency matrix, which are further used for network comparison. The network images illustrated in this work are drawn on PyMOL (DeLano and Pymol, 2002) using protein cartoon diagrams.
Structure and network comparison
Multiple structures of a protein are subject to pairwise comparison with respect to the backbone structure and their all-atom network. The most common method used for the comparison of protein structures involves measuring the structural divergence between two superimposed atomic coordinates commonly known as the root mean square deviation (RMSD). In this work, the structural divergence between a pair of conformers is calculated as the root mean square deviation of C-alpha atoms, computed using the TM-align tool (Zhang and Skolnick, 2005). The pairwise comparisons result in quantifying the divergence between the backbone conformation of all structures of a protein.
It should be noted that the method of calculating RMSD has several limitations (Li et al., 2020). For instance, in a pair of close to identical structures that vary only in a small random coil or turn regions or a single flexible terminus, the structural comparison can result in a large RMSD. Likewise, a small alteration in the core of the structure or the inter-domain region may significantly impact the resulting structure deviation more greatly than the deviation in loops or terminus. Constant efforts are being made to address these limitations (Kufareva and Abagyan, 2012). On the other hand, graph spectra-based methods that include sidechain orientations consider the change in interactions and global connectivity. This involves the aspect of clustering of nodes and quantifying a match between the clusters. A graph spectra-based network comparison tool termed network dissimilarity score (NDS), mentioned in the introduction section, is employed in this work. This method also quantifies the dissimilarity in the local and global node clustering between a pair of networks. Node clustering represents grouping of nodes with respect to the edges present in the network. Nodes in each cluster (or group) are more connected among themselves compared to that with nodes of other clusters. Changes in local node clusters take place according to changes in edge-weight. In other words, residue grouping changes locally with respect to changes in strength of interactions between residues. These changes in local residue clusters result in overall structural change, which is referred as global clustering change.
An in-house python program is used to compute the NDS between any pair of PSN. The score is calculated by obtaining its three components -EDS, EWCS and CRS. The edge difference score (EDS) directly calculates the difference in their edge weights. The correspondence score (CRS) and eigen value weighted cosine scores (EWCS) are calculated using the spectra of their networks, capturing local and global clustering changes of residues in the PSN respectively. Using these components, NDS is formulated as in Equation (2).
Equation 2 The NDS can range from a value of zero, that indicates absolute identical networks, to a value of ffiffiffi 3 p that indicates dissimilarity to the extent of no match between the networks. More details regarding the significance of the components of NDS can be referred from Gadiyaram et al. (2017) 2.4. Solvent accessibility and secondary structure based sub-networks A network that contains a subset of the nodes and edges of the original network makes a sub-network. All PSN are decomposed into subnetworks by choosing specific nodes and edges based on a given criteria. Two types of sub-networks are defined, the first decomposition is based on solvent accessibility of residues and the other is based on secondary structures.
Sub-networks based on solvent accessibility: naccess tool (Hubbard and Thornton, 1993) is used to compute solvent accessibility of residues in the protein structure. A relative accessible surface area (RSA) threshold of 7% is used in recognising solvent-accessible residues. Residues with RSA lower than 7% are considered as buried in the protein structure. Fig. 1 shows the three sub-networks that are derived for each conformer. E-E: A sub-network with exposed residues as nodes and edges among themselves (solvent-accessible sub-network). B-B: A sub-network with buried residues as nodes and edges among themselves (solvent in-accessible sub-network). B-E: A sub-network that is of bipartite nature such that only edges that connect buried and exposed residues are included. The NDS of sub-networks for all pairs of conformers of a given protein are computed and analysed.
Sub-networks based on secondary structure: stride tool (Heinig et al., 2004) is used to assign secondary structures to each residue. Residues that form secondary structures such as helix and strands are considered as ordered residues and the remaining are considered as non-ordered residues. The PSN is decomposed into three sub-networks, similar to sub-networks based on solvent accessibility. O-O: A sub-network of edges between nodes of ordered residues, N-N: A sub-network of edges between nodes of non-ordered residues and O-N: A sub-network of only edges between nodes of an ordered and non-ordered residue. All pairs of sub-networks of a protein are subject to NDS analysis.
Variance of edge weights across an ensemble
The multiple crystal conformers of a protein constitute an ensemble. The mathematical variance in edge-weights describe variation of the spatial proximity and inter-connectivity of corresponding residues with respect to other residues across the ensemble. The edge-weight variance ðEV ij Þ of each edge is calculated using Equation (3) (analogous to EW-MSF that is discussed in an earlier paper) .
where, I n is edge-weight of an edge between residues i and j across the ensemble of N structures and μ is the mean of edge-weights across the N structures. The edge-weight variance, thus obtained from Equation (3), quantifies the mean and the fluctuations of all the edges in the network. This metric is used to identify the most variable interactions within the PSN, which points to the regions of protein that show higher variability in network topology.
Results
In this section, we present our results on the comparison of different conformers of chosen proteins using the conventional parameter RMSD and the parameter network dissimilarity score (NDS) obtained from the global analysis of protein structure networks (PSN). Here, we provide a classification scheme for protein structural domains, based on these analyses.
Crystal conformers of a protein present their native conformational states which have been used for the analysis of their structural variability. A dataset of 56 proteins is assembled with more than five crystal structures for each protein (913 PDB entries) that are of a resolution better than 3 Å. PSN for all the crystal conformers are constructed. All pairs of conformers of a protein are subjected to pairwise structural network comparison resulting in 12,251 network dissimilarity scores (NDS). Similarly, Cα-atom root mean square deviation (RMSD) is computed to obtain 12,251 pairwise backbone-structure comparisons. All the computed structure and network comparison scores are listed in Supplementary Table 2 (provided separately).
Structural diversity in individual domain proteins
A scatter plot of the comparison scores is illustrated in Fig. 2 where the data points correspond to RMSD on x-axis and NDS on y-axis. Examining the plot, one is provided an understanding of the extent of structural variability and diversity of conformers among individual domain proteins. Mean RMSD for the dataset is found to be 0.34 Å with a standard deviation of 0.3 Å. Mean NDS is 0.113 with a standard deviation of 0.048, as shown in Fig. 2. From the data presented on the plot of backbone and network comparison scores, a curve of best fit with maximum R-square (R 2 ) is plotted. A power equation is found to best fit the data inferring that no linear relationship exists between the RMSD and NDS. The Pearson correlation between the data is found to be 0.59, A schematic scatter plot that is used to study conformational diversity, where structure deviation information (backbone RMSD) is plot on the x-axis and network dissimilarity (All-atom NDS) is plot on the y-axis, such that the structure variability of a protein would be characterised based on the predominance of datapoints that are localised to specific area designated to each of the categories. (B) A flowchart of criteria that is used to characterise the structure variability and categorise proteins in the dataset. that supports the understanding that there is no strong linear relationship shared between the scores. It should be noted that in the scenario where we observe low RMSD scores, NDS is found to vary between a range of small to large values. This implies that, even though there is not much change in the backbone structure of a protein, the variation in sidechain interactions impart a large change into its structural network. However, the converse is not always true. As RMSD increases, NDS has only higher values. This is because variation in backbone will inevitably bring changes to the underlying sidechain interactions.
Characterisation of protein structure variability
The scatter plot is partitioned into four quadrants based on the statistical average that is computed for the entire dataset. The third quadrant, where both RMSD and NDS are lower, corresponds to the comparison of conformers that are highly superposable in terms of backbone and sidechain. Contrarily, both the scores are higher in the first quadrant. If the compared conformers have preserved sidechain interactions but vary only in backbone atom positions, i.e., RMSD is high and NDS is low, the points fall into the fourth quadrant. Also, when the backbone is preserved and there is variation only in sidechain interactions the points fall in the second quadrant.
The data points (scatter) corresponding to individual proteins are analysed. Data points in the third quadrant correspond to conformers with a preserved network and structure and the protein is known to be of rigid type. Likewise, when the scatter spread across other quadrants on the plot, they are classified as non-rigid proteins. Using the scatter from all conformers of each protein, they are grouped into five categories based on the nature of their structural variability: 1. Rigid (R) 2. Preserved network, with variation in backbone (N) 3. Variable network, with preserved backbone (B) 4. Flexible in backbone and network (F) 5. Mixed (M) Fig. 3 shows the area on the scatter plot designated for each of the categories. A protein is categorised based on whether more than 60% of its scatter falls within a specific area on the plot that is designated to each of the categories. Also, none of the data points should lie outside a permissible area (The permissible area for each category is discussed along with examples later in this section). Supplementary Fig. 1 shows the percentage of datapoints with lesser than mean NDS and RMSD of the entire dataset. Sixteen proteins are found to have more than or equal to 60% data points in the rigid area (lower than the mean of dataset). However, not all of these proteins satisfy the 'permissible extremity' criteria defined for rigid proteins, i.e., all comparison scores of the protein should be lower than the sum of mean and standard deviation of the dataset. Similar criteria are used in the segregation of proteins into each of the categories. The criteria for classifying a protein into each of the categories are discussed in detail with the help of examples.
Rigid category: In the discussed scatter plot for each protein, if more than 60% of the scatter lies in the rigid area and none of the data points regress with comparison scores greater than the sum of the mean and standard deviation of the entire dataset, the protein is categorised as a rigid protein. For example, the individual plot of Lysozyme C is shown in Supplementary Figure 2 (A). The comparison of all nine crystal structures of this protein is found to have 94.44% of the scatter in the rigid area and all points within the permissible extremity. The proteins of this category are rigid in nature with conformations of well-preserved backbone and side chain interactions. Listed in Supplementary Table 3 are ten proteins from the dataset that have been categorised as rigid along with the information of mean and standard deviation from their respective distribution of comparison scores. Nearly all proteins have datapoints in this rigid area of the plot which correspond to low conformational variations.
Supplementary Fig. 1 shows a bar plot of the percentage of data that lie in the rigid area.
Preserved network category: If the interactions (mostly sidechain) in a protein are preserved even when the backbone shows divergence, they are categorised as proteins with preserved network. In the individual scatter plot of such a protein, excluding data points in the rigid area, more than 60% of the scatter lie in the preserved network area that lies on the bottom right of the plot. Also, none of the network comparisons have NDS greater than 0.181 (sum of mean and standard deviation in NDS of the dataset). The nature of structural variability in the protein is a flexible backbone with a preserved network. Four proteins from the dataset, listed in Supplementary Table 4, fall under this category. It should be noted that the backbone deviation in all these four proteins is not significantly high. This may be since the dataset consists of only singledomain proteins. It is possible for a non-single-domain and monomeric protein to have large structural backbone deviation (domain movement) even when the network is well preserved. Such a scenario has been discussed in Ghosh et al. (2017) (refer to Figure 14 of the citation) Supplementary Figure 2 (B) shows the plot of N-acetyltransferase domain-containing protein obtained from their 18 crystal structures as an example. Excluding the data points of this protein in the rigid area, 100% of the scatter is in the preserved network area. All the data points lie within the permissible extremity and hence this protein is of the preserved network category.
Variable network category: On the individual scatter plot for each protein, excluding the data points from the rigid area, if more than 60% of the scatter lie in the variable network area and none of the structure comparisons have RMSD greater than 0.64 Å (sum of the mean and standard deviation of the entire dataset), the nature of structural variability of these proteins is of a flexible network with a preserved backbone. Eight proteins from the dataset fall under this category which have been summarised in the Supplementary Table 5. In Supplementary Figure 2 (C) we show the individual plot of Prolyl endopeptidase obtained from its twelve crystal structures as an example. Excluding the data points of this protein in the rigid area, 84.62% of the scatter lies in the variable network area. All the points in the plot lie within the permissible extremity and hence the protein is of variable network category.
Flexible category: In individual scatter plots, after excluding the data points in the rigid area, if more than 60% of the scatter from a protein has NDS and RMSD greater than the mean of the dataset then the nature of structural variability of these proteins is flexible. Twenty-one proteins from the dataset are found to be flexible and are detailed on Supplementary Table 5. The individual plot of Casein Kinase II (α-subunit) obtained from the eighteen crystal structures is shown as an example in Supplementary Figure 2 (D). Excluding the data points of this protein in the rigid area, 83.55% of the scatter of this protein is found in the flexible area of the plot. Hence this protein is categorised as a flexible protein.
Mixed category: Proteins that do not fall into any of the above categories are grouped as mixed category of proteins and the nature of structural variability of the mixed category of proteins is of the non-rigid type. The remaining 13 uncategorised proteins are classified as mixed category and are listed in Supplementary Table 6. Certain proteins that have data points beyond the permissible extremity of comparison scores make it into this category. From the individual plot of Myoglobin (Equus caballus shown in Supplementary Figure 2 (E)) it is observed that a lot of the datapoints lie in the preserved network area, however a single datapoint is found to have a significantly higher NDS score, hence this protein is categorised as mixed. Some of the proteins are classified here since one or more conformations diverge drastically from the existing space of conformations and hence the scatter of such a protein do not confine to a specific area in the plot which complicate classifying the protein into any specific category. Similarly, in the case of methionine aminopeptidase (shown in Supplementary Figure 2 (F)) it is discernible than most of the datapoints lie in the variable network area. However, a cluster of data points that depicts backbone structure deviation greater V.M. Prabantu et al. Current Research in Structural Biology 4 (2022) 134-145 than the mean and standard deviation of the dataset is observed. If a protein exists in more than one structurally deviant conformational state then the data points corresponding to the comparison appear as more than one cluster on the scatter plot of the individual protein. This is recognised in many non-rigid type proteins and is discernible from the scatter plots shown in Supplementary Fig. 2 (E &F). For example, the myoglobin protein is known to exist in an oxy and deoxy state. In the individual scatter plot of Sperm whale myoglobin (Physeter Macrocephalus), distinct clusters are observed as shown in Fig. 4 (A). In the cluster with high comparison scores (illustrated in Fig. 4 (A) using coloured boxes) the compared conformers have diverse topologies and come from different conformational states. For instance, comparing the pair of crystal structures of PDB ID: 4PNJ (deoxy state) and PDB ID: 2Z6S (oxy state) that share a NDS of 0.143 and RMSD of 0.46 Å. Likewise, in the comparison of different conformational states of the protein, it is observed that the scatter is spread across different clusters as shown in Fig. 4.
Sub-network analysis
The nature of a residue such as solvent accessibility and secondary structure are frequently conserved during evolution in order to preserve the tertiary structure of the protein and retain its function (Sitbon and Pietrokovski, 2007). The influence of these two parameters on the overall structural variability is presented in this section. To perform this study, the all-atom PSN is decomposed into sub-networks that consist of only specific elements of the network as shown in Fig. 1. The sub-networks with subsets of nodes and edges based on solvent accessibility or secondary structure are analysed.
Using the solvent accessibility information of residues in a protein structure, the all-atom PSN is decomposed into three sub-networks as detailed in the methods section (Section 2.4). The three sub-networks (B-B, E-E and B-E) are subject to pairwise network comparison. All 12,251 pairs of multiple conformers with identical protein sequence from the dataset are compared and the sub-network NDS is computed. The results of sub-network NDS obtained for each protein are illustrated using a boxplot as shown in Supplementary Fig. 3 and the scores are available in Supplementary Table 2. Fig. 5 shows the average NDS of various subnetwork comparisons from each protein. It is found that, in almost all protein instances, the sub-network of buried residues (B-B) is more strongly retained than the sub-network of exposed residues (E-E). This is fairly conventional in understanding how the buried residues and the The cartoon diagram of crystal conformer of deoxy conformational state of sperm whale myoglobin that is superposed with the oxy conformational state, where the protein cofactor (HEME) is bound to oxygen is illustrated. The reference deoxy conformational state that is superposed with the protein bound to HEME-CYN (C) and a non-HEME bound state (D) are also illustrated. (E) A table containing information of the comparison scores for the superposed structures and the bound ligands are shown. V.M. Prabantu et al. Current Research in Structural Biology 4 (2022) 134-145 connections amongst themselves are well retained whereas the solvent exposed residues have higher variation amongst their connections. This shows that the mobility of exposed residues contributes to the overall protein structural variability more than the buried residues. Also, it is observed that B-E sub-networks are found to be the most variable. Higher NDS is observed in B-E sub-networks than in E-E sub-networks in all the proteins.
In a similar kind of analysis, information of secondary structures is used in constructing sub-networks. The residues of a protein are distinguished as ordered and non-ordered residues based on whether they form Fig. 5. Trace of average NDS from the compared sub-networks, obtained based on solvent accessibility information. The sub-network that captures edges between nodes of only buried residues (B-B) has lower sub-network NDS than the sub-networks that capture edges between nodes of only exposed residues (E-E) and edges between a buried and an exposed residue (B-E). An exception is that of the Rubredoxin protein where it is found that the sub-network of exposed residues (E-E) is well retained. Fig. 6. Trace of average NDS, comparing the individual sets of network comparison scores from different kinds of sub-networks that are based on secondary structure information. It is predominantly observed that the sub-network of ordered residues (O-O) is better retained than the sub-network of non-ordered residues (N-N), except in five proteins which are Leucoterine A-4 hydrolase, Quinolate synthase A, S-hydroxynitrile lyase, NADH-cytochrome b5 reductase 3 and rRNA N-glycosidase where they have better retained sub-network of non-ordered residues (N-N). Also, in four other proteins, the sub-network of ordered and non-ordered residues (N-O) is better preserved than sub-network within secondary structures (O-O) which are Heart fatty acid binding protein, Myoglobin (Physeter macrocephalus), Peptidyl-tRNA hydrolase and Glutaredoxin. V.M. Prabantu et al. Current Research in Structural Biology 4 (2022) 134-145 ordered secondary structures such as helices and sheets or if they make up the non-ordered secondary structures such as coils and turns. Three sub-networks (O-O, N-N and O-N) are generated for each conformer as described in the methods section (Section 2.4). All pairs of sub-networks are compared and the scores are plot on boxplots as shown in Supplementary Fig. 4 and are available in Supplementary Table 2. Fig. 6 shows the average NDS of sub-networks in each protein. In most proteins the sub-networks of non-ordered residues (N-N) have higher dissimilarity than the sub-networks of ordered residues (O-O). This implies that the interactions between non-ordered residues, which are known to be more flexible than ordered residues, have greater variability than the interactions between ordered residues that make up helices and sheets. It is also interesting to note that the O-N sub-network exhibits higher dissimilarity than N-N sub-network in many cases.
Edge-weight variance in protein ensembles
The conservation of structural interactions within the protein structure is essential for maintaining its function. Consequently, perturbations that alter the intra-connectivity of amino acids can modify the stability (Worth et al., 2011;Pandurangan et al., 2017) or function of the protein (Frauenfelder et al., 2007;Redfern et al., 2008). The spatial proximity between residues in 3D structure of the protein describes the intra-connectivity of residues that are captured using edges in the PSN. Given an ensemble of PSN, variation in proximity of residues is studied by using variance in network edge-weight parameter which is discussed in detail in the methods section. Variance of the edge parameters in every protein of the dataset is computed to yield an edge variance profile. Fig. 7 shows the edges with very high variance in the discussed examples for each category. The coloured edges have a variance greater than three times the standard deviation of the variance recorded in all edges of the given protein. In descending order of the recorded variance, the top five edges are coloured in red, the next ten are coloured in yellow and all the remaining are coloured in blue. The number of such highly variable edges is lower in proteins of rigid and preserved network category whereas they are higher in proteins of network variable and flexible category.
The method has been discussed with Camphor 5-monooxygenase (Cytochrome P450 protein) as a case study. The data points corresponding to pairwise comparison of all pairs of nineteen crystal structures of Camphor 5-monooxygenase obtained from Pseudomonas putida are illustrated on a scatter plot shown in Fig. 8 (A). Since, more than 60% of the scatter of this protein has NDS and RMSD greater than the mean of the dataset, the structural variability of this non-rigid type protein is grouped under the flexible category. In Fig. 8 (A), which shows the individual plot, distinct clusters of the scatter are observed. The first cluster having low RMSD has a diversity of network dissimilarity scores which shows that the structural network of sidechain interactions is variable. The second cluster of data points, with higher NDS and RMSD than the initial cluster, correspond to the comparison of altogether different conformations. In order to identify specific residues and regions of the protein that contribute to such variability we study their edge variance profile. Fig. 8 (B) shows a cartoon diagram of the structure of camphor 5monoxygenase where edges of the PSN are coloured based on the edge-weight variance across the ensemble. Residues that are highly variable are identified by arranging the residues in descending order of their edge-weight variance. A list of the eleven most variable interactions (edges of PSN) and their details are shown on the table in Fig. 8 (C). Few of the most variable edges are observed on the C-terminal region that interacts with the core of the protein (shown in Fig. 8 (B) depicted using orange colour). It is inferred from the case that polar residues that are predominantly exposed in the structure have a higher probability of making an edge that is more variable. By performing such an analysis, we make use of the variance profile as a tool to recognise nodes in the PSN that have a greater influence on their overall variability. It will be Fig. 7. Edge-weight variance profile can describe network variations. Edges of the PSN whose variance in edge-weight is greater than three times their standard deviation are shown in different colours. In descending order of their numerical variance, the top five edges are shown in red, the next ten edges are shown in yellow and the remaining edges, if any are shown in blue. The edge variance profile of (A) Lysozyme C (B) N-acetyltransferase domaincontaining protein, (C) Prolyl endopeptidase, (D) Casein Kinase II (α-subunit) proteins are shown here. V.M. Prabantu et al. Current Research in Structural Biology 4 (2022) 134-145 interesting to analyse the functional relevance of such network variability in our future work.
Discussion
The flexibility in protein structure enables them to bind to a wide variety of molecules and undergo conformational alterations, to perform its function in living cells. The extent of deviation changes from protein to protein. Native states of proteins that are captured using structure determination methods, such as X-ray diffraction, pave the way in understanding its conformation diversity. The flexibility in atom positions across several conformations of a protein constitutes its structural variability.
The nature of structural variability may vary depending on protein function even when the fold is conserved. Hence, the structural variability of a given protein can be utilized as a metric to correlate with protein structure-function relationship. In this study, proteins have been characterised as rigid or non-rigid based on how diverse their conformers are in terms of their topologies. The mean value along with standard deviation information from the entire dataset is employed in formulating a criterion to segregate the proteins. Some of the salient features emerging from our analyses are presented below.
Proteins belonging to the family of protein kinase, catalytic subunits (SCOPe family: d.144.1.7) which are Cyclin dependent kinase 2, Casein kinase-II and Mitogen activated protein kinase are all grouped as flexible proteins. Orthologs of Myoglobin and dihydrofolate reductase are predominantly characterised as flexible although few are in the ungrouped mixed category. Mycocyclosin synthase has network variation with preserved backbone structure, while its homologs NADP nitrous oxide-forming nitric While the Heart fatty acid binding protein is found to be rigid, Retinol binding protein is flexible. The Liver fatty acid binding protein which is categorised as mixed is observed to have strong network variation. On the other hand, Adipocyte fatty acid binding protein has limited network variation even when there is backbone structure deviation.
It will be interesting to follow a similar protocol in identifying the variability across proteins with difference in sequence such as homologs.
The PSN comparison method used here is shown to be effective in capturing the variation of the overall network. What has not been discussed in detail is how this method is also effective in capturing minute differences in the network such as a change in local or global clustering. The method has been discussed in detail with the help of examples before . We revisit the method by analysing the Fiedler vectors (eigen vector corresponding to second smallest eigen value) between the oxy (PDB ID: 2Z6S) and de-oxy (PDB ID: 4PNJ) states of myoglobin. The highest absolute difference between their Fiedler vectors also correspond to the nodes in the PSN with the most change in local clustering. We identify these residues in the structure of myoglobin protein and find that they are in the vicinity of the HEME cofactor that is known to undergo a structural change between the oxy and deoxy states as shown in Fig. 9.
The analysis of sub-networks based on solvent accessibility show that, in all proteins the sub-network of buried residues (B-B) sub-network, forming the core of the protein is well preserved. It is interesting to note that the average NDS is much higher in the connections between buried and exposed residues (B-E) than in connections between exposed residues (E-E) alone. This may imply that the displacements in exposed residue pairs are more associative than the displacements between buried-exposed connections. Likewise, in the analysis of sub-networks based on secondary structure information, ordered residue connections are found to be substantially well preserved (in forty-seven proteins) as expected. However, the N-N sub-networks and the O-N sub-networks are better preserved than O-O sub-networks in five and four different proteins respectively and the corresponding proteins are listed in the legend for Fig. 6. This implies that in certain scenarios the network of connections between random coils and turns are better preserved than within secondary structures (helices and sheets). The characterization of the variability metric is likely to provide greater insights in complex situations like multidomain that include domain-domain interactions and also across homologs.
Analogous to the edge weight mean square fluctuations (EW-MSF) discussed in reference Ghosh et al. , here we have discussed the edge-weight variance procedure in the context of crystal structure ensembles and their variability. The information of the variance, points to the mobility of nodes in the multiple PSN, in other words, pointing to variability of residue positions in the native structures. Moreover, it helps in understanding the variability among different regions of the same protein. Thus, in addition to quality check programs like PROCHEK (Laskowski et al., 1993), the incorporation of features related to the side-chain conformational preferences (Rose, 2019) and the currently described variability metric, elucidating the dynamics of side-chain interactions, can contribute towards the accuracy enhancement of side chain modelling. The modelled sidechain information can improve the accuracy of CASP and other protein structure prediction methods (Jumper et al., 2021;Leman et al., 2020) that rely on available protein structure information. Cartoon diagram of myoglobin protein structure highlighting top 10 residues with highest absolute difference between Fiedler vectors of 4PNJ (de-oxy state) vs 2Z6S (oxy state). These residues are shown as red sticks.
Conclusions
The advantages of studying global and local connectivity within protein structures using graph spectral methods have been exploited in our analysis of structural variations in monomeric proteins. Ensembles of multiple crystal structures of 56 proteins are collected in a dataset for the analysis of structural variability by employing protein structural networks. The conformational diversity is described from pairwise comparison of their backbone structure and network topology that is used to group them into categories based on nature of their structural variability. Most of the proteins in the dataset are categorised as either rigid or flexible conformations. Furthermore, in certain proteins it is observed that the network of edges (mostly sidechain) may be variable even when the backbone positions are preserved, and the vice versa. This is an advantage using a method such as network dissimilarity to study sidechain connectivity rather than just by looking at the backbone structure deviation to understand diversity in protein conformations.
Sub-network analysis reveals that connections of non-ordered secondary structures and solvent exposed residues depict high dissimilarity in inter-residue interactions thus imparting less rigidity to the structure. In a case study, it is seen that edges made with polar residues that are predominantly exposed show greater variability than their counterparts. Such an analysis can also be used as a basis for understanding the variability brought about by external perturbations that may influence the structure and dynamics of a protein.
Funding statement
VG is supported by CSIR-RA fellowship. Research from NS group is supported by the following agencies or programs of the Government of India: DBT-COE, Ministry of Human Resource Development, DST-FIST, UGC Center for Advanced Study, Bioinformatics and Computational Biology centre support from DBT and the DBT-IISc Partnership Program. NS is a J.C. Bose National Fellow. SV is an Honorary Scientist of NASI (National Academy of Sciences, Allahabad, India).
Author statement
NS conceived the concept and idea for the work SV conceived the concept and idea for the work. VMP took care of all the calculations VG took care of all the calculations. All the authors were involved in formulating the Manuscript.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 10,388 | 2022-04-01T00:00:00.000 | [
"Biology",
"Chemistry",
"Computer Science"
] |
Electrolyte and Additive Engineering for Zn Anode Interfacial Regulation in Aqueous Zinc Batteries
Aqueous Zn‐metal batteries (AZMBs) have gained great interest due to their low cost, eco‐friendliness, and inherent safety, which serve as a promising complement to the existing metal‐based batteries, e.g., lithium‐metal batteries and sodium‐metal batteries. Although the utilization of aqueous electrolytes and Zn metal anode in AZMBs ensures their improved safety over other metal batteries meanwhile guaranteeing their decent energy density at the cell level, plenty of challenges involved with metallic Zn anode still await to be addressed, including dendrite growth, hydrogen evolution reaction, and zinc corrosion and passivation. In the past years, several attempts have been adopted to address these problems, among which engineering the aqueous electrolytes and additives is regarded as a facile and promising approach. In this review, a comprehensive summary of aqueous electrolytes and electrolyte additives will be given based on the recent literature, aiming at providing a fundamental understanding of the challenges associated with the metallic Zn anode in aqueous electrolytes, meanwhile offering a guideline for the electrolytes and additives engineering strategies toward stable AZMBs in the future.
Introduction
Over the past few decades, the heavy consumption of fossil fuels for both daily use and industrial production has resulted in much environmental damages, while the increasing global population are still constantly raising the energy demands.It is DOI: 10.1002/smtd.202300268urgent to develop clean energy resources, such as wind power, [1] solar power, [2] tidal power, [3] as supplements or even replacements for the conventional energy resource from fossil fuels.However, the intermittent nature of these clean energy resources makes them difficult to be integrated into the power grid.Thereby, energy storage systems become the key technology to manage the power generation and electricity demands, which help to achieve low-carbon and sustainable development of human society.Secondary batteries are the most prominent in this regard and many secondary battery systems have emerged in the past decade, among which lithiumion batteries (LIBs), [4][5][6][7] sodium-ion batteries (SIBs), [8][9][10][11] and zinc-ion batteries (ZIBs) [12][13][14][15][16] are the most popular ones.Although the former two are studied more commonly, especially LIBs, regarding their relatively sophisticated technology and dominance in electric vehicle and consumer electronics markets, they still have a few drawbacks, e.g., the limited availability and expensive cost of lithium resources, the comparably low energy density of SIBs, and the toxicity of organic liquid electrolytes used in both LIBs and SIBs.27] Aqueous Zn-metal batteries (AZMBs), with aqueous electrolytes and metallic zinc anode, exhibit several inherent advantages over LIBs and SIBs, that the aqueous batteries manifest intrinsic safety, [16,[28][29][30][31][32][33] high ionic conductivity compared with organic electrolyte or gel electrolyte, [34] low cost, [35][36][37][38][39] facile manufacturing process, [28,31,[40][41][42][43] and non-toxicity. [16,32,44,45]Metallic Zn possess high volumetric and gravimetric theoretical capacities (5854 mAh cm −3 or 820 mAh g −1 ), [46,47] low redox potential (−0.76 V vs standard hydrogen electrode (SHE)), [48] and high overpotential for hydrogen evolution, [49,50] rendering it a preferable anode material superior to the majority of intercalation-type materials in aqueous electrolytes.Kang et al. first established the concept of zinc-ion or zinc-metal batteries in 2011, by revealing the reaction mechanism of zinc ions intercalating in MnO 2 in a mild aqueous electrolyte. [51]However, the practical application of AZMBs is still largely hampered by the interfacial issues involved with aqueous electrolytes and metal anode employed, including dendrite growth, [52] hydrogen evolution reaction (HER), [53] and zinc corrosion or passivation. [38,53]As in other metal-based batteries, the unceasing cycling of Zn plating and stripping results in unavoidable formation of dendrites in aqueous electrolytes.Although the Zn dendrites in AZMBs may not cause safety hazards of fire or explosion as those in LIBs or SIBs with organic electrolytes, their uncontrollable growth can still accelerate the capacity fading and shorten the battery life. [54]Unfortunately, the negative effects of these interfacial issues are mutually entangled and their synergetic effects may further exaggerate the problems. [55,56]hile the dendrites formed at the anode surface provide an expanded interface for HER, [55] HER tends to change the concentration of OH − near the zinc anode and facilitate their reaction, thus producing inert corrosion byproducts. [57]In return, the byproducts of corrosion and HER can induce nonuniform local electric field and concentration gradient on the coarse deposition surface, creating more zinc dendrites nucleation sites. [58]here have been a number of methods developed to address these interfacial issues involved in aqueous zinc metal batteries, such as coating a protection layer, [59][60][61][62] constructing a threedimensional (3D) [63][64][65] or 2D anode structure, [66,67] and introducing zinc alloy anode, [65] etc.However, the large-scale applications of these approaches are still limited despite their proved effectiveness in improving the interfacial stability between Zn anode and aqueous electrolytes and reversibility of metallic Zn.Instead, the engineering of aqueous electrolytes is a simple and affordable method to enhance the cycling stability of AZMBs.This strategy can be achieved by designing the electrolyte formulation and/or adding electrolyte additives, where the former is determined by the selection of zinc salts and solvent (water) concentration, while the latter depends on the modified interactions between solvents, anode surface, and Zn 2+ ions.Impressively, many cheap and environmental-friendly aqueous electrolytes and additives have been developed, further extending the intrinsic advantages of AZMBs.Herein, a comprehensive summary of the aqueous electrolyte engineering strategies will be given based on AZMBs, including the mechanisms behind electrolyte/anode interfacial issues, aqueous electrolyte formation, and electrolyte additives de- veloped so far.This article manages to provide a profound insight into the aqueous electrolyte engineering strategies and guide the future research directions of AZMBs.
Configuration and Working Principles of AZMBs
As other secondary batteries, an AZMB is composed of four essential components, namely electrodes, an electrolyte, a separator, and current collectors at each electrode of the battery that connects to the terminals of the cell, illustrated by Figure 1.The electrodes, including cathode and anode, host zinc ions and provide capacity for the battery cell, where Zn 2+ ions from Zn anode repeatedly intercalate/deintercalate in the cathode materials during the discharging/charging process.The typically used cathode materials contain a variety of compounds, including vanadium-based, [68][69][70] manganese-based, [71][72][73][74] prussianblue-based, [75] carbon-based, [76] and organic-based, [77][78][79] while the anode material in AZMBs is metallic Zn.The separator is usually a porous polymetric film that permits the pass-through of Zn 2+ but blocks the exchange of electrons in the internal circuit.The electrolyte in AZMBs acts as a conductive medium allowing Zn 2+ shuttling between electrodes.This component presents the most obvious difference between Zn-metal batteries and other metal-based batteries, that aqueous electrolytes are commonly chosen in AZMBs in contrast to the organic liquids used in LIBs or SIBs.
In 1986, it was the first time that Yamamoto's group introduced the mildly acidic ZnSO 4 aqueous electrolyte and the MnO 2based aqueous zinc battery exhibited decent cycle stability, [80] where the Zn 2+ ions from zinc anode combined with water molecules in aqueous electrolytes, thus forming Zn(H 2 O) 6
2+
to move within the ion conducting medium.The aqueous electrolytes in zinc-ion batteries can also be alkaline, that KOH aqueous electrolyte is the most traditional one. [81]Compared to neutral or weakly acidic aqueous electrolytes, alkaline aqueous electrolytes are subject to severe side reactions with MnO 2 cathode and irreversible consumption of Zn 2+ , leading to fast capacity decay and short lifespan of full-cell batteries.Therefore, recent research has shifted the primary focus on the development of neutral and mild aqueous electrolytes in AZMBs, while various salts have been studied, e.g., Zn(NO 3 ) 2 , [82] ZnCl 2 , [83] ZnSO 4 , [84] Zn(BF 4 ) 2 , [85] Zn(CH 3 COO) 2 , [86,87] Zn(ClO 4 ) 2 , [88,89] Zn(CF 3 SO 3 ) 2 , [90] and Zn(TFSI) 2 . [91]At the moment, the energy density of the 18650-type commercial LIBs has reached 250 Wh kg −1 or 670 Wh L −1 , [92] while that of SIBs usually lies in the range of merely 75-165 Wh kg −1 or 250-375 Wh L −1 . [93][96] Although AZMBs have not realized their commercial application up till now, they are still very tempting for future production and market due to their attractive advantages.
Alkaline Electrolyte
The metallic zinc anode loses two electrons to Zn 2+ ions, which react with OH − in the alkaline electrolyte and form Zn (OH) 2 as a start.Zn(OH) 2 keeps reacting with OH − that ends up with Zn (OH) 4 2− , resulting in its increasing concentration until a critical value, then forming Zn(OH) 2 at the electrolyte/electrode interface [97] and eventually turning into ZnO covering the surface of anode.This process can lead to an irreversible loss of Zn anode by forming a thick layer of electrochemically inactive ZnO. [98]uch abovementioned reactions in alkaline electrolytes can be illustrated by the following equations: (1) Furthermore, the general reaction mechanism can be described as: [99] Discharging Overall : Zn
Charging
ZnO
Neutral and Mild Electrolytes
In neutral or mildly acidic aqueous electrolytes, Zn metal anode is in a different environment from the alkaline electrolytes, where metallic Zn loses an electron and become Zn 2+ to combine with H 2 O octahedral, thus forming [Zn (H 2 O) 6 ] 2+ .Due to the large size of [Zn(H 2 O) 6 ] 2+ , this complex has to remove the outer water layer before inserting into cathode or plating on Zn anode.These following equations explain the reactions that occurred at the anode in neutral or mildly acidic electrolytes: Furthermore, the general reaction mechanism could be simply represented by the following equations: Discharging Charging While the reaction mechanisms of zinc anode under different pH environment have been introduced, the following content in this review article will primarily concentrate on the remaining challenges in neutral and mild aqueous electrolytes, emphasizing the corresponding strategies to mitigate the Zn anode instability.
The Challenges of Zinc Anode in Aqueous Electrolytes
Due to the merits of high-capacity metallic Zn anode combined with non-flammable aqueous electrolytes featured by high conductivity, AZMBs exhibit both high energy density and improved battery safety.However, a few problems still need to be tackled, e.g., depletion of active zinc mass and unstable battery cycling, which are mainly ascribed to the undesirable issues at the interface of the Zn metal anode and aqueous electrolytes.These interfacial issues include (1) the formation of zinc dendrites or "dead Zn"; (2) HER; and (3) corrosion and passivation.Figure 2 is a schematic diagram illustrating each one of them, as well as their mutual relations.Although the specific mechanisms behind these interfacial issues may be slightly different in alkaline electrolytes or neutral and mild electrolytes, they are ubiquitous in AZMBs and significantly deteriorating the cell performance and lifespan.
Zn Dendrite Growth
Metal dendrites growth may be the most notorious problem in alkali-metal batteries, that the undesirable formation and drastic growth of Zn dendrites are responsible for the rapid decline of capacity and battery failure in AZMBs.To effectively design modification strategies to mitigate or even eliminate such an issue, thoroughly understanding the mechanisms behind the process of Zn dendrites formation and growth is of great importance.In general, the formation of zinc dendrites is evoked by the ununiform electrodeposition of Zn on the nonideal surface of Zn anode during the repeated stripping/plating process.At the initial stage, the deposition of Zn 2+ is decided by the transport kinetics of Zn 2+ in liquid phase mass transfer near the anode.During charging, Zn 2+ ions near the surface of the metallic Zn anode are drawn toward the anode under the drive of electric field force, then reduced to metallic Zn with electrons obtained from zinc anode and deposited on the nucleation sites if the nucleation overpotential is overcome. [100,101]The surface-adsorbing Zn atoms can freely migrate in plane and accumulate with other constantly generated ones at the energetically favorable charge transfer sites at the anode surface, forming an undulate surface morphology consisting of multiple small bumps of deposited Zn. [102][103][104] These small bumps are the so-called initial Zn core.Meanwhile, a concentration gradient is created between the electrolyte/anode interface and the bulk electrolyte because of the deficiency of Zn 2+ ions that have been extracted to the metallic Zn, which causes a shift in potential from the equilibrium potential of electrode.Such a deviation of potential can promote the reduction of Zn 2+ near the elevated area on the surface of anode or the previously deposited Zn where the surface energy is lower. [58,105]The uneven surface after the initial deposition and formation of Zn cores, induces the nonuniform electric field, especially at the tips of the Zn protrusions.The electric field intensity at those tips is much stronger than elsewhere due to the accumulated surface charge at the area with greater curvature, [106][107][108][109] that further facilitates the deposition of Zn 2+ at the tips and rapid growth of dendrites, known as "tip effect".In terms of high surface energy, the active nucleation sites for Zn 2+ ions also include dislocation, boundaries, and surface impurities, in addition to the tips of Zn cores. [110]Therefore, Zn dendrites formation and growth process are essentially determined by the ununiform ion concentration, uneven electric field, and surface morphology of metal anode, which are directly associated to the electrolyte formulation, current density, charging time, temperature, and other side reactions occurred at the anode interface. [111]s Zn dendrites continue to grow from the anode toward the cathode, they either end up penetrating the polymer sepa-rator and causing short circuit of the battery cell, or fracture into small pieces of zinc bulk that are insulated from the electrode mass, forming "dead Zn".These fractured dendrites dispersed in the electrolyte substantially increase the interfacial impedance and reduce the Coulombic efficiency (CE) and anode capacity, while aggravating its reaction with electrolyte and generation of byproducts. [112]Meanwhile, the rampant growth of Zn dendrites provides a breeding bed for other side reactions, e.g., corrosion and HER, because of the increased specific surface area of the loose structure and rough surface of anode. [113]
Hydrogen Evolution Reactions
HER is a unique and grievous problem in AZMBs that unceasingly occurs during the rest and operation of the battery cells.Hydrogen evolution exhibits distinct mechanisms in different electrolytes.In alkaline electrolytes, it tends to occur ahead of Zn deposition owing to the lower standard reduction potential of Zn/ZnO (−1.26 V versus SHE) than that of hydrogen evolution (−0.83 V vs SHE), which water preferably obtain electrons in oxidized state. [114]In other words, Zn and H 2 O that coexist in the AZMBs with alkaline electrolytes are thermodynamically unstable and prone to react and release hydrogen. [115]The related reactions are shown as Equations 16 and 17, Intuitively, the same condition should apply in neutral and mild electrolytes, considering the standard reduction potential of Zn/Zn 2+ (−0.76 V vs SHE) is still lower than that of H + /H 2 ((0 V vs SHE), with their reactions shown in Equations 18 and 19. [116] 2+ + 2e − ↔ Zn (−0.76 V) (18) However, the reduction Zn 2+ ions happens in advance of HER in neutral and mild electrolytes because of the high HER overpotential on Zn metal, [117] sluggish surface kinetics, [113] and low activity of H + .According to the Tafel equation: = blogi + a ( equals to the HER overpotential, a is the constant value, b is called Tafel slope which is also a constant value and i stands for the current density).For different metals, the value of b almost remains unchanged (≈0.12 V), while the constant a for zinc metal in aqueous solutions is comparably high, leading to the high HER overpotential on Zn metal surface and suppression of hydrogen generation.Then again, although the rate of HER may be limited in some way in neutral and mild electrolytes, when the Zn deposition voltage surpasses the electrochemical stability window of water, the evolution of hydrogen is still unavoidable.In the commonly used electrolyte of 2 M ZnSO 4 aqueous solution, for instance, the Zn plating process takes place at a voltage lower than −0.15 V versus Zn/Zn 2+ , which is outside the electrochemical stability window of water (−0.05 to 1.7 V vs Zn/Zn 2+ ). [118]In practice, HER in neutral and mild electrolytes is also subject to kinetics factors, such as the polarization during charging, [102,119] hydrogen concentration, [120] operating temperature, [121][122][123] and current density, [124] etc.In other words, HER can also be significantly facilitated by the non-uniform Zn deposition and non-ideal anode surface.
HER deteriorates the battery cells in many ways: (1) the irreversible HER shares electrons with the reversible deposition of Zn 2+ that lowers the battery reversibility and current efficiency; (2) the unceasing generation of hydrogen consumes both zinc anode and aqueous electrolytes, resulting in reduced CE and capacity of the batteries; (3) the adsorption of evolved hydrogen gas on Zn anode surface impedes the nucleation of Zn and induces large overpotential [62] and non-uniform deposition; [125] (4) the gradual buildup of hydrogen gas raises the internal pressure of the cell, [29] causing the inflation of battery package and electrolyte leakage, [46] while the evolution of flammable gas carries the risk of fire and explosion; (5) A local alkaline environment can establish when HER depletes the protons in the vicinity of anode/electrolyte interface, that promote anode corrosion and formation of passivation layer. [120]
Corrosion and Passivation
Both chemical corrosion and electrochemical corrosion can happen to the metallic Zn anode in aqueous electrolytes.The chemical corrosion, or referred as self-corrosion, is prevalent in alkaline electrolytes owing to the more positive redox potential of HER (−0.83 V vs SHE, Equation 17) than that of Zn/ZnO (−1.26 V vs SHE, Equation 16), that the reaction between the metallic Zn with water is thermodynamically spontaneous, producing H 2 and ZnO.It should be noted that the ZnO-contained byproducts tend to adhere to and roughen the anode surface, being unstable during the large volume change of Zn anode and subject to constant growth, which significantly raises the electronic impedance at the interface.In neutral and mild electrolytes, the electrochemical corrosion mechanism is dominant during the repeated cycling of charging and discharging, which irreversibly consumes the Zn metal anode, causing the loss of its active mass and inert byproducts on the surface. [55]The irreversible loss of active mass of anode mainly attributes to the Zn dendrites growth as well as emergence of "dead Zn", as illustrated in Section 3.1.Meanwhile, during discharge, the oxidation of metallic Zn induces concentrated Zn 2+ ions near the Zn surface and attracts anions from electrolyte, triggering the parasitic reactions and formation of corrosion products upon the establishment of loose and porou [Zn(OH) 2 ] 3 •ΧH 2 O(ZSH) layer.For example, in the most widely used ZnSO 4 -contained aqueous electrolyte, the following equations describe the formation of ZSH on metallic Zn. [54] Zn ↔ Zn 2+ + 2e − (20) While the electrochemical corrosion reactions directly consume the Zn anode and electrolytes causing capacity fading, the formation of insoluble byproducts, e.g., ZSH, can reduce the amount of active nucleation sites and lead to an uneven anode surface, which results in unregulated deposition of Zn and dendrites growth.Moreover, the diffusion of ions or electrons at the anode surface is significantly restricted due to the poor conductivity of the byproducts, further raising the energy barrier for Zn deposition and lowering the CE of the battery. [126,127]orse still, in contrast to the dense and homogeneous passivation layers usually formed by corrosion in strong alkaline electrolytes, [128] byproduct layers formed in neutral and mild electrolyte cannot act as solid electrolyte interphase (SEI) layers because they formed the loose structure which showed hexagonal layered stacking.Therefore, their loose structure couldn't effectively impede the direct contact between electrolyte and zinc anode. [54,125]And a series of side reactions are still occurring at the Zn anode/electrolyte interface.So, it is important to find effective approaches to address these interfacial issues in AZMBs, not only to mitigate the severe damages they caused, but also to suppress their mutual effects which significantly accelerate the battery failures.Researchers have proposed numerous solutions to those issues in recent years.The modification approaches typically follow three types of technique routes, that are surface regulation, structure construction, and electrolyte engineering, while the former two strategies are based on the design of Zn anode.For example, Renjie Chen's group.created nitrogen (N)doped graphene oxide (NGO) to obtain a parallel and ultrathin interface modification layer (≈120 nm) on Zn foil, that the N-NGO@Zn||LiMn 2 O 4 pouch cells maintained a high energy density at depth of discharge of 36%, i.e.,164 Wh kg −1 after 178 cycles. [129]Besides, Chengjun Xu's group introduced the layerby-layer structure zinc anode (Sn/Cu/Zn) to achieve high performance zinc-ions battery, [130] where the Sn/Cu/Zn composite anode showed improved cycle performance (over 250 h at 0.5 mA cm − 2 ) than the bare zinc anode.However, both the surface regulation and structure construction strategies have their limits, for example, the complex process, resulting in the increase process time and energy consumption.Moreover, structure collapse of Zn anode may still happen after long cycling time, leading to poor cycling stability.Therefore, it is difficult for them to realize large industrial applications.On the other hand, the electrolyte formulation could be an ideal approach to make up for those disadvantages that talked above mentioned.They are easy to formulate, consuming less time and energy, leading to low cost, and exploring new and green electrolyte formulation is also comparably reasonable for companies, electrolyte formulation has great potential for large industrial application.
Aqueous Electrolytes
Currently, zinc sulfate (ZnSO 4 ), zinc trifluoromethanesulfonate (Zn(CF 3 SO 3 ) 2 ), and zinc bis (trifluoromethylsulfonyl)imide (Zn(N(CF 3 SO 2 ) 2 ) 2 ) (known as Zn(TFSI) 2 ), are most commonly used in mild electrolytes because of their stability during plating/stripping process. [131]However, water existed in the solvents tend to decompose at increased operating voltage and raises a series of problems.Therefore, the activity of water should be restrained concerning the inhabitation of parasitic reactions.[139] On the other hand, deep eutectic solvents (DES) are offered as another choice, [122,[140][141][142] that change the solvation structure of Zn 2+ and desolvation energy, largely improved compared to the highly active [Zn(H 2 O) 6 ] 2+ in normal aqueous electrolytes.The summary of aqueous electrolyte engineering will be divided into three parts in this review, i.e., the dilute/concentrated electrolyte, the WiS electrolyte, the DES electrolytes (Figure 3).They are essential for manipulating reactions occurring at the electrolyte/anode interface and have huge impacts on the performance of AZMBs.
Dilute/Concentrated Electrolytes
With extensive choices of zinc salts including ZnSO 4 , [143] ZnCl 2 , [83] and Zn (CH 3 COO) 2 , [86] etc., the concentration of solutions acts a crucial role in electrochemical properties of the aqueous electrolytes, that dilution or concentrated electrolytes are defined by different water content.Each Zn 2+ ion in dilute electrolytes is coupled to 4 to 6 water molecules, and this high concentration of water enables quick ion conduction in the solution.A low-concentration perchlorate-based salt called 3 M Zn (ClO 4 ) 2 electrolyte was investigated by Hang Zhou and colleagues.This electrolyte displayed a high conductivity of 4.23 mS cm at extremely low temperatures (−50 °C) (Figure 4a), [89] indicating its superior anti-freezing ability and stable operation.However, the overly abundant water also results in various problems, for example, the limited electrochemical stability window aroused from water decomposition.As a proof, cathode dissolution was observed in conventional dilute electrolyte (1 Mm ZnSO 4 or 1 Mm Zn(TFSI) 2 ) due to the insufficient electrochemical stability window. [75]On the contrary, the electrolyte with low water content (1 m Zn(TFSI) 2 + 21 m LiTFSI) highly suppresses the activity of water owing to the extensive coordination, that results in stable open framework of Zn storage and high reversible capacity of 60.2 mAh g −1 over 1600 cycles (Figure 4b). [75]Strong interactions between the water molecule around Zn 2+ and Zn 2+ result in a high energy barrier for the deposition of Zn 2 on the anode.Meanwhile, the excess water molecules in electrolyte would compete with Zn 2+ for electrons, thereby worsening hydrogen evolution and deteriorating the CE of the batteries.By reducing the amount of water, Bing Joe Hwang's group prepared the concentrated aqueous electrolyte [89] Copyright 2022, Wiley.b) 1 m Zn (TFSI) 2 + 21 m LiTFSI concentrated aqueous electrolyte's cycling efficiency of Zn||NiHCF batteries.Reproduced with permission. [75]Copyright 2021, Wiley.c) Diagram showing the formation of passivation layer Zn 4 (OH) 6 SO 4 •xH 2 O and the solution structure of Zn 2+ during electrodeposition in dilute electrolyte (left) and corresponding concentrated electrolyte (right).d) SEM photography of the surface morphology following 100 iterations of plating and stripping process.e) Using concentrated aqueous electrolyte, the ability of Zn on Cu foil to be stripped off the metal and the coulombic efficacy of Zn||Cu cell.Reproduced with permission. [134]Copyright 2020, Amer Chemical Society.f) Zn||stainless steel cell linear sweep voltammetry at a scan rate of 10 mV s −1 .Reproduced with permission. [146]Copyright 2023, Amer Chemical Society.efficiency of 99.21% over 1000 h under 0.2 mA cm 2 (Figure 4e), [134] while the corresponding dilute electrolyte ) only provided an average columbic efficiency of 97.54% and relatively short cycle life, indicating the alleviated interfacial stability and improved cyclability using concentrated electrolyte, as illustrated by Figure 4c.In addition to extending the lifespan of AZMBs, some of the researches have broadened the electrochemical windows of the electrolytes.A multi-hydroxyl polymer (polyethylene glycol, PEG) cosolvent was introduced by Jin zhou's group. 2 m of zinc trifluoromethanesulfonate (Zn(OTf) 2 ) in PEG/H 2 O (50 vol% PEG + 50 vol% H 2 O) was investigated and the zinc metal batteries could be operated over wide temperatures between −20 and 80 °C. [144]when the temperature is low, PEG molecules tend to absorb on the zinc metal, suppress the zinc dendrite growth, and when the temperature is high, PEG molecules help suppress the parasitic reactions.Organohydrogen electrolytes (OHEs) are prepared by swelling freeze-dried hydrogels of poly (2-acrylamide-2-methylpropane sulfonic acid)/polyacrylamide in a binary solvent electrolyte of ethylene glycol and water (EG/H 2 O, water content 10% v/v) containing ZnCl 2 /NH 4 Cl, it endowed zinc-metal batteries wide operating temperature (−30 to 80 °C) with excellent capacity retentions of 88.8% after 1500 cycles at −30 °C and 44.8% after 1000 cycles at 80 °C. [145]Hybrid electrolytes of water and a polar aprotic N, N-dimethylformamide is utilized and forms Zn 5 (CO 3 ) 2 (OH) 6 solid electrolyte interphase (SEI) on the Zn surface to achieve good performance of zinc-metal batteries over a wide temperature range.Bing Joe Hwang's group utilized a highly concentrated salt electrolyte (HCE) with dual salts (1 m Zn(OTf) 2 + 20 m LiTFSI), [146] that the electrochemical stability window was expanded up to 2.856 V, as shown in Figure 4f.Another example is the concentrated nitrate electrolyte (2.5 m Zn(NO 3 ) 2 + 13 m LiNO 3 aqueous solution) with DMA diluted, [147] and the widest electrochemical window is determined to be 3.1 V. Based on these studies, it appears feasible to raise the salt concentration of aqueous electrolytes to broaden their electrochemical stability window and improve interfacial stability.
Water-in-Salts (WiS) Electrolytes
Compared to either dilute or concentrated electrolytes, i.e., saltsin-water electrolytes, where the amount of water still greatly outnumbers that of zinc salts in solution, water-in-salts electrolytes contain very little water content, that its volume and mass are significantly lower than those of zinc salts. [148]By minimizing the free water molecules contained, WiS electrolytes manifest widened electrochemical stability windows. [149]In 2015, Suo et al. designed an electrolyte solution with incredibly high salt content, where the cation solvation shell changed because of insufficient water to neutralize the cation charge, thereby achieving ESW of 3 V and successfully suppressing hydrogen evolution and electrode oxidation. [150]Jiri Cervenka's team prepared a WiS electrolyte based on Zn(ClO 4 ) 2 for Zn g-1raphite dual-ion batteries, which has wide voltage window of 2.80 V. [151] The chaotropic ClO 4 − anions in the water-in-salts electrolyte inhibit the ability of water reacting with other compounds, meanwhile reshaping the solvation shell of Zn 2+ due to the interaction between ClO 4 − anions and H 2 O.In addition, the high concertation of Zn 2+ ions prevent Zn anode corrosion, enabling a low overpotential of Zn 2+ plating/stripping (Figure 5a) and high cut-off voltage (2.5 V Zn/Zn 2+ ) (Figure 5b) in aqueous Zn-graphite dual-ion battery.And they also applied ClO 4 -based WiS electrolyte Al(ClO 4 ) 3 as the suitable electrolyte with cathode material graphite, resulting in the wide electrochemical window of 4.0 V. [153] Unfortunately, the impact of various concentrations of ClO 4 − anions on solvation shell of H 2 O was not fully explained.Chunsheng Wang's group introduced aqueous electrolytes based on 1 M Zn(TFSI) 2 with various concentrations of LiTFSI. [152]Using Fourier transform infrared (FTIR) (Figure 5c) and nuclear magnetic resonance (NMR) spectroscopies (Figure 5d), they examined the relationship between various aqueous electrolyte concentrations and the interaction between Zn 2+ ions, H 2 O, and TFSI − .The water molecules may become severely confined within the Li + solvation structures at higher LiTFSI concentrations, which significantly reduces the amount of water near Zn 2+ ions.In order to theoretically study the Zn 2+ solvation structure in aqueous electrolytes with 1M Zn (TFSI) 2 and three concentrations of LiTFSI ((5 m, 10 m, and 20 m), Molecular Dynamics (MD) stimulations were also performed through the polarizable APPLE&P force field (Figure 5e).When the concentration of LiTFSI increased to 10 m, the water surrounded by Zn 2+ ions was replaced by anions simultaneously.
With CE close to 100%, this WIS electrolyte (1 m Zn (TFSI) 2 + 20 m LiTFSI) demonstrated the highly reversible and dendritefree Zn plating/stripping process.The hybrid Zn-LiMn 2 O 4 battery using such a WIS electrolyte exhibited 85% capacity retained after 400 cycles, while the Zn/O 2 battery manifested a high energy density of 300 W h kg −1 for 200 cycles.Recently, another highly concentrated salt electrolyte (HCE) with dual salts (1 m Zn(OTf) 2 + 20 m LiTFSI) showed outstanding capacity retention at 92% after 300 cycles and an average CE of 99.62% in Zn-LiMn 2 O 4 cells. [152]In sharp contrast to the average CE of 96.91%, the battery using low-concentration electrolyte (LCE) deteriorated quickly and short-circuited after 66 cycles. [146]
Deep Eutectic Solvents (DESs)
Although WiS electrolytes have demonstrated its broadened electrochemical stability window and enhanced interfacial stability with metallic Zn, their application is still hindered by the high cost arising from the increased amount of LiTFSI used.Deep eutectic solvents (DESs) become a promising candidate considering their similar benefits as WiS electrolytes but much lower cost.In addition, DESs can be easily formulated and tend to maintain stability in a wider range of temperatures above and below room temperature. [154]DES is composed of two or more components, which is a eutectic mixture of Lewis or Brønsted acids and bases containing a variety of anionic and/or cationic species, [155] or considered as a free-flowing solution consisting of two or more solid organic materials, that has a melting temperature lower than that of an ideal liquid mixture. [156]DESs are facile to prepare, that the molar ratio between components or water content can be adjusted to meet different requirements.The solvation shell structure of Zn 2+ ions can be altered thereby, when a specific quantity of water is introduced into DES, which results in an increased Zn 2+ ion conductivity.Mai's group developed a novel eutectic electrolyte with special solvation structure, containing ethylene glycol (EG) and ZnCl 2 , that achieved the dendritefree ZIBs with long cycle life. [157]With the optimized molar ratio of ZnCl 2 /EG (1: 4), the ionic conductivity of DES was maximized to 1.15 mS cm −1 .The EG within the DES interacted with Zn 2+ and changed their solvation structure, resulting in the creation of [ZnCl(EG)] + and [ZnCl(EG) 2 ] + complex cations.Their decomposition induced a Cl-rich organic-inorganic hybrid solid electrolyte interphase film formed on the metallic zinc anode surface (Figure 6a), that assists the reversible plating and stripping of Zn.Therefore, the Zn symmetric cell exhibited highly reversible Zn plating/stripping and long-term stability of 3200 h at 1 mA cm −2 and 1 mAh cm −2 (Figure 6b), while the polyaniline||Zn cell manifested good cycling stability of 10 000 cycles with 78% of capacity retained.Guanglei Cui's group designed a new electrolyte named "water-in-DES", that contains approximately 30 mol% H 2 O in a eutectic mixture of urea, LiTFSI, Zn(TFSI) 2 . [122]In such a "waterin-DES" electrolyte, the nature of deep eutectic solvents was inherited despite of the addition of water content (Figure 6c), that effectively suppressed the water activity due to the interaction between DES and water, while the advantages of aqueous electrolytes were also maintained benefiting the ionic conductivity and viscosity.The high-voltage Zn/LMO batteries using "waterin-DES" electrolyte demonstrated the stable cycling of 600 cycles ) Graph illustrating the connection between cut-off potentials, released capacities, and coulombic efficiencies.Reproduced with permission. [151]Copyright 2022, The Royal Society of Chemistry.c) The evolution of the FTIR spectrum between 3800 and 3100 cm −1 with rising salt concentration.d) The modification of 17 O nuclei's chemical shifts in solvent as a result of variations in salt content.e) Molecular dynamics analyses of the Zn 2+ -solvation structure.f) The cycling stablity and CE of Zn/LiMn 2 O 4 full cell in HCZE at 4 C. Reproduced with permission. [152]Copyright 2018, Springer Nature.g) The cycle performance of Zn/LMO full cell in HCE and LCE at charge rate of 0.1 C and discharge rate of 0.2 C. Reproduced with permission. [146]Copyright 2021, American Chemical Society.at 0.5 C with over 80% of capacity retained, or with over 90% retained after 300 cycles at 0.1 C (Figure 6d).Likewise, Zhanliang Tao et al. prepared a similar "water-in-DES" electrolyte based on ZnCl 2 , acetamide, and water, that had benefits of both ionic liquid and concentrated electrolytes. [141]This "water-in-DES" electrolyte changed the solvation structure of Zn 2+ and reduced its desolvation energy barrier, thereby lowering the nucleation overpotential and favoring uniform Zn nucleation (Figure 6f).The formation mechanism of ZnCl 2 -acetamide-H 2 O was investigated through spectra analysis and density functional theory (DFT) calculations, that [ZnCl(acetamide) 3 ] + was found to be tetrahedral structure (Figure 6e).The superior cycling stability of this "water-in-DES" electrolyte was proved by the Zn/Ti cell, which worked for 1000 cycles with an average CE of 98%.The corresponding Zn||PNZ full-cell battery exhibited a stable cycling for 10 000 cycles, with capacity retention of 85.7% to the initial 72.3 mA h g −1 .However, despite of the effectiveness of DES electrolytes in suppression of side reactions, most of them can only work under low current densities, typically from 0.05 to 0.5 mA cm −2 , which are much lower than that of dilute electrolytes.
Except for the abovementioned three categories of aqueous electrolytes, there are some other special aqueous electrolyte .Reproduced with permission. [157]Copyright 2022, Wiley-VCH.c) Results from Raman analysis for the LZ-DES/nH 2 O and pure LZ-DES (the Li/urea ratio of 1:3.8).d) Comparison of the Zn/LMO cell cycling ability in LZ-DES/2H 2 O at different rates to that in 0.5 m LiTFSI + 0.5 m Zn (TFSI) 2 .Reproduced with permission. [122]Copyright 2019, Elsevier.e) DFT calculations of the optimized coordination structures of [ZnCl(acetamide) 3 ] + and [ZnCl(acetamide) 2 (H 2 O)] + ,respectively.f) Schematic representations of the unaltered, electrostatic shielding mechanism and enhanced solvation shell-based Zn nucleation and growth process, respectively.Reproduced with permission. [141]Copyright 2021, Wiley-VCH.
formulations.For example, a hybrid electrolyte containing an equal amount of water and glycerol was prepared by Zhang et al., [158] that enabled the flat and smooth Zn metal surface after the repeated stripping and deposition of Zn 2+ , without dendrites formed.The added glycerol has a high affinity to the Zn metal and strong binding interactions with Zn 2+ that adjusts its solvation-shell structure in hybrid aqueous electrolyte to form (Zn (OH 2 ) 2 (C 3 H 8 O 3 ) 3 ) 2+ ions with less coordinated water.Thereby, it is effective to inhibit the side reactions at the electrolyte/anode interface.With homogeneous deposition and nucleation of Zn on metal surface, the Zn||Zn symmetric cell manifested a stable cycling of more than 1500 h at 1 mA cm −2 via addition of glycerol, while the Zn‖CaV 6 O 16 •3H 2 O (CaVO) full cells provided a high-capacity retention of 86.6% with a reversible capacity of 136 mAh g −1 after 400 cycles.
Electrolyte Additives
In addition to the electrolyte formulation, aqueous electrolyte additives can also alleviate the side reactions at the anode interface, [158] broaden the electrochemical stability window, [159] suppress the dendrite growth, [115,160] and limit the diffusion mechanism, [161] thereby improving the interfacial stability. [162]lthough additive is not necessarily the constituent part of an aqueous electrolyte, additive-contained electrolytes exhibit much longer cycle life compared to those without additives.Electrolyte additives provide a low cost, eco-friendly, and easy-operating solution to the interfacial stability issues, which favors the large-scale industrialization of AZMB.These additives can be broadly categorized into ionic additives, [163,164] organic additives, [103,[165][166][167] inorganic additives, [168,169] and metal additives. [112]Ionic additives have the most versatile functions yet simple preparation, that they can not only modify the interfacial stability, but also enhance the ionic conductivity of the aqueous electrolytes.][178] While inorganic and metal additives are less studied compared to the other two due to their limited solubility, they are proved to largely improve the performance of aqueous electrolytes in metallic zinc anode even at a very little concentration. [168,179]onsidering the complicated classification by composition, the electrolyte additives will be distinguished and introduced based on the mechanisms, mainly divided into five types: inducing nucleation on anode surface, [180][181][182] creating electrostatic shield, [134,183,184] adjusting solvation structure, [185][186][187] regulating deposition orientation, [160,[188][189][190] in situ building SEI layer [191][192][193][194] (Figure 7).97][198]
Inducing Nucleation
In most cases, the accumulation of deposited Zn 2+ ions ascribe to the uneven electric field and restricted nucleation sites on the metallic Zn anode surface.Therefore, the key to suppress formation and growth of Zn dendrite and occurrence of interfacial side reactions, lies in the establishment of uniform surface electric field and the creation of evenly spread of nucleation sites with refined nucleation size.
Jiaqian Qin et al. added a small amount of graphene oxide (GO) as an additive into the ZnSO 4 electrolyte, [199] that successfully eliminated the Zn dendrites and achieved highly stable Zn anode during cycling.Due to the preferable electrostatic interactions between GO particles and metallic Zn, as well as its firm adherence on the surface, the distribution of the electric field is homogenized.Meanwhile, the deposition of Zn 2+ is guided by the oxygen-containing polar groups on the surface of GO parti-cles, that quickly conduct Zn 2+ to the anode surface to form a uniform Zn deposition layer.As a matter of fact, both the nucleation overpotential and charge transfer resistance on the Zn anode surface were largely reduced by the GO electrolyte additives, while additional nucleation sites are induced and evenly distributed on the anode surface, as illustrated by Figure 8a.Compared to the uncontrolled electrodeposition of Zn in the pristine electrolyte, which usually results in uneven anode surface and formation of dendrites, a smooth and flat surface of Zn anode can be maintained during the battery cycling, owing to the improved nucleation process and enhanced reaction kinetics by adding GO additives.As a result, the cycle life of Zn symmetric cell with GO added was improved to five times longer than that with pristine electrolyte, while the Zn||Ti battery with GO achieved a high CE of 99.16% over 100 cycles at 1 mA cm −2 and 1 mAh cm −2 , with polarization of only 116 mV.Han et al. reported the graphene quantum dots (F-GQDs) as an electrolyte additive. [200]Under the action of highly electronegative polar groups (-OH, -COOH, -NH 2 , and -SCN), F-GQDs preferentially adsorb on the Zn anode surface to render a highly hydrophilic surface with low nucleation energy, and homogeneous distribution of electric field (Figure 8b).F-GQDs are acting as active nucleation sites for Zn 2+ , that regulate the uniform Zn deposition at a nano scale and prevent the .Reproduced with permission. [200]Copyright 2021, American Chemical Society.b) Schematic of in situ formation of F-GQDs on anode surface under the action of polar groups with abundant nucleation sites provided.c) Cycling performance of symmetric Zn cells in electrolytes with or without F-GQDs added, at 0.5 mA cm −2 & 0.25 mAh cm −2 , and 10 mA cm −2 & 5 mAh cm −2 , respectively.Reproduced with permission. [201]opyright 2023, Elsevier.d-f) SEM imaging of cross-section morphology of Zn anodes in d) pristine electrolytes after 100 h cycling, and e) EDTAadditive electrolytes after 100 h cycling, and f) EDTA-additive electrolytes after 500 h cycling.g-i) SEM imaging of surface morphology of Zn anodes in g) pristine electrolytes after 100 h cycling, and h) EDTA-additive electrolytes after 100 h cycling, and i) EDTA-additive electrolytes after 500 h cycling.j) Cycling performance of the symmetric Zn cells at 2 mA cm −2 and 2 mAh cm −2 with the corresponding.k) CE of Zn plating/stripping process.l) Cycling performance of Zn||V 2 O 5 battery at 2 A g −1 .Reproduced with permission. [202]Copyright 2022, American Chemical Society.
particle aggregation and formation of Zn dendrites.The Zn||Zn symmetric cell with F-GQDs cycled more than 1800 h at a current density of 0.5 mA cm -2 , or 450 h at 10 mA cm −2 (Figure 8c), while the Zn||Ti battery exhibited a high average CE of 99.6%, demonstrating the improved stability of Zn anode with F-GQDs additive.The Zn|MnO 2 full-cell battery using F-GQDs-added electrolytes delivered an initial specific capacity of 254.6 mAh g −1 at 1 A g −1 , with 200.1 mAh g −1 retained after 500 cycles and a capacity retention of 78.6 %.Xie et al. reported ethylenediaminetetraacetic acid (EDTA) as an electrolyte additive by creating active nucleation sites for Zn deposition. [201]Due to the Zn affinity of EDTA, it forms an adsorption layer on the Zn anode surface, that exhibited enhanced driving force for Zn nucleation and abundant nucleation spots under the action of strong chelation between EDTA and Zn 2+ .Zn 2+ ions tend to deposit in a large area of the anode surface, with reduced critical grain size from the very beginning of the Zn plating, guiding the formation of a dense and smooth deposition layer on Zn, as shown in (Figure 8d-i).As a result, the Zn||Zn symmetric cell with 0.04 m EDTA in ZnSO 4 electrolyte cycled stably for 3000 h at 2 mA cm −2 and 2 mAh cm −2 (Figure 8j), with CE of 99.68% (Figure 8k), while the Zn||V 2 O 5 full cell exhibited outstanding rate performance of 258 mAh g −1 at 2 A g −1 and improved cyclability of 199.7 mAh g −1 after 500 cycles at 2 A g −1 (Figure 8l).Besides, other than alleviating the dendrite growth, additives may also have the ability of widening the operation electrochemical window.For example, dimethyl sulfoxide(DMSO) additive was added into a type of multi-component crosslinked hydrogel electrolyte proposed by Bingang Xu's group, the full cell equipped with this type of electrolyte could be operated under −40 °C to 60 °C.
Electrostatic Shielding Effect
Some of the electrolyte additives can establish an electrostatic shield in the vicinity of protuberance on the uneven anode surface, that adsorbs on the high local current density sites prior to the Zn 2+ , thus engendering the electrostatic repulsion to approaching Zn 2+ and shifting the deposition to the adjacent flat regions.
A cationic surfactant-type electrolyte additive, tetrabutylammonium sulfate (TBA 2 SO 4 ), was first proposed by Changbao Zhu et al. [178] During the plating process of Zn, the nonredox TBA + ions preferentially adsorb near the humps on surface of Cu substrate, forming a zincphobic shielding film that regulates the initial Zn nucleation and suppresses the formation of dendritic morphology due to the electrostatic force against hydrated Zn 2+ in the electrolytes.In this way, both lateral diffusion of Zn 2+ and "tip effect" are largely restricted (Figure 9a), that a uniform Zn deposition with flat surface can be observed on the Cu foil, in sharp contrast to the highly dendritic and mossy Zn protrusions grown without TBA 2 SO 4 added.The electrostatic shielding effect of the zincphobic TBA + cations was also verified by DFT calculations, that the depositing Zn 2+ ions have to overcome an energy barrier of ≈0.55 eV to pass through the TBA + shielding layer (Figure 9b).Likewise, tetramethylammonium sulfate (TMA 2 SO 4 ) electrolyte additive was reported by Dunmin Lin et al., [184] which also effectively inhibited the stacking of deposited Zn on the Zn anode surface, as well as side reactions of corrosion and HER, by shifting the Zn 2+ deposition from the protrusion tips to other regions.The surface morphology of Zn foil was investigated via SEM imaging, that a flat surface with well-ordered Zn deposition layers was observed after cycling in TMA 2 SO 4 -added ZnSO 4 aqueous electrolyte for 100 h, presenting the best result compared to others containing different aqueous electrolytes additives (Figure 9c-h).The long-term cycling stability of symmetric Zn||Zn cell was demonstrated after adding a small amount of TMA 2 SO 4 (0.25 mm L −1 ) into the ZnSO 4 electrolyte, exhibiting a cycle life of 1800 h at 0.5 mA cm −2 with 0.5 mAh cm −2 (Figure 9i).The Zn||TMA 2 SO 4 @ZnSO 4 || MnO 2 full cell delivered a high initial capacity of 181.3 mAh g −1 at 0.2 A g −1 , with 98.72% of capacity retained after 200 cycles, as shown in Figure 9j.Reported by Hu et al., [203] the inorganic rare earth metal type electrolyte additive, cerium chloride (CeCl 3 ) was applied in AZMBs to alleviate the dendrite growth based on the electrostatic-shielding mechanism.By adding 2 g L −1 CeCl 3 in the 2 m ZnSO 4 aqueous electrolyte, a dynamic electrostatic shield was established on the anode surface, where the Ce 3+ acted as competitive cation against Zn 2+ , regulating the Zn 2+ deposition to the flat region for an even surface of Zn metal.This phenomenon was proved via finite element modeling (FEM) simulation, that Zn 2+ are moved to adjacent flat regions because of the electrostatic repulsion of concentrated Ce 3+ near tips (Figure 9k).The Zn symmetric cell with ZnSO 4 +CeCl 3 electrolyte exhibited an enhanced cycling stability of 2600 h at 2 mA cm −2 , with CE of 99.7%, while the LiFePO 4 ||Zn full cell delivered an adequate discharge capacity of 84 mAh g −1 at 2 C with 80% of its initial capacity retained after 400 cycles, (Figure 9l).
Adjusting Solvation Structure
Instead of shuttling between the electrodes in a single-ion form, the Zn 2+ ions tend to coordinate with six free H 2 O molecules in aqueous solutions and exist in a configuration of [Zn (H 2 O) 6 ] 2+ cluster.The active water molecules released from this solvation structure of hydrated complex, in the electrodeposition of Zn 2+ , can easily trigger various side reactions at the electrolyte/anode interface and deteriorate the interfacial stability. [204]Thereby, modifying the solvation structure of hydrated Zn 2+ is essentially effective to alleviate the side reactions and dendrite-free Zn anode.
By adding 0.2 wt.% PAM in 1 m ZnSO 4 , the Zn||Cu cell delivered a high CE over 99.65% under 2 mA cm −2 , 2 mAh cm −2 , as well as stable cycle life over 1300 h (Figure 10c).Wang et al. introduced a small amount of silk peptide as electrolyte additives to mitigate the parasitic reactions at the electrolyte/anode interface. [196]Verified by DFT calculations shown in Figure 10d, the highly soluble silk peptide contains abundant polar groups, such as -COOH and -NH 2 , that strongly interact with Zn 2+ and reduced the coordinated active H 2 O and SO 4 2− (Figure 10e).In this way, the side reactions of Zn anode are essentially suppressed, and a uniform and stable Zn deposition process is ensured as observed in Figure 10g.Benefiting from the electrostatic shielding effect of the silk peptide anchored on the anode surface, as well as the isolation between Zn and aqueous electrolyte (Figure 10f), the symmetric Zn||Zn exhibited stable cycling of 3000 h at 1 mA cm −2 and 1 mAh cm −2 with CE of 99.7% (Figure 10h).
Regulating Electrodeposition Orientation
Because the direction of crystal growth largely determines the surface morphology and dendrites formation, it is critical to control the Zn deposition orientation regarding its hexagonal close-packed (hcp) structure with high anisotropy platelets. [219]pecifically, as the typical Zn (101) and Zn (110) crystal planes are mostly vertically positioned against the Zn surface which benefits the dendrite growth, the (002) plane benefits the even Zn deposition due to the flat atom arrangement and even charge distribution. [220]By regulating the crystallographic orientation of electrodeposited Zn, the stability of Zn anode can be largely improved during the battery cycling. [221,222]ecently, Yingjin Wei's group reported a unique colloidal aqueous zinc electrolyte prepared by adding the oleic acid (OA) into a 2 m ZnSO 4 solution. [223]Instead of directly interacting with Zn 2+ and H 2 O or changing the solvation structure of Zn 2+ , such a ) n ] 2+ penetrating the TBA + cationic shielding on the surface of Zn metal anode with potential energy change with varying distance.Reproduced with permission. [178]Copyright 2020, American Chemical Society.c-h) SEM imaging of Zn anodes after cycling for c) 0 h, d) 100 h in pristine electrolytes without additives, e) 100 h with TMA 2 SO 4 , f) 100 h with TMAAc, g) 100 h with TMACl, and h) 100 h with TMANO 3 .i-j) Cycling performance of Zn symmetric cells at 0.5 mA cm −2 with 0.5 mAh cm −2 and Zn||TMA 2 SO 4 @ZnSO 4 || MnO 2 full cells at 0.2 A g −1 , respectively.Reproduced with permission. [184]Copyright 2022, Academic Press Inc., Elsevier Science.k) finite element modeling (FEM) simulation of Zn deposition after 2 min in Ce 3+ added electrolytes.l) Cycling performance of LiFePO 4 ||Zn full cell in ZnSO 4 +CeCl 3 electrolyte at 2 C. Reproduced with permission. [203]Copyright 2022, Wiley-VCH. 2− -polymer" bonding network and adjustment of space charge region of Zn anode with different functional polymer additives.c) Cycling performance of Zn anodes in 1 M ZnSO 4 aqueous electrolytes with and without 0.2 wt.% PAM additive at 2 mA cm −2 , 2 mAh cm −2 ; Reproduced with permission. [162]Copyright 2021, American Chemical Society.d) DFT calculations of bonding energies between Zn 2+ and different polar groups on silk peptide chains.e) Solvation structure of hydrated Zn 2+ with or without silk peptide.f) DFT calculation of adsorption energy of H 2 O and silk peptide on Zn (002) surface.g) In situ optical imaging of cross-section morphology of Zn deposition through time at 10 mA cm −2 .h) Cycling performance of Zn symmetric cells at 1 mA cm −2 and 1 mAh cm −2 .Reproduced with permission. [196]Copyright 2022, Wiley-VCH.
"temporary electrolyte additive" induces a stable OA spread on the metallic Zn anode surface due to the strong polar effect of -COOH group, that effectively guides the layer-by-layer deposition of Zn(002) horizontally on the anode surface, as shown in Figure 11a.While Zn platelets usually fail to deposit horizontally along the preferred (002) crystal plane because of the lat-tice mismatch between Zn (002) and polycrystalline Zn foil, the OA molecules can cling to the (002) surface of deposited Zn (Figure 11b), resulting in a flat and smooth anode surface.Meanwhile, the firm OA layer with hydrophobic alkyl chain protect the Zn metal anode from immediate contact with water in an aqueous electrolyte, further mitigating the side reactions at the Reproduced with permission. [223]Copyright 2023, Elsevier.e) Schematic of Zn deposition in ZnSO 4 (left) and ZnX 2 -PEG300 (right; X = Cl, Br, or I) electrolytes.f) SEM imaging of the large hexagonal Zn platelet formed in ZnI 2 -5%PEG300 electrolytes.g) XRD pattern of Zn deposits after growing for 24 h at 4 mA cm −2 .h) Cycling performance of Zn||Zn symmetric cells with different electrolytes at 25 mA cm −2 , 3.2 mAh cm −2 .Reproduced with permission. [224]Copyright 2022, Wiley-VCH.
anode/electrolyte interface.As a result, the Zn||Cu asymmetric cell using OA-added ZnSO 4 electrolyte manifested an ultra-long stable cycling over 3340 cycles and high CE of 99.63% (Figure 11c), while the Zn‖MnO 2 full cell provided a high discharge capacity of 215.1 mAh g −1 at 1A g −1 with capacity retention of 98.9% after 1100 cycles (Figure 11d).A short chain PEG300 was used as an additive in ZnI 2 electrolytes, which effectively regulated the crystal growth structure and deposition orientation of Zn on the metal anode surface, illustrated by Figure 11e, owing to the affinity of instantly formed PEG-Zn 2+ -aI − (a = 1,2,3) complexes to the (002) Zn crystal facets. [221,224]Compared to the sphere-like crystal structure of Zn grown in ZnSO 4 electrolytes, Zn deposits in ZnI 2 -PEG300 electrolytes are prone to form in a large hexagonal platelet shape (Figure 11f) and grow along the direction of 2D substrate with (002) surface exposed (Figure 11g).Thereby, the reversibility of Zn anode in ZnI 2 -PEG300 is largely improved compared to that in ZnSO 4 electrolytes, that the Zn symmetric cell (Figure 11h) with PEG300 added showed a stable cycling for over 4000 cycles and 1200 h at 25 mA cm −2 and 3.2 mAh cm −2 .
In Situ Forming SEI
There is another type of electrolyte additives decorating the metal anode surface through the in situ establishment of robust SEI layer, which not only isolates the Zn anode with water molecules in aqueous electrolytes, but also modifies the surface of anode to achieve the smooth deposition of Zn 2+ .A nonionic surfactant electrolyte additive, polyethylene glycol tert-octylphenyl ether (PEGTE), [179] was proved valid in in situ forming a H-ZnO passivation layer in honeycomb structure on the Zn anode surface, as shown in Figure 12a, protecting it from interfacial side reactions and regulating the distribution of surface electric field, as well as uniform Zn nucleation.Thereby, a uniform and dense layer of Zn deposition is ensured below the thin H-ZnO film (Figure 12b), as well as reduced contact to O 2 in aqueous electrolytes due the PEGTE-induced micelle particles.As a result, by additive 5wt.%PEGTE in aqueous electrolyte, the Zn metal achieved the improved cycling stability and reversibility for exceeding 2400 h or 1300 h at 5 mAh cm −2 (Figure 12c) or 10 mAh cm −2 , respectively.While an average CE of 99.2% was accomplished at 3 mAh cm −2 , Figure 12. a) Schematic of the Zn plating process in both pristine electrolyte (upper) and electrolyte with PEGTE-5 added (bottom).b) In situ optical microscopy tracking the Zn plating process in pristine electrolyte (upper) and in electrolyte with PEGTE-5 added (bottom).c) Cycling performance of symmetric Zn cells at 5 mA cm −2 and 5 mAh cm −2 .Reproduced with permission. 179Copyright 2022, American Chemical Society.d) In situ FEC-induced ZnF 2 -riched inorganic/organic hybrid SEI layer on metallic zinc anode surface.e-f) Cycling performance of e) Zn symmetric cells 4 mA cm −2 and 1 mAh cm −2 , and f) Zn||O d -NH 4 V 4 O 10 full cell at 5 A g −1 .Reproduced with permission. [192]Copyright 2023, Wiley-VCH.g) Schematic of TS-Ns-induced SEI film regulating Zn 2+ deposition.h) CE of Zn plating/stripping process in pristine electrolyte and TS-Ns-added electrolyte at 20 mA cm 2 and 5 mAh cm 2 .Reproduced with permission. [225]Copyright 2022, Elsevier.
the full cell Zn metal batteries using V 2 O 5 cathode provided an exceptional reversible capacity of 142 mAh g −1 at a low negative/positive capacity ratio (N/P ≈ 3), with stable cycling of 600 cycles.
Wu's group reported an electrolyte additive, [226] 2-methyl imidazole (Hmim), that is capable of in situ forming an inorganicorganic zinc-rich (Zn 4 SO 4 (OH) 6 /Zn(Hmim)) SEI film on the Zn anode surface.Both the side reactions and Zn dendrite formation are significantly restricted regarding the outstanding mechanical, chemical, and thermal stability of the SEI layer.Moreover, such an SEI layer exhibits chelation effect with Zn 2+ , that regulates the Zn deposition to plenty of active sites in smaller nuclei sizes.The improved Zn anode stability and reversibility were demonstrated by the ultra-stable cycling of symmetric Zn||Zn cell (2000 h at 2 mA cm −2 with CE of nearly 100%) and excel-lent performance of Zn-V 2 O 5 full cell, with high reversible capacity of 174.5 mAh g −1 after 400 cycles at 2 A g −1 , as well as CE of 99.63 %.Likewise, an inorganic/organic hybrid SEI (ZHS) layer rich in ZnF 2 was successfully built on the Zn anode surface to suppress the dendrite growth and adverse reactions, [192] via the addition of Fluoroethylene carbonate (FEC) in the ZnSO 4 aqueous electrolytes (Figure 12d).The HF component released from FEC, which is considered thermodynamically unstable in an aqueous environment, removes the zinc hydroxycarbonate passivation layer readily formed on the Zn anode surface and produce ZnF 2 .At the same time, the organic components from FEC can be further polymerized under the catalytic effect of the exposed Zn, that construct the ZHS layer on the anode surface.This ZHS layer serves as a multi-functional SEI film, which not only isolate the free H 2 O molecules and Zn bulk to inhibit the anode corrosion and HER, but also reduces the de-solvation active energy of Zn 2+ and facilitate Zn deposition kinetics.Consequentially, the SEI-coated Zn symmetric cell operated steadily for 1000 h at 4 mA cm −2 and 1 mAh cm −2 (Figure 12e), while the Zn||O d -NH 4 V 4 O 10 full cell remained a reversible capacity of 208 mAh −1 after 500 cycles at 5 A g −1 (Figure 12f).Wu Chao et al. introduced 2D ultrathin anionic tin sulfide nanosheets (TS-Ns) as an electrolyte additive, [225] that co-deposits with Zn 2+ to the Zn anode surface at the initial plating process and constructs an SEI layer in the following cycling to protect the Zn anode surface and guide uniform deposition of Zn during cycling (Figure 12g).To demonstrate the effectiveness of the in situ formed interfacial protection layer, a symmetric cell was assembled with an artificial interfacial layer consisting of TN-Ns coated on the Zn foil, which exhibited a cycle life almost ten times longer than that of PVDF-coated Zn cell.Correspondingly, the SS||Zn asymmetric cell delivered a high average CE of 99.6% over 500 cycles at 20 mA cm −2 and 5 mAh cm −2 (Figure 12h), while the Zn||Zn symmetric cell provided stable cycling for more than 3700 h at 0.2 mA cm −2 .
Conclusion and Perspective
Aqueous Zn-metal batteries (AZMBs) have become a promising technology for energy storage due to their intrinsic safety, abundant resources, as well as adequate energy density.However, the metallic zinc anode in aqueous systems still suffers from a series of interfacial issues, which include Zn dendrites and "dead Zn" formation, hydrogen evolution reactions, and zinc anode corrosion and passivation.These problems may mutually reinforce each other and cause huge deterioration in the performance of AZMBs.So far, several strategies of aqueous electrolytes engineering have emerged to stabilize the interface between electrolytes and Zn metal anode, as well as enhancing the reversibility of metal anode, which are based on two aspects: the manipulation of aqueous electrolytes and the regulation of electrolytes additives.The design of aqueous electrolytes mostly relies on the tunning of salts and concentration, that dilute and concentrated electrolytes, water-in-salts (WiS) electrolytes, and deep eutectic solvent (DES) electrolytes have been extensively studied and proved to mitigate the side reactions at the electrolyte/anode interface.On the other hand, various electrolyte additives are found effective to establish stable interfaces and achieve highly reversible Zn anode during the repeated plating and stripping process.The major mechanisms of electrolyte additive optimizing interface include: (1) inducing uniform Zn nucleation on Zn anode surface by providing abundant nucleation sites and homogeneous electric field distribution, (2) electrostatically shielding the protuberance on anode surface to avoid overlapped Zn deposition, (3) adjusting the solvation structure of Zn 2+ with water or solvent anions replaced, (4) regulating the orientation of Zn electrodeposition to establish flat anode surface, (5) in situ forming a robust SEI layer to protect the bulk Zn anode.Excitingly, these modification approaches have achieved substantial results addressing the interfacial challenges concerning metallic Zn anode in aqueous electrolytes.
Nevertheless, based on the research results to date, none of the work in electrolyte engineering has come up with an ultimate solution to completely eliminate the interfacial issues in AZMBs.In fact, there are still plenty of room for future research of AZMBs. 1) Unveiling the formation mechanism of "dead Zn".Although it has been widely acceptedthat "dead Zn" comes from the fractured Zn dendrites, where the large chunks of Zn lose contact to the Zn bulk and fail to participate in the following stripping process, the actual formation mechanism may be rather complex than this.Meanwhile, the composition of "dead Zn" has not been quantitively determined.It is important to thoroughly understand the specific origin of "dead Zn" to reduce the possibility of low CE and significant decay of capacity.This implies the necessity of utilizing advanced characterization techniques in the future research, which can help to better understand the mechanisms of dendrite formation and identify effective strategies for suppression.This could include using in situ/operando techniques, such as microscopy and spectroscopy, to directly observe the growth and evolution of dendrites during battery operation.2) Precisely evaluating the effect of modification strategies based on a unified standard of testing conditions.The difference in testing conditions causes troubles in comparing the electrochemical performance of batteries at the same scale, which may result in misleading assessment.3) Designing non-toxic and safe electrolyte additives.While aqueous zinc metal batteries are generally considered safe and environmentally friendly, most of the electrolyte additives for AZMBs are still toxic or eco-unfriendly for now.Considering the need to further improve the safety and environmental impact of aqueous zinc metal batteries, more non-toxic additive materials should be developed in the future to further enhance the sustainability and safety of the technology.4) Scaling up the production of aqueous electrolytes and electrolyte additives.Despite of the great advance that has been achieved in the laboratory, there are not any commercialized AZMBs.The scalability of aqueous zinc metal batteries is another important perspective.Scaling up the production and deployment of aqueous zinc metal batteries to meet the growing demand for energy storage requires the development of cost-effective manufacturing processes and the optimization of the battery design to achieve high performance and stability.This should be considered in the future studies.
Figure 1 .
Figure 1.Schematic of the configuration of AZMBs including the major challenges involved with Zn metal anode.
Figure 4 .
Figure 4. a) Comparison of the 3 m Zn (ClO 4) 2 aqueous electrolyte conductivity-temperature relationship with others.Reproduced with permission.[89]Copyright 2022, Wiley.b) 1 m Zn (TFSI) 2 + 21 m LiTFSI concentrated aqueous electrolyte's cycling efficiency of Zn||NiHCF batteries.Reproduced with permission.[75]Copyright 2021, Wiley.c) Diagram showing the formation of passivation layer Zn 4 (OH) 6 SO 4 •xH 2 O and the solution structure of Zn 2+ during electrodeposition in dilute electrolyte (left) and corresponding concentrated electrolyte (right).d) SEM photography of the surface morphology following 100 iterations of plating and stripping process.e) Using concentrated aqueous electrolyte, the ability of Zn on Cu foil to be stripped off the metal and the coulombic efficacy of Zn||Cu cell.Reproduced with permission.[134]Copyright 2020, Amer Chemical Society.f) Zn||stainless steel cell linear sweep voltammetry at a scan rate of 10 mV s −1 .Reproduced with permission.[146]Copyright 2023, Amer Chemical Society.
Figure 5 .
Figure 5. a) Profiles of stripping-plating displaying overpotential and matching capacity.b) Graph illustrating the connection between cut-off potentials, released capacities, and coulombic efficiencies.Reproduced with permission.[151]Copyright 2022, The Royal Society of Chemistry.c) The evolution of the FTIR spectrum between 3800 and 3100 cm −1 with rising salt concentration.d) The modification of17 O nuclei's chemical shifts in solvent as a result of variations in salt content.e) Molecular dynamics analyses of the Zn 2+ -solvation structure.f) The cycling stablity and CE of Zn/LiMn 2 O 4 full cell in HCZE at 4 C. Reproduced with permission.[152]Copyright 2018, Springer Nature.g) The cycle performance of Zn/LMO full cell in HCE and LCE at charge rate of 0.1 C and discharge rate of 0.2 C. Reproduced with permission.[146]Copyright 2021, American Chemical Society.
Figure 7 .
Figure 7. Strategies of aqueous electrolyte additives engineering for stable metallic Zn anode.
Figure 8 .
Figure 8. a) Schematic of Zn deposition process on anode surface under different electric field distribution in pristine electrolytes (upper) and GO-added electrolytes (bottom).Reproduced with permission.[200]Copyright 2021, American Chemical Society.b) Schematic of in situ formation of F-GQDs on anode surface under the action of polar groups with abundant nucleation sites provided.c) Cycling performance of symmetric Zn cells in electrolytes with or without F-GQDs added, at 0.5 mA cm −2 & 0.25 mAh cm −2 , and 10 mA cm −2 & 5 mAh cm −2 , respectively.Reproduced with permission.[201]Copyright 2023, Elsevier.d-f) SEM imaging of cross-section morphology of Zn anodes in d) pristine electrolytes after 100 h cycling, and e) EDTAadditive electrolytes after 100 h cycling, and f) EDTA-additive electrolytes after 500 h cycling.g-i) SEM imaging of surface morphology of Zn anodes in g) pristine electrolytes after 100 h cycling, and h) EDTA-additive electrolytes after 100 h cycling, and i) EDTA-additive electrolytes after 500 h cycling.j) Cycling performance of the symmetric Zn cells at 2 mA cm −2 and 2 mAh cm −2 with the corresponding.k) CE of Zn plating/stripping process.l) Cycling performance of Zn||V 2 O 5 battery at 2 A g −1 .Reproduced with permission.[202]Copyright 2022, American Chemical Society.
Figure 10 .
Figure 10.a) Schematic of solvation structures of Zn 2+ in aqueous electrolyte with different polymer additives.b) Rearrangement of the "Zn 2+ -H 2 O-SO 42− -polymer" bonding network and adjustment of space charge region of Zn anode with different functional polymer additives.c) Cycling performance of Zn anodes in 1 M ZnSO 4 aqueous electrolytes with and without 0.2 wt.% PAM additive at 2 mA cm −2 , 2 mAh cm −2 ; Reproduced with permission.[162]Copyright 2021, American Chemical Society.d) DFT calculations of bonding energies between Zn 2+ and different polar groups on silk peptide chains.e) Solvation structure of hydrated Zn 2+ with or without silk peptide.f) DFT calculation of adsorption energy of H 2 O and silk peptide on Zn (002) surface.g) In situ optical imaging of cross-section morphology of Zn deposition through time at 10 mA cm −2 .h) Cycling performance of Zn symmetric cells at 1 mA cm −2 and 1 mAh cm −2 .Reproduced with permission.[196]Copyright 2022, Wiley-VCH.
Figure 11 .
Figure 11.a) The irregular and well-oriented Zn growth on metal anode surface in ZnSO 4 (upper) and ZnSO 4 -OA (bottom) electrolytes, respectively.b) The adsorption energy of OA molecules on different crystal planes of Zn. c) Comparison between Zn‖Cu cells with or without OA additives added at 1 mA cm −2 and 0.5 mAh cm −2 .d) Cycling performance of Zn‖MnO 2 full cells with different electrolytes at 1 A g −1 .Reproduced with permission.[223]Copyright 2023, Elsevier.e) Schematic of Zn deposition in ZnSO 4 (left) and ZnX 2 -PEG300 (right; X = Cl, Br, or I) electrolytes.f) SEM imaging of the large hexagonal Zn platelet formed in ZnI 2 -5%PEG300 electrolytes.g) XRD pattern of Zn deposits after growing for 24 h at 4 mA cm −2 .h) Cycling performance of Zn||Zn symmetric cells with different electrolytes at 25 mA cm −2 , 3.2 mAh cm −2 .Reproduced with permission.[224]Copyright 2022, Wiley-VCH. | 15,713.6 | 2023-06-14T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Cancer Stem Cells in Renal Cell Carcinoma: Origins and Biomarkers
The term “cancer stem cell” (CSC) refers to a cancer cell with the following features: clonogenic ability, the expression of stem cell markers, differentiation into cells of different lineages, growth in nonadhesive spheroids, and the in vivo ability to generate serially transplantable tumors that reflect the heterogeneity of primary cancers (tumorigenicity). According to this model, CSCs may arise from normal stem cells, progenitor cells, and/or differentiated cells because of striking genetic/epigenetic mutations or from the fusion of tissue-specific stem cells with circulating bone marrow stem cells (BMSCs). CSCs use signaling pathways similar to those controlling cell fate during early embryogenesis (Notch, Wnt, Hedgehog, bone morphogenetic proteins (BMPs), fibroblast growth factors, leukemia inhibitory factor, and transforming growth factor-β). Recent studies identified a subpopulation of CD133+/CD24+ cells from ccRCC specimens that displayed self-renewal ability and clonogenic multipotency. The development of agents targeting CSC signaling-specific pathways and not only surface proteins may ultimately become of utmost importance for patients with RCC.
Introduction
Renal cell carcinoma accounts for 3-5% of all human cancers and, according to the American Cancer Society's 2023 estimates, 81,800 new cases will be diagnosed in the USA, and 14,890 individuals will die from this disease [1].The most prevalent histological subtypes of RCC are clear cell RCC (ccRCC), which can be considered as a metabolic disease due to the radical metabolic adaptations observed in cancer cells [2][3][4][5][6][7][8][9][10][11][12].Surgery is the gold standard treatment of localized disease, although one-third of patients are diagnosed with metastatic diseases and/or will develop disease recurrence after surgery [13][14][15][16][17][18][19].Ongoing research into the tumor microenvironment (TME) led to the approval of molecular targetbased agents including tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs) [20][21][22][23].Combinations of TKI and ICI are recommended as first-line therapy for advanced RCC even if patients may develop drug resistance over time [24].In this scenario, cancer stem cells (CSCs) are thought to play a crucial role in recurrence and metastasis in RCC patients.Recent research has characterized CSCs in kidney cancer, evaluated their presence, and compared their molecular profile to that of their normal counterparts.In this review, we aim to describe the main features of CSCs and their possible role in RCC biology.
Cancer Stem Cells: Definition and Properties
Different models have been proposed over the years to describe tumor development, progression, and heterogeneity.According to the clonal (or stochastic) model, differentiated cells undergo multiple mutations over time.In line with Darwinian theory, cell populations carrying a mutation that confers proliferative and/or survival advantage will replace those cells that lack it.The term "cancer stem cell" (CSC) refers to a cancer cell with the following features: clonogenic ability, the expression of stem cell markers, growth in nonadhesive spheroids, and the ability to differentiate into cells of different lineages and to generate in vivo serially transplantable tumors that reflect the heterogeneity of primary cancers (tumorigenicity).In immune-compromised mice (i.e., nonobese diabetic (NOD)/severe combined immunodeficiency mice (SCID)), CSCs appear to be the only cells able to generate a new tumor.Self-renewal depends on asymmetric cell divisions, which give rise to a quiescent stem cell and a committed progenitor that will differentiate.During the differentiation of committed progenitor cells, the expression of genes required for self-renewal (i.e., Oct4, Nanog, and Sox2) is repressed.In contrast, lineage-specific genes are switched on.In 1994, Dick and co-authors first isolated CSCs from acute myeloid leukemia (AML) patients [25].The cancer stem cell model or hierarchical model states that growth and propagation depend on CSCs, from which descendants will form a tumor (Figure 1).However, the majority of populations in the tumor mass behave as progenitor cells (or transit-amplifying cells) with limited proliferative potential.Progenitor cells may represent intermediates between stem cells and fully differentiated ones.According to this new model, CSCs may arise from normal stem cells, progenitor cells, and/or differentiated cells because of striking genetic/epigenetic mutations [26].Another theory suggests that CSCs may be derived from the fusion of tissue-specific stem cells with circulating bone marrow stem cells (BMSCs).More recently, Kreso et al. [27] proposed a unifying model of clonal evolution applied to CSCs.CSCs may acquire further mutations and generate new stem branches.Tumor cells in the non-CSCs subpopulation may undergo the epithelial-mesenchymal transition (EMT) and acquire CSC-like features, thus enhancing tumor heterogeneity [27].CSCs use signaling pathways similar to those controlling cell fate during early embryogenesis (Notch, Wnt, Hedgehog, bone morphogenetic proteins (BMPs), fibroblast growth factors, leukemia inhibitory factor, and transforming growth factor-β).While transiently activated in normal stem cells, they may encourage a long-lasting activation state in cancer stem cells [28].CSCs are thought to be the main cause of recurrence and resistance to therapy and appear inherently resistant to chemo-and radiotherapy [29,30].By promoting their active efflux, multidrug resistance (MDR) transporters (such as ATP-binding cassette-ABC) prevent drug accumulation in CSCs.ATP-binding cassette, sub-family B, member 5 (ABCB5) is a plasma membrane protein involved in the transport of small ions, sugars, peptides, and organic molecules (such as drugs) against a concentration gradient by ATP hydrolysis.It is overexpressed in CSCs of melanoma, liver, and colorectal cancers where it is thought to be associated with progression, chemotherapy resistance, and recurrence [31].It is believed that the effect of inhibiting a single ABC transporter may be counteracted by the simultaneous expression of several MDR transporters.Active DNA repair mechanisms may also explain their resistance to conventional therapies.Radiotherapy results in the production of reactive oxygen species (ROS) in cancer cells.Enhanced free radical scavenging systems (i.e., N-acetylcysteine) appear to cause lower ROS levels in both human and mouse mammary CSCs compared to more differentiated tumor cells [32].The intracellular levels of reduced glutathione (GSH) appear to be controlled by CD44, which interacts with a glutamate-cysteine transporter [33].Ataxia-teleangectasia mutated (ATM) and ataxia-teleangectasia mutated RAD3 (ATR) protein kinases are key sensors of DNA damage and drive the activation of checkpoint kinase 1 (CHK1) and 2 (CHK2) leading to DNA repair.These may contribute to therapy resistance and their pharmacological inhibition sensitized CSCs to radiotherapy [34].In stress conditions (hypoxia, ischemia, or nutrition deprivation), autophagic machinery may provide nutrients and energy [35,36].Ovarian CSCs exhibited higher basal autophagy than non-CSCs so their inhibition might reduce chemosensitivity [37].Hypoxia modulates gene expression mainly by promoting hypoxia-inducible factor-1α and 2α (HIF-1α, HIF-2α) or phosphatidiyl-inositol-3-kinase (PI3K/AKT).PI3K/AKT promotes HIF-1α/HIF-2α as a feedback loop.In pancreatic CSCs, the upregulation of VEGF, IL-6, Nanog, Oct4, and EZH2 support their invasion, migration, and angiogenesis [38,39].Although the molecular mechanism is unclear, ferroptosis is a recently described form of cell death.Iron cycling (from oxidized to reduced forms) may produce free radicals responsible for lipid peroxidation and DNA damage within cells.CSCs are typically distinguished as having a greater intracellular iron content [40,41].The growth of CSCs in ovarian cancer was reduced when their intracellular storage was reduced, indicating a connection between ferroptosis and CSCs [42].Further research might provide deeper insights into the ferroptosis-and autophagy-mediated resistance of CSCs.In different solid and hematological malignancies, CSC signaling pathways may be associated with chemoresistance.In neuroblastoma, the Wnt/β-catenin axis supports MDR1 gene expression.Notch and Hedgehog may contribute to temozolomide resistance in glioma CD133+ CSCs and to platinum resistance in ovarian CSCs [43].
Metabolism of Stem Cells and Cancer Cells
Glucose and glutamine are essential macromolecules for both pluripotent stem cells (PSCs) and cancer cells [52].Not only do they represent sources for ATP and NAD(P)H production, but their catabolism also generates precursors for de novo lipid, protein, and nucleic acid biosynthesis.The Warburg effect is a hallmark of all rapidly proliferating mammalian cells.Despite high oxygen levels, glucose is oxidized to lactate.Glycolytic flux decreases during PSCs differentiation, but is restored during the reprogramming of differentiated cells to the pluripotent state [53,54].It has been noted that transcription factors establishing pluripotency may directly regulate the glycolytic phenotype.Indeed, Oct4 binds the loci encoding for glycolysis enzymes, thereby promoting this pathway [55].In addition, several metabolic intermediates enable chromatin modifications, which in turn regulate gene expression programs involved in self-renewal and lineage differentiation.Nevertheless, the metabolism of CSCs remains poorly understood since they exhibit features of both normal stem cells and cancer cells [56,57].Contrasting results have been obtained when profiling CSCs metabolism in different cancer types.Interestingly, CSCs may undergo metabolic reprogramming in a context-dependent way (oxygen tension, pH, and glucose availability in TME) and in relation to genetic mutations and signaling pathways.Hence, CSC metabolism can switch from aerobic glycolysis to oxidative phosphor- Chemotherapy and radiotherapy mainly target proliferative cells.Thus, as long as cytotoxic stimuli occur, cells may adopt a transient state of slow proliferation rate known as drug tolerance.This condition may be reverted after the cessation of the stimuli.In contrast, environmental factors may stabilize this quiescent condition into a short-, medium-or longterm dormancy.Dormant cells are typically arrested in G0 or the G0/G1 transition [44,45].Evidence suggests that both extrinsic and intrinsic cues may induce cellular dormancy.The downregulation of the integrin receptor and downstream RAS-ERK/MAPK and PI3K-AKT signaling may drive cellular dormancy.Stress-induced pathways (i.e., the unfolded protein response) have also been implicated in cellular dormancy via p38/ERK [46].CSCs are capable of alternating between periods of rapid growth and dormancy (CSC plasticity).The identification and targeting of CSCs are further complicated by their plasticity.The dormant phenotype has emerged to be crucial for metastasis and therapy resistance in certain malignancies.Indeed, non-dividing dormant CSCs become insensitive to conventional antiproliferative drugs [47].Finally, long-term recurrence is caused by dormant tumor cells that have survived multiple therapeutic cycles.Microenvironmental cues or therapies may be responsible for cellular senescence in CSCs.It is a possibly reversable terminal cellular state because of growth arrest (cell cycle arrest in the G1 phase).Senescent cells are capable of secreting a series of cytokines (senescence-associated secretory phenotype (SASP)), which may support tumorigenesis and even stemness [48].
ICIs (anti-CTLA-4 and anti-PD-1 pathway) have been demonstrated to induce durable regression in a variety of tumors.Similar to conventional chemotherapy and radiotherapy, not every type of cancer or patient responds effectively to ICI, and CSCs may play a role in immunotherapy resistance.The immune privilege of CSCs sets them apart from differentiated tumor cells, but immunosuppressive pathways vary in a tissue-and cancer-dependent way [49].CD47-mediated phagocytosis is prevented by the overexpression of SIRPα; the interaction of CD24 with its receptor Siglec-10 limits both T cells and macrophage activities.In CSCs, MTDH and SND1 interact as a stress response: this impairs mRNA encoding components of the antigen-presenting machinery [50].T cell activity is further blocked by the increased expression of immune checkpoint molecules (PD-L1 and TIM3) upon PI3K/Akt/β-catenin axis activation.Bidirectional crosstalk occurs in the TME between CSCs and other cells.CSCs may activate cancer-associated fibroblasts (CAFs) to secrete hyaluronan and alteration in the extracellular matrix (ECM) may affect immune infiltration.The inhibition of Wnt, Notch, and Hedgehog has been approached to overcome CSCs' immune privilege.Indeed, melanoma progression has been reduced by anti-CTL4 therapy combined with Wnt signaling inhibition [51].
Metabolism of Stem Cells and Cancer Cells
Glucose and glutamine are essential macromolecules for both pluripotent stem cells (PSCs) and cancer cells [52].Not only do they represent sources for ATP and NAD(P)H production, but their catabolism also generates precursors for de novo lipid, protein, and nucleic acid biosynthesis.The Warburg effect is a hallmark of all rapidly proliferating mammalian cells.Despite high oxygen levels, glucose is oxidized to lactate.Glycolytic flux decreases during PSCs differentiation, but is restored during the reprogramming of differentiated cells to the pluripotent state [53,54].It has been noted that transcription factors establishing pluripotency may directly regulate the glycolytic phenotype.Indeed, Oct4 binds the loci encoding for glycolysis enzymes, thereby promoting this pathway [55].In addition, several metabolic intermediates enable chromatin modifications, which in turn regulate gene expression programs involved in self-renewal and lineage differentiation.Nevertheless, the metabolism of CSCs remains poorly understood since they exhibit features of both normal stem cells and cancer cells [56,57].Contrasting results have been obtained when profiling CSCs metabolism in different cancer types.Interestingly, CSCs may undergo metabolic reprogramming in a context-dependent way (oxygen tension, pH, and glucose availability in TME) and in relation to genetic mutations and signaling pathways.Hence, CSC metabolism can switch from aerobic glycolysis to oxidative phosphorylation (OXPHOS) [58].In response to hypoxia, glycolytic enzymes may be upregulated to switch to a more glycolytic phenotype, whereas CSCs rely mainly on OXPHOS in glucose-deprived conditions.Glutamine metabolism may supplement glucose by providing intermediates for nucleotides, amino acids, and lipids synthesis.Eventually, lipid metabolism is affected in CSCs.Higher amounts of lipid droplets and CD133 expression in CSCs have been associated with greater clonogenicity and tumor-forming capability [59].
Nephrogenesis and Signaling Pathways
Stem cells are known to be able to self-renew and differentiate into one or more types of mature cells.Adult stem cells are located in a specialized milieu known as the niche, and secreted effectors play crucial roles in controlling stem cell maintenance, proliferation, survival, activation and differentiation within the niche.Thus, surface receptors can be activated as well as intracellular signal cascades, which will ultimately modulate gene expression.Moreover, stem cell programming also depends on intercellular communication among stem cells, niche supporting cells, and their differentiated daughter cells [60].Previous studies on invertebrates (Drosophila) and mammalians provided deep molecular insights in stem cell signaling pathways (Wnt, Notch, Hedgehog, Hippo, Jak/STAT, BMP, etc.).Some of these signaling pathways control self-renewal and proliferation while others are involved in progenitor cell differentiation.Human nephrogenesis consists of three embryonic stages: pronephros, mesonephros, and metanephros, which will eventually develop into kidneys.The ureteric bud from the nephric duct migrates to the metanephric mesenchyme and invades it.Glial-derived neurotrophic factor (GDNF) and hepatocyte growth factor (HGF) released from the metanephric mesenchyme promote ureteric bud branching into the urinary system [61].Except for collecting duct epithelial cells, which come from the ureteric bud, nephron epithelial cells, myofibroblasts, and smooth muscle cells derive from the metanephric mesenchyme.While branching, the ureteric bud facilitates mesenchymal survival and differentiation by releasing a variety of factors such as Wnt proteins, fibroblast growth factors (FGFs), and leukemia inhibitory factor (LIF).The mesenchymal-to-epithelial transition (MET) is essential for the differentiation of mesenchymal cells (mesenchymal cap) into nephrons [62].Cap mesenchyme cells (expressing Osr1, Pax2, Wt1, Six2, and Cited1) represent the main source of renal progenitor cells.Six2 expression decreases as long as cells undergo MET and it is absent in mature kidneys [63].The Wnt9b/β-catenin axis promotes self-renewal and the differentiation of progenitor cells, while the Hippo pathway promotes kidney development.Hence, hypoplastic kidneys were noted in the case of YAP deletion (Hippo effector) [64,65].Embryonic transcription factors (such as Oct4) and renal developmental genes (Pax2, Six2, Sall1, and Wt1) are typically expressed by ARPCs, which lack mature kidney cell markers.
Adult Renal Stem/Progenitor Cells
Tissue-specific adult stem cells have been identified in many organs, including the kidneys, bone marrow, gastrointestinal mucosa, prostate, liver, brain, and skin.The fact that postnatal renal tubules may be repaired after tubular necrosis indicates the presence of self-replicating cells in the adult kidney [66].Research on chronic kidney disease (CKD) and the subsequent end-stage renal disease (ESRD) encouraged the isolation of adult stem cells and their potential role in tissue repair in the field of regenerative medicine to overcome dialysis and kidney transplantation [67][68][69].Mesenchymal stem cells (MSC) have a crucial function during nephrogenesis.The arrest of the differentiation of embryonic progenitor cells following the nephrogenic lineage results in children's Wilms tumor (WT), which has proved to be an effective biological system to study renal embryonic stem cells (ESC).In particular, WT cells shared high concordance with fetal kidneys in the expression of different markers (Pax2, Six1/2, NCAM, Fzd2, and Fzd7) [70].However, the identification of embryonic stem cell markers is severely limited by the complete exhaustion of embryonic renal stem cells during nephrogenesis.Approximately 2% of the adult kidney's total cells are remnant kidney ESCs, which are mostly found at the urinary pole of the Bowman's capsule [71].In turn, the adult kidney hosts two different pools of these cells: the resident adult renal stem/progenitor cells (ARPCs) and the circulating stem/progenitor cells.The latter group includes endothelial progenitor cells (EPCs), hematopoietic stem cells (HSCs), and bone marrow-derived MSCs (BMSCs) [72].As mentioned above, progenitor cells have a more limited capability for differentiation than stem cells.Two different subpopulations of ARPCs were initially identified: the first in the tubule/interstitium and the second in the Bowman's capsule.From the distal end of the proximal tubule, stem cells can migrate within this segment.In the Bowman's capsule, ARPCs may acquire podocytes (PDX marker) and lose stem markers (CD133 and CD24) while moving from the urinary to the vascular pole.ARPCs in the Bowman's capsule express CD106, unlike those in the tubules.CD133 + CD24 + CD106 + cells have a higher proliferation rate whereas CD133 + CD24 + CD106 − cells have a reduced self-renewal and differentiation capabilities.Therefore, CD106 − cells are thought to be in a more committed step toward differentiation [73].Progenitor cells in the Bowman's capsule also express kidney ESC and MSC (CD44) markers, as well as the stem transcription factors Oct-4 and Bmi-1.Apart from sharing CD133 and CD24 expression, these cells do not possess significant genomic differences [74,75].ARPCs exhibit clonogenicity, stem cell markers and the ability to differentiate into other types of cells, including tubular epithelium-like, adipocyte-like, neuron-like, and osteogenic-like cells.Morphologically, they have less cytoplasm, fewer mitochondria, a mature brush border and no baso-lateral invaginations.Finally, CD133 + CD24 + cells are even thought to derive to renal ESCs because of their similar phenotype.ARPCs proliferate after acute and chronic tubular damage such as in transplanted patients undergoing delayed graft function [76].They may express Toll-like receptor-2 (TLR2), which may be activated by various "damage-associated molecular pattern molecules" such as MCP-1 (monocyte chemotactic protein-1).MCP-1 expression is known to increase in the case of unilateral chronic ureteral obstruction [77,78].Upon activation, TLR2 promotes ARPCs proliferation to induce the secretion of interleukins (IL-6 and IL-8) and MCP-1 (autocrine signaling loop) [79].Over time, other stem/progenitor cells have been isolated.In the proximal tubules, Sox9 + Lgr4 + CD133 + cells may differentiate into proximal tubules, the loop of Henle, and distal tubules, but not into collecting ducts.They have brush border and epithelial polarity, but they lack Pax2 and MSC markers [80].In the S3 segment of the nephron, Pax2 + cells have been found.Typically, they show an immature phenotype as well as progenitor and mesenchymal markers.Additionally, they could migrate into injured areas and in vivo differentiate into mature tubular epithelial cells but not into the vasculature [81].Resident MSCs have been demonstrated to differentiate into mesodermal lineages, endothelial cells, and erythropoietin-producing fibroblasts when isolated from adult kidneys [82][83][84].
Renal Cancer Stem Cells
In recent years, there has been a growing interest in the identification of CSCs in renal cancer, their characterization, and comparison with the normal stem cell counterparts.Several markers have been studied in order to better identify RCC CSCs [85].Prominin-1 (CD133) is a glycoprotein expressed on the cell membrane of stem and progenitor cells within normal tissues, and it has been proposed as a putative CSC marker across different tumor types.CD133 + RCC cells did not show in vivo tumorigenic capability, but when co-transplanted with RCC cells, they enhanced tumor engraftment, vascularization, and growth; however, different results have been obtained subsequently [86,87].A wide variety of cells express CD24 on their surfaces, including hematopoietic cells, but it is typically expressed by progenitor and stem cells.When analyzing its role in RCCs, tumor grade, overall survival, and disease-free survival have been related to CD24 expression [88].In a previous study, a subpopulation of CD133 + CD24 + cells was isolated from ccRCC samples.Similar to their normal counterparts (ARPCs), these RCC-derived cells (RDCs) displayed self-renewal ability and clonogenic multipotency.Stemness-related elements (Nanog, Sox2, GATA4, and FoxA2) were confirmed while BMSC markers (CD90 and CD105) were not expressed.DNA microarray analysis was performed to better discriminate these RDCs from other cell types.It was observed that CTR2 (SLC31A2) characterized only RDCs so that neoplastic RDCs might be distinguished from normal ARPCs using CD133/CTR2 co-expression.In the presence of certain growth factors, RDCs might differentiate into osteocytes, adipocytes, or epithelial cells [89,90].CTR2 regulates copper influx through cell membranes and its trafficking from cellular storage.However, drug accumulation and cytotoxicity may be affected by chaperones and transporters that regulate copper homeostasis.In particular, CTR2 may alter the accumulation of platinum-containing drugs via macropinocytosis and then promoting RDCs chemoresistance [91].Xiao and colleagues further confirmed that CD133 + CD24 + cells isolated from RCC cells express stemnessrelated genes and assessed the Notch signaling pathway.Self-renewal potential, resistance to cisplatin and sorafenib, in vivo tumorigenicity, and invasion and migratory capability were typically recognized in these CD133 + CD24 + cells.Aberrant Notch pathway activation resulted in the upregulation of genes related to drug resistance (MDR1), self-renewal (Oct4 and Klf4), and anti-apoptotic activity (Bcl-2).These properties were partially lost upon blocking Notch pathways via exogenous (MRK-003) or endogenous (Numb) inhibitors since gene expression was reduced [92].
CD105 (endoglin) is a transmembrane glycoprotein that forms part of the transforming growth factor-β (TGFβ) complex.Its activation promotes Smad proteins, thus regulating various processes such as proliferation, migration, differentiation, and angiogenesis.Endoglin is typically expressed on endothelial cells where it is activated by TGFβ and hypoxia and silenced by tumor necrosis factor α (TNFα) [93].A subpopulation of CD105 + cells from RCC was shown to express mesenchymal markers (CD44, CD90, CD29, CD73, CD146, and vimentin), embryonic stem cell markers (Oct3/4, Nanog, Musashi, and Nestin), and the embryonic renal marker Pax2, but they lacked differentiative epithelial markers (i.e., cytokeratin-CK).Epithelial, endothelial, and CD105 − cells may arise from CD105+ CSCs differentiation.In SCID mice, a modest number of cells were able to produce serially transplantable carcinomas (the same histological pattern for the origin of tumor) with a large proportion of differentiated CD105 − cells and a small fraction of CD105 + population [94].In turn, CD105 + , CD44 + , as well as CD105 − , CD44 − , and CD105 − /CD44 − cells were able to give rise to tumors when injected into mice [95].Additionally, CD105 + CSCs are able to secrete exosomes and microvesicles containing mRNAs (VEGF, FGF, MMP2 and 9) promoting angiogenesis and metastatic niche formation as well as the impairment of T cell activation and dendritic cell activation [96][97][98].
RNA alternative splicing give rise to different isoforms of CD44, which are then involved in diverse biological processes, such as cell-cell interaction, cell adhesion, proliferation, migration, differentiation, and angiogenesis.Glycosaminoglycan hyaluronan (HA) represents the main ligand of this transmembrane glycoprotein, whereas other extracellular matrix (ECM) components may interact with CD44 (i.e., collagen, growth factors, and metalloproteinases).Its binding promotes multiple signaling pathways such as TGFβ, MAPK, PI3K/AKT and receptors tyrosine kinases (RTKs), thus encouraging cell proliferation, survival, invasion, and CSCs homing in different tumors [99,100].Wnt/β-catenin and protein kinase C (PKC) pathways may be modulated by CD44 [101].CD44 has been stated to modulate CSC niche owing to its interaction with ECM elements.Since CD44 expression was related to Fuhrman grade, primary tumor stage, histological subtypes, and poor patient prognosis, it may represent a potential marker for CSCs in RCC [102,103].
CXC chemokine stromal cell-derived factor 1 (SDF1 or CXCL12) selectively binds to the CXC-chemokine receptor 4 (CXCR4 or CD184) [104][105][106].Downstream effectors include PLC/MAPK, PI3K/AKT, JAK/STAT, and the Ras/Raf pathways.Several biological processes are activated, such as proliferation, survival, migration, stemness, and angiogenesis.In renal and other solid tumors, CXCR4 + cells migrate towards tissues expressing high levels of SDF1 to metastasize.CXCR4 + cells from RCC cell lines have already shown the high expression of stem cell-associated genes (Oct4, Sox2, and Nanog) as well as resistance to therapy (TKI).Tumor growth was impaired by blocking CXCR4 with ADM3100 or small interfering RNA (siRNA).Hypoxia and the loss of pVHL were observed to increase CXCR4 and MMPs expression in recent studies.CD133 + CXCR4 + cells were noted to locate in perinecrotic areas of RCC where they expressed HIF1α [107].In addition, hypoxia promoted the tumorigenicity of CD133 + CXCR4 + cells and HIF2α promoted the expansion of CXCR4 + CSCs [108,109].Marginal CXCR4/CD105 co-expression was confirmed; therefore, CD105 + cells may even represent a major CXCR4 subpopulation [110].Perhaps in association with another marker, CXCR4 might be investigated as a possible CSC marker in RCC.Fendler et al. [111] performed the transcriptional profiling of CXCR4/MET/CD44 + cells isolated from ccRCCs specimens.These authors showed that a greater number of CXCR4/MET/CD44 + cells was associated with higher pathological stage and Fuhrman grade, with venous and lymphatic invasion, and distant metastases.The analysis of gene and protein expression demonstrated that Wnt and Notch signaling was activated, and that their inhibition blocked these CSCs.Beta-catenin and Jade1 are stabilized so as Notch signaling is activated owing to pVHL loss [111].Notch activation in RCC CSCs promotes CXCR4 upregulation then encouraging SDF-1-induced chemotaxis [92].
Acetaldehyde dehydrogenase 1 (ALDH1) takes part in alcohol metabolism in the hepatocyte cytoplasm, and is known to be crucial for cellular differentiation, proliferation, motility, embryonic development, and organ homeostasis [112].Indeed, in healthy human stem cells, ALDH1 may also convert the retinal to retinoic acid (RA).Upon activating retinoic acid receptor (RAR), retinoic acid X receptor (RXR), and nuclear hormone receptor peroxisome proliferator activated receptor β/δ (PPAR β/δ), RA will modulate the expression of several genes.In cancer, metabolism reprogramming, DNA repair and stem-like features depend on different pathways linked to ALDH1 (RA, ROS, USP28/MYC, HIFα/VEGF, Wnt/β-catenin).Its prognostic significance in RCC remains unclear, although it has been regarded a reliable marker of CSCs in several solid cancers.For instance, it may recruit myeloid-derived suppressor cells (MDSCs) in the TME of breast cancer, thus limiting cancer immunity.Chemosensitivity is increased when inhibiting its enzymatic activity [113,114].
DnaJ homolog, subfamily B, member 8 (DNAJB8) is a member of HSP40 family of the heat shock proteins.It is typically expressed in postmeiotic sperm and spermatid and is suggested to regulate androgen signaling during spermatogenesis.Chaperones prevent cytotoxic stress by controlling protein folding.DNAJB8 might have oncogenic potential since it strongly suppresses misfolded protein aggregation.This HSP plays a role in the maintenance of RCC CSCs as its targeting fully blocked tumor formation in mice, suggesting that it may be a target for immunotherapy [115,116] (Table 1).Tumorigenicity [115] MicroRNAs (miRNAs) regulates gene expression and their roles in CSCs have been elucidated for some cancers.In RCC, increased sphere formation was shown after the inhibition of miR17 [117].
The Hoechst exclusion assay, which was first introduced in 1996, is another functional technique to identify CSCs by using a family of blue dyes called "Hoechst stains" (bisbenzimides used to stain DNA).Stem cells have a high efflux capacity, which allows them to remove Hoechst dye from the intracellular space and appear as a side population (SP).Using cell separation techniques, stem cell markers and the Hoechst exclusion assay can both be combined to enhance CSC in a biological sample.However, Hoechst staining may be excluded by some differentiated tumor cells that express high levels of ABCG2 and ABCB1 [118].
Hypoxia is thought to play a central role in the maintenance of normal embryonic and adult stem cells.Low oxygen pressure in the niche may reduce ROS-associated genotoxic oxidative damage, therefore promoting self-renewal and inhibiting differentiation.Because of mutations in VHL, which are carried by most of ccRCC, the constitutive activation of HIFs defines a pseudo-hypoxic phenotype.Transcriptomic analysis in RCC CSCs lead to the sequencing of different long non-coding RNA (lncRNAs).Hypoxia has been demonstrated to reduce androgen receptor (AR) in RCC, which may, in turn, regulate lncTCFL5-2 expression.In particular, lncTCFL5-2 seems to be enhanced by knocking down AR.The lncTCFL5-2/YBX1 complex may translocate to the nucleus where target genes are promoted, such as Sox2, CD133, and CD24 [119].
Single-cell RNA-seq analysis was performed in collecting duct renal cell carcinoma (CDRCC).EZH2 was shown to be significantly overexpressed in the CSC subpopulation to control its gene expression and self-renewal property.From this study, PARP, PIGF, HDAC, and FGFR inhibitors emerged as potential candidates for targeting CSCs [120].
Zhou et al. [29] clustered ccRCCs specimens into three subgroups based on stem/progenitor signatures.Significant antitumor immune infiltration (M1 macrophages, activated dendritic cells and CD4/CD8 T cells), enhanced HLA-I molecule expression, and cytolytic activity was associated with increased stemness signature.In contrast, high-stemness subgroup showed increased immune checkpoint molecules, cancer-associated fibroblasts (CAFs), MDSCs, and regulatory T cells (Tregs) with robust immunosuppressive properties.In this scenario, these authors even hypothesized that a stemness-related gene signature may be useful to predict anti-PD-1 responses [29].Over time, different agents have been assessed to target RCC CSCs.In response to IL-15 (a regulator of kidney homeostasis), CD105 + CSCs lost their capacity to initiate tumors, to express stem cell markers, and to form spheres.However, they also gained polarity, transmembrane resistance, epithelial markers, vinblastine, and paclitaxel sensitivity [121].PI3K/Akt/mTOR axis has already been established to play an essential role in CSC biology and mTOR inhibitors have been proved to eradicate CSCs in different human cancers (neuroblastoma, nasopharyngeal, colon, and pancreatic cancers).Further studies are needed to confirm whereas combination therapies using mTOR inhibitors are indeed effective in targeting both renal cancer cells and CSCs [122,123].Bone morphogenetic protein-2 (BMP-2) encodes a member of the TGF superfamily and is known to regulate different cellular processes such as cell differentiation, proliferation, morphogenesis, survival, and apoptosis.Depending on the cancer type, BMP-2 has been shown to either drive or prevent tumor growth.BMP-2 inhibits the tumor-initiating ability of renal CSCs and promotes bone formation in vivo.In particular, BMP-2 reduces the expression of embryonic stem cell markers and renal markers in CSCs (Oct3/4A, Nanog, and Pax-2), and increased the expression of osteogenic markers (Runx2 and collagen type I) [124].Low molecular weight inhibitors fumitremorgin C and tryprostatin, as well as monoclonal antibodies, cyclosporin A, VX710, or tariquidar, have been attempted to eradicate CSCs exploiting ABC transporters [125].It has also been suggested to use monoclonal antibodies or inhibitors against their surface markers.CD133 has been used as a target for the treatment of glioblastoma, lung cancer, and liver cancer.Many studies have shown that salinomycin is able to kill CSCs in a variety of human cancers, including gastric cancer, lung adenocarcinoma, osteosarcoma, colorectal cancer, squamous cell carcinoma, and prostate cancer.This result was most likely achieved by interfering with ABC transporters, Wnt/β-catenin signaling, or additional CSC pathways [126].Nanoparticles have been introduced to target CSCs since they are carriers for chemotherapeutic or nucleic acid drugs that accumulate at tumor sites.Combinations of paclitaxel/salinomycinloaded PEG-b-PCL polymeric micelles have been designed for breast cancer treatment.Recent studies showed that salinomycin targeted CSCs whereas paclitaxel targeted most cancer cells, producing a higher antitumoral action in vitro and in vivo than either agent alone [127,128].This combination therapy may represent an effective strategy to improve the treatment of solid tumors as it acts in the eradication of both cancer cells and their stem counterpart.
Conclusions
Most of the available cancer treatment strategies target somatic tumor cells rather than CSCs, which are assumed to be responsible for tumor recurrence and metastasis (Figure 2).
The lack of effective putative markers is the consequence of the conflicting results so far reported in the literature.In addition to not being specific between tumor types, it has been postulated that some biomarkers may be transient as they may become obsolete at particular stages of tumorigenesis.Optimizing renal CSC isolation and characterization techniques will be crucial for the development of effective therapies against CSCs.The development of agents targeting CSC-signaling specific pathways and not only surface proteins may ultimately become of utmost importance for patients with RCC.
both cancer cells and their stem counterpart.
Conclusions
Most of the available cancer treatment strategies target somatic tumor cells rather than CSCs, which are assumed to be responsible for tumor recurrence and metastasis (Figure 2).The lack of effective putative markers is the consequence of the conflicting results so far reported in the literature.In addition to not being specific between tumor types, it has been postulated that some biomarkers may be transient as they may become obsolete at
Figure 1 .
Figure 1.Summary of hypothesis leading to the origin of cancer stem cells.Self-renewal depends on asymmetric division of both normal and cancer stem cells.BMSC: bone marrow-derived stem cell.
Figure 2 .
Figure 2. CSCs appear to be resistant to conventional therapies, thus promoting relapses and metastases.Development of CSCs-targeted agents may facilitate tumor elimination.
Figure 2 .
Figure 2. CSCs appear to be resistant to conventional therapies, thus promoting relapses and metastases.Development of CSCs-targeted agents may facilitate tumor elimination. | 7,361.2 | 2023-08-24T00:00:00.000 | [
"Biology",
"Medicine"
] |
Some problems in multifractal spectrum computation using a statistical method
In practical calculations of a multifractal spectrum only limited data are available due to data overflow, measurement errors and limited calculation time. An effective method to reduce the data overflow is proposed. Some parameters are introduced to evaluate the incomplete degree of a partial multifractal spectrum. Quantitative expressions of the evaluation parameters on the partial multifractal spectra calculated using a statistical method for the Cantor sets p/1−2p/p and p/0/1−p are derived. Approximate evaluation parameters of the partial multifractal spectra calculated using the statistical method are estimated for two examples with a random fractal character. The characteristic of the evaluation parameters for the above examples is discussed in detail.
Introduction
Since the concept of multifractals was introduced in 1970s by Mandelbrot, it has been used to describe and distinguish varieties of complicated figures, systems and processes in nature. Now, it has been applied to more and more fields [1] and the theory has been developed quickly.
Many algorithms have been developed to measure the multifractal dimensions [2]- [13]. Using these standard algorithms, we can derive the Hentschelt-Procaccia fractal dimension D(q) [2], the f(α) spectrum and the other related fractal measures for precise data.
These algorithms have been tested by introducing some 'noises' (or 'errors') on the data of a system with a regular fractal characteristic. The advantage of different algorithms has been discussed intensively, such as consequences of noise on the effectiveness of fractal analysis algorithms [3,4], the convergence of the box-counting fractal [3], the comparison of the numerical correlation integral algorithm with the Badii-Politi algorithm [13] by applying the algorithms to Euclidean point sets, Koch asymmetric snowflake and the imprecise data produced by perturbing the x and y coordinates of the fractal points by 'random values' and so on.
Because of the restriction on practical measurements, we can obtain an incomplete multifractal spectrum in the finite scaling range only. The fractal characteristic in a finite scaling range has been studied in detail. However, to the best of our knowledge, quantitative discussion on the incomplete characteristic of a multifractal spectrum has not been reported in the literature so far.
A method to calculate multifractal spectra was proposed by Halsey in 1986 [14]. By means of it, the multifractal spectra for quantities or states of a system with a random fractal character can be calculated from measured data. It is so easy and simple that it has been widely used to compute varieties of multifractal spectra. It is called the statistical method in the present paper for the sake of simpleness.
The wavelet transform modulus maxima (WTMM) method [15] is a well-known method to investigate the multifractal scaling properties of fractal and self-affine objects in the presence of nonstationarities.
The detrended fluctuation analysis (DFA) method is a widely used technique for the determination of (mono) fractal scaling properties and the detection of long-range correlations in noisy, nonstationary time series. The multifractal-DFA (MF-DFA) method [16], which is based on a generalization of the DFA, is produced for the calculated multifractal characterization of nonstationary time series.
Theoretically, it can be demonstrated that since the range of a special parameter in the practical computation cannot be taken as infinite, a complete multifractal characteristic cannot be gotten using any one among the three methods (the statistical, the WTMM and the MF-DFA methods).
In the statistical method, the special parameter q is used to calculate the function χ q (l) [14]; in the WTMM method the special parameter is used to calculate the function Z(q, s) [16]; and in the MF-DFA method the special parameter is used to calculate the function F q (s) [16], where l and s are scale parameters.
The aim of the present paper is an attempt to make the first step to solve the quantitative description of an incomplete multifractal spectrum. Obviously, the completeness of a calculated multifractal spectrum is important. The more complete the multifractal spectrum, the more actual the description. By using these more complete spectra, we can find out the rule among complicated systems more easily.
A method of computing multifractal spectra
The calculation steps of the statistical method are briefly described as follows [14,17]. It is well known that the introduced multifractal is to study the singular distribution properties of some quantities or states of a system. At first, a suitable space dimension is selected, which is 1, 2 or 3 according to the fractal character of the actual quantity. Then, the space is divided into many normalized boxes of size l (l 1 and the first one is l = 1). p i (l) is the distribution probability of a quantity or a state of a system studied in box i. Thus, the multifractal is described as where α is the singularity of the probability subset, N α (l) is the number of boxes of size l with the same probability and f(α) is the fractal dimension of subset α. The multifractal spectrum f(α) can be calculated by the following function: where q is the moment order, −∞ q ∞. If the object studied has a characteristic of multifractal, then we have where The χ q (l) with the same q but different l can be calculated and the curve of ln χ q (l) − ln l is plotted. For a fractal set, the curve is a straight line. Therefore, τ(q) can be obtained from the slope of the line In practical calculation, the range of l corresponding to the linear part of the curve is called as the scaling range of the multifractal. Then, the α subset can be obtained from equation (5) by solving it for different q. The expressions for α and f are α = dτ dq (7) and It is known from expression (4) that only one p in the sum plays the main role and its corresponding τ(q) is the smallest among all the p i . Consequently, we get the following results: • The subset when q = 0 corresponds to the one with the maximum fractal dimension f max , and the related q and α are denoted as q 0 and α 0 , respectively. • The subset as q < 0 corresponds to that with α > α 0 , and the smaller the q, the larger the α. • The subset for q > 0 corresponds to that with α < α 0 and the larger the q, the smaller the α.
Finite scaling range
In practical measurements, we cannot get sufficient data that distribute in many orders of magnitude of the scaling range. It is obvious that a phenomenon occurs only during a finite time and in a finite space, and the accuracy of data is also limited by the resolutions of the instruments used. Considering that real structures exhibit self-similarity only over a finite range of scales, Berntson and Stoll [18] proposed a technique by which they estimated only over a statistically identified region of self-similarity and introduced the finite scale-corrected dimension (FSCD). Avnir et al [19] surveyed a total of 96 papers published in all Physical Review journals (Phys. Rev. A to E and Phys. Rev. Lett.) from 1990 to 1996, in which the experimental data were reported to exhibit a fractal character. The results indicated that the scaling range of the experimentally declared fractality was extremely limited, which was centred in 1.3 orders of magnitude and spanned mainly between 0.5 and 2.0. As the distribution probability decreases due to the limit of experimental conditions, some unusual phenomena may appear in the multifractal spectra. Wang and Wu [20] analysed the problem in detail.
Incomplete multifractal spectra
According to the fractal theory, q should range from −∞ to +∞ to get a complete multifractal spectrum. However, this condition cannot be satisfied.
Data overflow
In fact, the data overflow restricts the range of q in the calculation. On a computer, the range of a double-precision number is from 10 −308 to 10 308 . Therefore, p q i in expression (4) may result in a data overflow, as q is 100. Now, a method is introduced to improve this situation greatly.
The key point is to indicate each positive real number x in a special structure described by two parameters as follows. Let x = a × 10 b , where b is an integer and 1 a < 10. A positive x can be transferred to the new structure as follows: • If x 1, b = int(log 10(x)) and a = x/pow (10, b).
The new structure can be obtained for different operations easily.
The standardization process is as follows. Let It should be noticed that the a q 1 may still be overflowed as |q| is large enough. The following method is helpful to reduce the overflow probability of the p q i in expression (4).
where A is an adjustable positive number and should be satisfied with the condition, 10 −308 < p A i < 10 308 , j is a natural number and q = q − j × A. Standardize p A i and p q i . The calculation range of q increases greatly by using j times of multiplication instead of one time of standardization p q i except that the p i is overflowed. However, it leads to the decreasing computation speed. Although the calculation range of q can be taken wider and wider, it is still finite. Therefore, the evaluation of the partial multifractal spectra is very important.
Effect of measurement errors
Let σ τ indicate the standard error of τ(q), which arises from the fitting line of the ln χ q (l) ∼ ln l curve. If q is very small, α = dτ/dq ≈ τ/ q. So, the standard error of α is approximate to that of τ. Let the errors of two near τ be approximately equivalent. On the basis of equations (7) and (8), we have Obviously, σ f increases rapidly with the increasing |q|. Therefore, the range of q should be restricted to get exact results. Obviously, only the data whose errors satisfy our requirements are useful. Consequently, the errors from measurement bring us the new restriction on the calculated multifractal spectra.
Evaluation parameters
For multifractal spectra with the shape of a bell or a hook, some parameters are defined as follows.
The α corresponding to the maximum fractal dimension f max is denoted as α 0 . The part with α < α 0 is denoted as section I and that with α > α 0 , section II. The f(α) corresponding to α min and α max is represented as f 01 and f 02 , respectively. Let For a complete multifractal spectrum, f 01 , f 02 , α 01 and α 02 are used to denote the changes of f(α) and α in sections I and II, respectively and α 0 is the total change of α.
The calculated minimum and maximum α are referred to as α 1 and α 2 , respectively; obviously, α 1 < α 0 < α 2 . The symbols, f 1 and f 2 , indicate the f(α) corresponding to α 1 and α 2 , respectively. Let For a partial multifractal spectrum, f 1 , f 2 , α 1 and α 2 are used to indicate the changes of f(α) and α in sections I and II, respectively, and α is referred to as the total change of α.
The parameters α 1 / α 01 , α 2 / α 02 , α/ α 0 , f 1 / f 01 and f 2 / f 02 are called as the evaluation parameters. A partial multifractal spectrum can be evaluated using these parameters. We shall discuss the evaluation parameters of the partial multifractal spectra for two Cantor sets and two examples of the random fractals calculated by the statistical method in detail.
Multifractal spectrum computed from generator
For this Cantor set, the original region is divided into three segments with the same size in each step, but the mass distribution probability from the left to the right segment is p, 1 − 2p and p, respectively. Hence, the Cantor set is expressed as the Cantor set p/1 − 2p/p. The method of computing the multifractal spectrum for different Cantor sets has been described in the literature [14,17,20]. We will briefly summarize as follows.
After k steps, the total segments with a size of l = ( 1 3 ) k are 3 k . The mass probabilities form a distribution set {p i }, The number of intervals with p i is Let m = kξ where 0 ξ 1. As k → ∞ (or l → 0), ξ will be continuous. Therefore, Taking the logarithm, we have Since According to the Sterling formula n! ≈ √ 2πn(n/e) n , we have From expression (15), it is known that f takes its maximum f max = 1 as ξ = 2 3 .
It can be seen from equation (13) that If ξ 01 , ξ 02 , ξ 1 and ξ 2 are known, then f 01 , f 02 , f 1 and f 2 can be found by expression (15), and q 1 and q 2 are given by expression (21). As f max = 1, then we have On the basis of expression (21), the following expressions are obtained: and 6. Cantor set p/0/1 − p For this Cantor set, the calculation process and the results of the multifractal spectrum have been given in detail in the literature. Hence, we shall just give the expressions for α and f(α) deduced using these two methods directly. Then, we shall derive the expressions for the evaluation parameters using our discussion mode. For this Cantor set, the original region is divided into three equivalent segments with p, 0 and 1 − p of the mass distribution probability from the left to the right segment, respectively, in each step. Similar to the Canter set p/1 − 2p/p, the following expressions are obtained from the generator: where 0 ξ 1. For the statistical method, we have and where −∞ q ∞. Let Thus, expression (27) is same as (29) and expression (28) is equal to expression (30).
The effect of q on the completeness of calculated results
For the two Cantor sets, the quantitative expressions for the evaluating parameters are derived. The expressions show that the range of q in calculating the multifractal spectra greatly affects the evaluation parameters.
The relationship between p and q
From expressions (22), (23), (15), (33), (34) and (28), it is seen that if ξ 1 and ξ 2 are definite, the values of the evaluation parameters should be definite for the above two Cantor sets. However, from expressions (24)-(26) and (35)-(37), it can be seen that the required calculation range of q is related not only to ξ 1 and ξ 2 but also to p.
Let p min and p max indicate the minimum and the maximum distribution probability in the generators of the Cantor sets, respectively. For the Cantor set p/0/1 − p, For the Cantor set p/1 − 2p/p, It can be seen from expressions (21), (24)-(26), (32) and (35)-(37) that all q, q 1 , q 2 and q are proportional to |ln(p max /p min )| −1 . It means that when the partial multifractal spectra have the same evaluation parameters (or have the same described region relative to the complete spectrum), the smaller the difference of the distribution probability in the generators, the wider the range of q in calculating the multifractal spectra using the statistical method. |ln(p max /p min )| −1 is called the inhomogeneous effect factor. The two Cantor sets have the same expressions for α/ α 0 and q, i.e. α/ α 0 = ξ and q = ln p max p min Therefore, if ξ 1 , ξ 2 and p max /p min take the same values for the Cantor sets when calculating the partial multifractal spectrum using the statistical method, α/ α 0 and q will take the same values as well. The multifractal spectra for the Cantor sets 0.3/0.4/0.3 and 0.333/0.334/0.333 are shown in figures 2(a) and (b), respectively. The solid lines indicate the complete multifractal spectra obtained from the generators and the filled circles represent the results calculated using the statistical method. It can be seen from figure 2 that the required calculation range of q to get the complete multifractal spectrum is quite different for the two cases. There is little difference between the complete multifractal spectrum and the partial multifractal spectrum as |q| 20 in figure 2(a). In contrast, the partial multifractal spectrum for |q| 100 in figure 2(b) only describes a very small part of the complete multifractal spectrum. The large difference in the required calculation range of q comes from very different value of p max /p min for the two cases, which is 4 3 in figure 2(a) and 1.003003 in figure 2(b), respectively. The inhomogeneous effect factor is 3.476 and 333.5 for the two cases, respectively. The ratio of the inhomogeneous effect factors for the two cases is about two orders of magnitude. It leads to that the difference between two calculation ranges of q is also about two orders of magnitude. Sun et al [21] discussed the qualitative relationship between the inhomogeneous degree and q in the saturated regions using examples in the two Cantor sets. They found the smaller the inhomogeneous degree, the larger the q.
Approximate evaluation parameters
For a regular fractal, α min , α max , f 01 and f 02 are obtained from the generator. While for a random fractal, α min , α max , f 01 and f 02 are approximate. On the basis of the practical situation, two approaches can be used to estimate α min and α max . However, the approximate f 01 and f 02 cannot be found out if the errors of f(α) are too large.
Approach 1: In fact, α and the f(α) tend to be saturated when |q| is large enough. In the saturated regions, α is selected as the approximate α min and α max , and q relative to α min and α max are denoted as q max and q min , respectively.
Approach 2: Let l k represent the minimum of size l, and p k min and p k max be the minimum and the maximum distribution probability of the quantity studied, respectively as l = l k . Then, the approximate α min and α max can be written as α min = ln p k max /ln l k and α max = ln p k min /ln l k , respectively.
For a random fractal, we cannot get quantitative expressions to describe the range of α and f(α) as a function of q 1 and q 2 in a finite range of q using the statistical method. Moreover, we cannot find quantitative expressions about the required range of q related to the inhomogeneous effect factor also. However, the approximate evaluation parameters of a partial multifractal spectrum can be obtained. On the basis of expression (1), the approximate α 0 indicates the inhomogeneous degree of the fractal object studied. From expression (13) (for the Cantor set p/1 − 2p/p) and expression (27) (for the Cantor set p/0/1 − p), we get α 0 ∝ [ln(p max /p min )]. Therefore, the qualitative relationship between the inhomogeneous degree and the required range of q can still be studied.
Example 1: Hang Seng index in the Hong Kong stock market
The data are taken from the Hang Seng index in the Hong Kong stock market in a time period of 30 continuous trading days except holidays from 3 January 1994. There are totally 6240 indexes. Its multifractal spectrum is calculated using the statistical method. Figure 3(a) shows the α-q curve with |q| 300. In the two saturated regions shown in figure 3(a), q max = 150 and q min = −150 are selected using approach 1. The corresponding α is taken as the approximate minimum and maximum α of the complete multifractal spectrum, respectively. Figure 3(b) represents the f(α)-α curve (filled squares) and the σ τ -α curve (filled circles), as |q| 150.
Example 2: A photograph of a soybean leaf
The data are taken from the grey-scale values of a section of a Jindou 11 leaf's photograph. The Jindou 11 is one species of soybean, which originates from Shanxi province of China. There are 840 × 840 pixels. The multifractal spectrum is calculated using the statistical method. Figure 4(a) shows the α-q curve as |q| 30. q max = 10 and q min = −20 are selected from the two saturated regions shown in figure 4(a), respectively, according to approach 1. Figure 4(b) displays the f(α)-α curve (filled squares) and the σ τ -α curve (filled circles) as −20 q 10.
Discussion
Now, we arrive at the following conclusions by comparison. (1) In the two examples, the evaluation parameters depend on errors. It can be seen from figures 3(b) and 4(b) that except for very small σ τ near q = 0, the σ τ increases rapidly with increasing |q|, and both σ α and σ f also increase with σ τ according to equation (9). As 2q 2 1, σ f ≈ √ 2qσ τ . σ f is proportional to |q| and increases significantly. For a partial multifractal spectrum, only the data whose error satisfies the requirement are useful. If the conditions, i.e. σ τ 2.0 × 10 −4 and σ α 2.8 × 10 −4 are fulfilled, q 1 and q 2 are restricted to -37 and 39 (−37 q 39) in figure 3(b) and −2.0 and 6.2 (−2.0 q 6.2) in figure 4(b), respectively. Therefore, using approach 1, the approximate evaluation parameters α 1 / α 01 and α 2 / α 02 are 53 and 50% for example 1 and 92 and 25% for example 2, respectively. For example 1, the corresponding σ f 1.5 × 10 −2 , but for example 2, σ f 2.5 × 10 −3 . Obviously, the calculated results are more reliable and give more information in section I (α < α 0 ) for example 2 when compared with those for example 1.
(2) The approximate α 0 for example 2 (the soybean leaf) is over 10 times as large as that for example 1 (the Hang Seng index), i.e. the inhomogeneous degree is smaller for the Hang Seng index than that for the soybean leaf. It is clear from figures 3(b) and 4(b) that if the evaluated parameters in the partial multifractal spectra are the same for the two examples, the required range of q is much wider for the Hang Seng index when compared with that for the soybean leaf. It is similar to the Cantor sets p/1 − 2p/p and p/0/1 − p.
Summary
A set of the evaluation parameters is introduced to describe partial multifractal spectra with the shape of a bell or a hook. The incompleteness of the multifractal spectra calculated using the statistical method comes from the data overflow in the computation, the measurement errors and the finite time of computation. An effective method to reduce the data overflow is proposed. Quantitative expressions of the evaluation parameters for the Cantor setsp/1 − 2p/p and p/0/1 − p are deduced and discussed. For the random fractals, two approaches of computing the approximate evaluation parameters are introduced and two examples are given. For the partial multifractal spectra with the same evaluation parameters using the statistical method, the required range of q in the computation is closely related to the inhomogeneous degree of the fractal object. The smaller the inhomogeneous degree, the larger the required range of q. The α-q and σ τ -α curves are helpful in selecting a reasonable range of q in the computation. | 5,605.6 | 2004-07-01T00:00:00.000 | [
"Physics"
] |
Artificial Visual Electronics for Closed‐Loop Sensation/Action Systems
Artificial visual electronics that mimic the structure and function of human eyes can be a powerful tool to provide visual feedback in the closed‐loop sensation/action systems, which can be beneficial to achieve sophisticated functions in a precise and efficient way. Herein, how artificial visual electronics work in the closed‐loop sensation/action systems, mimicking the human eyes for human behaviors, followed by how artificial visual electronics are utilized in various fields are introduced. To fully mimic the human eyes, how to achieve the structural similarity of artificial visual electronics with eyeballs is highlighted, and focused on the key component, i.e., retina‐like 3D light‐detecting imagers. When combined with the machine‐learning method, such retina‐like 3D imagers are expected to significantly benefit the closed‐loop sensation/action systems.
Introduction
Sophisticated human behaviors can proceed naturally and smoothly because the motor systems work in coordination with the sensory systems, where closed-loop feedback continuously and efficiently happens among these systems in real time. [1] Among all the sensory systems through which brains obtain information from the environment, the eye is the predominant channel for most animals on this planet, including humans who obtain %90% of information via eyes. [2] In addition, the eye is a special organ because the retina is sensitive to light, meaning that it can deliver electrical potential to nerves under the light stimulus, whereas no other organs or nerves without modified genes have the light-detecting capability as far as we know. [3,4] Mimicking the light-detecting capability of eyes in electronics enabled the invention of artificial visual electronics. Such systems have been utilized in various commercial products, specifically referring to the image-detecting systems for smartphones, cameras, video recorders, autodriving vehicles, and security systems, all of which are indispensable to reshape the lifestyle and society in the Third Industrial Revolution. In the next industrial revolution for the new age of digitization and "smarter" networking of production systems, the closed-loop sensation/action systems will play an important role. [5,6] Considering the importance of eyes for humans, we believe that artificial visual electronics should be one of the priority subjects to be investigated in the closed-loop sensation/action systems.
In this perspective, we first discuss the importance of visual feedback in the closed-loop sensation/action systems for controlling human behaviors. Then, we summarize how artificial visual electronics have benefited society in the closed-loop sensation/ action systems. Furthermore, we introduce artificial visual electronics that fully mimic the structure and function of human eyes. Since the light-detecting elements of the retina locate at a 3D eyeball, we mainly focus on reviewing the fabrication techniques that can be utilized to build the retina-like 3D imagers. Finally, we discuss other promising techniques for artificial visual electronics, and the importance of combining the machine-learning method and artificial visual electronics in the future.
Closed-Loop Sensation/Action Systems with Visual Feedback
Humans rely on the cooperation of the sensory and motor system to conduct physical activities in a closed-loop configuration ( Figure 1a). [7,8] For instance, when a human tries to shake hands with another, the receptors on the hand first detect signals once he/she touches the hand of another human through both tactile perception and temperature perception. Then, the detected signals are delivered to the central nervous system via the afferent nerve, and a decision is made and sent to the motor system, which leads to the proceeding of the handshaking. To make sure that the handshaking happens correctly, visual feedback always happens in this process. The human instinctively checks the hand that he/she is holding via eyes, and sends visual feedback to the brain to trigger an internal decision whether to shake the DOI: 10.1002/aisy.202100071 Artificial visual electronics that mimic the structure and function of human eyes can be a powerful tool to provide visual feedback in the closed-loop sensation/ action systems, which can be beneficial to achieve sophisticated functions in a precise and efficient way. Herein, how artificial visual electronics work in the closed-loop sensation/action systems, mimicking the human eyes for human behaviors, followed by how artificial visual electronics are utilized in various fields are introduced. To fully mimic the human eyes, how to achieve the structural similarity of artificial visual electronics with eyeballs is highlighted, and focused on the key component, i.e., retina-like 3D light-detecting imagers. When combined with the machine-learning method, such retina-like 3D imagers are expected to significantly benefit the closed-loop sensation/action systems.
hand. This visual feedback is extremely important to form the closed-loop configuration for precisely controlling human behaviors.
Mimicking the natural closed-loop sensation/action systems is a highly promising strategy for artificial systems to precisely and efficiently achieve complicated functions. Such closed-loop concept has been widely utilized in various fields, such as drug delivery, [9] neural electronics, [10][11][12] human-machine interface, [13][14][15] optogenetics, [16] and so on. Adding artificial visual electronics into the closed-loop sensation/action systems will be even more powerful since it is the most direct way to gather information from the surrounding environment. In fact, there are already some reports that utilized artificial visual electronics in the closed-loop sensation/action systems. The ARGUS II device (Figure 1b), an implantable artificial visual electronic device to replace the human retina, was developed to help patients suffered from hereditary retinal diseases to fight blindness. [17] With the ARGUS II device, patients have showed an enhanced performance in the orientation and mobility tasks, including object localization, motion discrimination, and discrimination [17] Copyright 2013, American Association for the Advancement of Science), c) as photodetectors to give feedbacks to control the light intensity of the LEDs, to study animals' behavior when light stimulates gene-modified nerves (Reproduced with permission. [18] Copyright 2013, American Association for the Advancement of Science), d) as an artificial vision receptor in the bimodal artificial sensory neuron that can control the movement of a robot hand more precisely than a unimodal one. (Reproduced with permission. [19] Copyright 2020, Springer Nature), and e) as biosignal sensors to continuously monitor photoplethysmogram (PPG) signals from human body in real time (Reproduced with permission. [21] Copyright 2018, National Academy of Sciences).
www.advancedsciencenews.com www.advintellsyst.com of oriented gratings. The restored vision, together with other sensations such as audition and tactility, improved the outdoor mobility of patients, enabling them walk along pedestrian courses and identification of doors and posts. Another interesting application is the implantable optoelectronic system, which was developed to accelerate basic scientific discoveries in the field of optogenetics and their translation into clinical technologies ( Figure 1c). [18] In this typical system, the temperature sensor and inorganic photodiode (PD, as the artificial visual electronic device) can provide information on the heat generation and light intensity, which are treated as two different signals. These two signals were directly related to the irradiation intensity of the micro light-emitting diode (LED), which was utilized to stimulate the gene-modified nerves of the mouse to control its leg movement. Altering light stimulation based on feedback from the temperature sensor and the PD has great potential for a closed-loop operation and precise control of the stimulation without any side effects caused by the temperature increase. Closed-loop sensation/action systems utilizing artificial visual system also benefit other applications, such as precisely decoding human gestures to control a robot hand (Figure 1d), [19,20] and long-term human health monitoring systems ( Figure 1e). [21] 3
. Artificial Visual Electronics
The human eye is the most delicate natural optical instrument which endows us the capability to perceive the wonderful world of light. Inspired by the superior performance of the human eye, the biomimetic design emerges as an attractive strategy for developing artificial visual electronics. To exactly mimic human eyes, we need to first know the anatomic structure and understand how they detect signals. As shown in Figure 2a, [22] the eye has a spherical shape, where the lens locates at the front hemisphere to focus light that passes through and the retina locates at the back hemisphere to detect the information that light brings. [23] The detected light signals are transformed into electrical potentials, which then are delivered to the brain by the nerves. As for the uniqueness of the structure of human eyes, the hemisphere where the photoreceptors on the retina locate can benefit to significantly reduce the complexity of optical systems. With the 3D shape, human eyes can directly compensate for the aberration from the curved focal plane, whereas no commercial 2D imagers can do. [24] In addition, the entire retina of an adult human contains 5-7 million cones on an area of about 2.7 cm 2 (pixel resolution of 38.6-54.0 μm 2 ), which are responsive to light with a wavelength ranging from 380 to 760 nm. [25,26] Due to the novel imaging system, human eyes can achieve exceptional characteristics, including a wide field of view (FOV) of 150-160 , a high resolution of 1 arcmin per line pair at the fovea, dynamic and fast imaging (30-60 Hz), and capability to distinguish different colors. Human eye-inspired artificial visual electronics on the plane substrates have already been developed and commercialized. However, artificial visual electronics on a 3D layout configuration is still in the infant stage. Recently, Gu et al. reported an advanced 3D artificial visual electronic device, fully mimicking the structure of the human eyes. [22] In Figure 2b,c, the perovskite nanowires locating on a hemisphere serve as the light-detecting working electrodes, mimicking the retina, and the tungsten (W) film on an aluminum (Al) hemispherical shell worked as the counter electrode. The ionic liquid was used to fill in the cavity between the above two hemisphere shells, serving as the electrolyte that mimics the vitreous humor in the human eye. The flexible eutectic gallium indium liquid-metal wires in soft rubber tubes were used for signal transmission, mimicking the nerves connected to the brain. Due to the novel design, compared with the conventional 2D image systems, this artificial visual electronic device with a 3D hemispherical configuration ensured a more consistent distance between pixels and lens, and achieved a wider FOV and better focusing onto each pixel (Figure 2d,e). Even though the pixel resolution was about 200 μm 2 by utilizing microneedle contacts with 2 mm distance between microneedles, this artificial visual electronic device has a promising potential to achieve high imaging resolution (0.22 μm 2 ) when individual nanowires are electrically addressed. However, it remains challenging to achieve an individual connection between the nanoscale structure and the microscale wire on a 3D hemisphere shell without decreasing the resolution. More works in the future need to be done to develop such artificial visual electronics with high resolution, fully mimicking the structure and function of the eyes.
Retina-like Imagers
In the sophisticated structure of the eyes, the most critical part is the light-detecting retina that locates on the hemisphere. Therefore, to mimic the function and structure of the human eyes, an easier way is to fabricate the retina-like imagers, which is the most critical part to build the advanced 3D artificial visual electronics. Conventional inorganic imagers are constructed on the 2D rigid substrates, which cannot deform to adjust themselves into a retina-like 3D shape due to the brittle nature of these rigid materials. Because the fabrication for the inorganic imagers is very complicated, it is also extremely difficult to directly fabricate the devices on a retina-like 3D hemispherical shell. Until now, only several groups reported the fabrication of the retinalike 3D imagers using structure engineering, a commonly utilized approach to minimize the strain on the devices under mechanical deformations. As for the fabrication procedures, imagers are first constructed on the 2D planar substrates, followed by deforming them into a retina-like 3D shape. The important and challenging task is to minimize the strain on the rigid devices during the deforming process. Ko et al. reported the first retina-like 3D imager by connecting each light-detecting pixel in the arrays of devices with compressive metal connections (Figure 3a). [27] After fabricating the imager on the 2D substrate, they transferred the imager onto a retina-like 3D hemispherical shell. During the transfer process, the mechanical strains were released by forming a buckled structure on the metal connections whereas the light-detecting devices in the imager did not deform and thus kept functionality. As-fabricated retinalike 3D imagers were consisted of 16-by-16 light-detecting pixels with a resolution of 860 Â 860 μm 2 and a curvature radius of %10 mm. Song et al. [28] and Kim et al. [29] further improved the maximum strain tolerance on the metal connection by utilizing a serpentine structure (Figure 3b,c). With a higher strain tolerance, it increased the density (270 Â 190 μm 2 for the study by Other methods were then developed to significantly minimize the tensile/compressive strains that happened to the metal connections. Inspired by the traditional origami art, Zhang et al. assembled five pieces of flexible light-detecting imagers onto a retina-like 3D substrate (Figure 3d). [30] Because all the lightdetecting pixels in the imagers just experienced a bending deformation, there were very small strains exerted on both the connections and light-detecting devices. Utilizing this strategy, it is possible to greatly improve the density and number of the light-detecting pixels in the retina-like 3D imager since no additional design is needed on the metal connections. However, the accurate resolution and number of pixels were not reported.
Another method to attach the flexible 2D light-detecting imager onto a retina-like 3D substrate was achieved by significantly reducing the thickness of the device. Wu et al. reported an ultrathin and conformable perovskite-based light-detecting imager with a total thickness of 2.4 μm (Figure 3e). [31] Combined with the vacuum-assisted drop-casting patterning process, they were able to reduce the size of the light-detecting pixel as small as 50 Â 50 μm 2 . Due to the ultrathin thickness, the ultraflexible imager can be conformally wrapped around a walnut that had a similar shape to the eyeball. However, in the arrays of the Figure 2. Artificial visual electronics, mimicking human eyes. a) Schematic showing the structure of the human eye, which is the organ for human to detect light signals and then transfer the potential to the nerves and the brain. b) The detailed structure of the hemispherical biomimetic electrochemical eye (EC-EYE). c) The optical image of the of the EC-EYE. d) The schematic FOV of the planar and hemispherical image-sensing systems. e) The reconstructed image (letter "A") of EC-EYE and its projection on a flat plane. a-e) Reproduced with permission. [22] Copyright 2020, Springer Nature.
www.advancedsciencenews.com www.advintellsyst.com device, cross talk could happen because there was no switching device in the circuit. In addition, for both of the aforementioned two approaches, since the imagers were not stretched or compressed on the 3D surface, there was inevitably space between the retina-like substrate and part of the devices in the imagers. [32] However, how this space issue will influence the image quality recorded by these retina-like imagers has never been discussed.
Future Perspectives
For the fabrication of the retina-like 3D imager, compared with the structure engineering, fabricating an imager with intrinsic stretchability can be another approach. Although the intrinsically stretchable organic field-effect transistors (OFETs) and circuits have been achieved recently, [33] there has not been any group reporting that such circuits can be combined with light-detecting devices, but it is possible when considering the fast development of intrinsically stretchable OFETs. Another way is to directly fabricate the light-detecting imagers on the retina-like 3D substrates by 3D/4D printing techniques. [34][35][36] In addition, compared with the human retina, all reported retina-like 3D imagers showed a lower resolution and fewer pixels and they did not show the capability to distinguish different colors, which means that improving these parameters will be important tasks in the future. In addition, there is a demand on how to use the signals detected by artificial visual electronics to guide the action of the closed-loop sensation/action systems. Recently, Wang et al. reported the fusion of multisensory and a machine-learning method for the closed-loop sensation/action systems in Figure 4. [20] For the fusion of multisensory (Figure 4a), they integrated visual data captured by a camera (Figure 4b) with somatosensory data from skin-like stretchable strain sensors. By utilizing the machine-learning method, they were able to recognize the human gestures with an accuracy of 100% even in nonideal conditions where images are noisy and under or overexposed (Figure 4c). The detected gestures were then utilized to guide a robot by assigning motor commands to different gestures. Due to the machine-learning method, they guided the robot pass through the labyrinth with zero error whereas there were six errors without using the machine-learning method (Figure 4d,e). In addition, different machine-learning methods imager, which is realized by an origami approach. e) Optical image (left) and structure illustration of a hemispherical imager, which can be seamlessly attached onto a walnut that has a similar shape with an eyeball. a) Reproduced with permission. [27] Copyright 2008, Springer Nature. b) Reproduced with permission. [29] Copyright 2020, Springer Nature. c) Reproduced with permission. [28] Copyright 2013, Springer Nature. d) Reproduced with permission. [30] Copyright 2017, Springer Nature. e) Reproduced with permission. [31] Copy right 2021, Wiley-VCH.
www.advancedsciencenews.com www.advintellsyst.com have been developed for the processing of visual data via various perception and reasoning algorithms, including support vector machine, [37] K-nearest-neighbor, [38] convolutional neural networks, [39] and artificial neural network. [20] Therefore, it can be expected that combining the machine-learning method with the artificial visual electronics can further benefit the closed-loop sensation/action systems in the future.
Conclusion
In this perspective, we introduced the significance of advanced artificial visual electronics for the closed-loop sensation/action systems. To fully mimic the human eyes, artificial visual electronics with a 3D retina-like configuration was highlighted.
Furthermore, we summarized various reported methods to construct 3D retina-like imagers, and proposed other promising fabrication techniques in the future for the application of such imagers in the closed-loop sensation/action systems. Finally, we proposed to combine the machine-learning method and artificial visual electronics in the closed-loop sensation/action systems. showing the closed-loop sensation/action system consisting of a data acquisition unit-somatosensory, a camera for capturing visual images, a computer, a wireless data transmission module, and a quadruped robot. c) Pictures of ten categories (I to X) of hand gestures that were assigned with specific motor commands for the movement guidance of the quadruped robot. FM, forward move; BM, back move. d) Scenarios of the robot walking through the labyrinth based on visual-based recognition and e) bioinspired somatosensory-visual-associated learning recognition. Reproduced with permission. [20] Copyright 2020, Springer Nature.
www.advancedsciencenews.com www.advintellsyst.com Xiaodong Chen is the president's chair professor of materials science and engineering, professor of chemistry and medicine (by courtesy) at the Nanyang Technological University (NTU), Singapore. He serves as director of Innovative Center for Flexible Devices (iFLEX) and director of Max Planck-NTU Joint Lab for Artificial Senses at NTU. His research interests include mechano-materials and devices, the integrated nano-bio interface, and cyber-human interfaces. | 4,361 | 2021-06-29T00:00:00.000 | [
"Art",
"Engineering"
] |